CN113080964A - Autism data processing method and device based on intervention robot - Google Patents

Autism data processing method and device based on intervention robot

Info

Publication number
CN113080964A
CN113080964A (application CN202110271584.6A)
Authority
CN
China
Prior art keywords
scene
value
preset
verification
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110271584.6A
Other languages
Chinese (zh)
Inventor
张海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Qilu Health Technology Co ltd
Original Assignee
Guangzhou Qilu Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Qilu Health Technology Co ltd filed Critical Guangzhou Qilu Health Technology Co ltd
Priority to CN202110271584.6A
Publication of CN113080964A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Fuzzy Systems (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an autism data processing method and device based on an intervention robot. The method is applied to the intervention robot and comprises the following steps: acquiring a scene image of the current scene and performing scene verification on the scene image; when the scene verification is passed, recording the behavior actions executed by the child to be detected within a preset time interval and calculating the corresponding action characteristic values of those actions; and generating a classification result according to the action characteristic values. The invention prevents environment-induced abnormal behavior in the child from affecting the diagnosis result; the whole detection and diagnosis process is short, so diagnosis efficiency can be effectively improved; and because detection is based on the child's real-time reactions, diagnosis accuracy can also be effectively improved.

Description

Autism data processing method and device based on intervention robot
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an autism data processing method and device based on an intervention robot.
Background
Autism, also known as autistic disorder, is the representative disease of the pervasive developmental disorders (PDD). DSM-IV-TR classifies PDD into 5 types: autistic disorder, Rett syndrome, childhood disintegrative disorder, Asperger's syndrome, and PDD not otherwise specified.
Autism patients achieve good corrective outcomes only with early discovery, early intervention and early treatment. To diagnose children as early as possible, the common current approach is for a doctor to conduct clinical observation and tests on the child and make a judgment based on the test results and the child's clinical behavior.
However, this conventional detection and analysis approach has the following problems. Doctors must handle many cases, so processing efficiency is low. Different children adapt differently to different settings, and the clinical environment easily makes a child anxious, producing abnormal behavior that distorts the diagnosis result. Young children are also prone to resisting or fearing strangers, making it difficult for them to cooperate with the doctor during detection and diagnosis, which lowers diagnosis efficiency and, in turn, diagnosis accuracy.
Disclosure of Invention
The invention provides an autism data processing method and device based on an intervention robot. The method uses a child's curiosity about the robot to draw the child into playing with it, and classifies the child according to the current environment and the child's behavior during play, so that doctors can use the result in diagnosis and improve their diagnostic efficiency.
An embodiment of the invention provides an autism data processing method based on an intervention robot; the method is applied to the intervention robot and comprises the following steps:
acquiring a scene image of a current scene, and performing scene verification on the scene image;
when the scene verification is passed, recording behavior actions executed by the child to be detected within a preset time interval, and calculating the corresponding action characteristic values of the behavior actions;
and generating a classification result according to the action characteristic value.
In a possible implementation manner of the first aspect, the performing scene verification on the scene image specifically includes:
extracting scene features from the scene image;
matching the scene characteristics with a preset scene library to obtain a scene matching value;
judging whether the scene matching value is larger than a preset matching value or not;
if the scene matching value is larger than a preset matching value, determining that the scene verification is passed;
and if the scene matching value is smaller than a preset matching value, determining that the scene verification is not passed.
In a possible implementation manner of the first aspect, the calculating of the corresponding action characteristic values of the behavior actions includes:
the behavior actions comprise limb actions;
Acquiring an action form corresponding to the limb action;
collecting a plurality of bone joint point coordinates from the motion form;
and calculating the bone coordinate values corresponding to the coordinates of the plurality of bone joint points, and taking the bone coordinate values as action characteristic values.
In a possible implementation manner of the first aspect, the calculating of the corresponding action characteristic values of the behavior actions includes:
the behavior actions comprise eye movements;
acquiring the eye coordinates corresponding to the eye movements at two time nodes, respectively, to obtain two eye coordinates;
and calculating the coordinate difference value of the two eye coordinates, and taking the coordinate difference value as the action characteristic value.
In a possible implementation manner of the first aspect, the generating a classification result according to the action feature value includes:
judging whether the action characteristic value is larger than a preset characteristic value or not;
if the action characteristic value is larger than a preset characteristic value, generating a normal classification result;
and if the action characteristic value is smaller than a preset characteristic value, generating an abnormal classification result.
A second aspect of the embodiments of the present invention provides an autism data processing apparatus based on an intervention robot, the apparatus being adapted for use with an intervention robot, the apparatus comprising:
the verification module is used for acquiring a scene image of a current scene and performing scene verification on the scene image;
the computing module is used for recording the behavior actions executed by the child to be detected within a preset time interval when the scene verification is passed, and computing the corresponding action characteristic values of the behavior actions;
and the detection module is used for generating a classification result according to the action characteristic value.
In a possible implementation manner of the second aspect, the verification module is further configured to:
extracting scene features from the scene image;
matching the scene characteristics with a preset scene library to obtain a scene matching value;
judging whether the scene matching value is larger than a preset matching value or not;
if the scene matching value is larger than a preset matching value, determining that the scene verification is passed;
and if the scene matching value is smaller than a preset matching value, determining that the scene verification is not passed.
In a possible implementation manner of the second aspect, the calculation module is further configured to:
the behavior actions comprise limb actions;
Acquiring an action form corresponding to the limb action;
collecting a plurality of bone joint point coordinates from the motion form;
and calculating the bone coordinate values corresponding to the coordinates of the plurality of bone joint points, and taking the bone coordinate values as action characteristic values.
In a possible implementation manner of the second aspect, the calculation module is further configured to:
the behavior actions comprise eye movements;
acquiring the eye coordinates corresponding to the eye movements at two time nodes, respectively, to obtain two eye coordinates;
and calculating the coordinate difference value of the two eye coordinates, and taking the coordinate difference value as the action characteristic value.
Compared with the prior art, the autism data processing method and device based on the intervention robot provided by the embodiments of the invention have the following beneficial effects: the invention detects the child's current scene and determines whether it is suitable for detection and classification; when it is, the invention records the child's behavior actions and finally generates a classification result from them. This avoids both the low diagnosis efficiency caused by doctors handling many cases and the environment-induced abnormal behavior that distorts diagnosis results. The whole detection and classification process is short, so data processing efficiency is effectively improved; because classification is based on the child's real-time reactions, classification accuracy is also effectively improved; and the classification result can finally be sent to the treating doctor for reference.
Drawings
FIG. 1 is a schematic flow chart of an autism data processing method based on an intervention robot according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an intervention robot according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an autism data processing device based on an intervention robot according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The currently common detection and analysis approach has the following problems. Doctors must handle many cases, so processing efficiency is low. Different children adapt differently to different settings, and the clinical environment easily makes a child anxious, producing abnormal behavior that distorts the diagnosis result. Young children are also prone to resisting or fearing strangers, making it difficult for them to cooperate with the doctor during detection and diagnosis, which lowers diagnosis efficiency and, in turn, diagnosis accuracy.
To solve the above problems, the autism data processing method based on an intervention robot provided by the embodiments of the present application is described and explained in detail through the following specific embodiments.
Referring to FIG. 1, a schematic flow chart of an autism data processing method based on an intervention robot according to an embodiment of the present invention is shown.
Referring to FIG. 2, a schematic structural diagram of an intervention robot according to an embodiment of the present invention is shown. In this embodiment, the autism data processing method may be applied to an intervention robot provided with a robot body 1 and a training table 2. A child may sit at the training table 2 and actively communicate with the robot body 1. When interacting with the child, the robot body 1 can collect a scene image of the current scene, record the child's various behaviors, and perform detection and classification based on the scene image and the child's behavior data.
By way of example, the method may include:
s11, acquiring a scene image of the current scene, and carrying out scene verification on the scene image.
In this embodiment, the surface of the intervention robot may be provided with one or more cameras, through which the scene images are collected.
Specifically, scene images of the current scene may be acquired from different directions. For example, the robot may rotate in place through one full circle and capture 10 images during the rotation, with the capture angles spaced 36 degrees apart. Alternatively, it may rotate in place through one circle and then capture a panoramic image, or capture images at several specific angles.
After the scene image is acquired, scene verification can be performed on it to determine whether the current scene or environment is suitable for detection. Scene verification eliminates scene factors that would introduce detection errors and improves detection accuracy.
In order to improve the efficiency of the scene verification and make the scene verification more suitable for the actual usage requirement, the step S11 may include the following sub-steps:
and a substep S111 of extracting scene features from the scene image.
In actual operation, N random frames with random sizes and random positions can be randomly generated in the scene image.
Random size means that the size of each random frame is random. Optionally, the shape of a random frame may be any of various shapes such as a rectangle, a square or a circle; for example, if the random frames are rectangles, N random frames with random length-width ratios may be generated, and if they are circles, N random frames with random radii may be generated. Optionally, a size threshold may be preset so that all N randomly generated frames are smaller than the threshold, preventing a frame from becoming so large that the local features of the scene image cannot be accurately extracted.
Random position means that the position of each random frame within the scene image is random, and overlapping image areas may exist between frames. Generating frames at random positions collects image areas at many different locations, ensuring the richness of the extracted local images.
The image area contained in each generated random frame is identified and its image features are extracted. Since each frame's image area is a local part of the scene image, the image features of each frame can serve as local features of the scene image, yielding N local features. Optionally, the extracted features may include, but are not limited to, edge features, color features and texture features, and they may be extracted in various ways, for example using color histograms, texture descriptors and the like, or using a neural network model (e.g., a CNN or SAE); this is not limited here. Randomly generated frames make it possible to extract effective local features even from complex scenes that lack definite fixed features.
Finally, the N local features of the scene image are fused to obtain the scene feature. The fusion may use a parallel strategy, a serial strategy, or the like.
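As a concrete illustration of sub-step S111, the following Python sketch generates N random frames and fuses their local features with a serial (concatenation) strategy. It is a minimal sketch, assuming OpenCV color histograms as the per-frame feature; the frame count n_boxes, the size cap max_frac and the histogram bin counts are hypothetical choices, not values fixed by this disclosure.

```python
import random

import cv2
import numpy as np


def extract_scene_feature(image: np.ndarray, n_boxes: int = 16,
                          max_frac: float = 0.4) -> np.ndarray:
    """image: H x W x 3 BGR uint8 scene image; returns the fused scene feature."""
    h, w = image.shape[:2]
    local_feats = []
    for _ in range(n_boxes):
        # Random size, capped by a size threshold so no frame covers too
        # much of the image (rectangle case: random length-width ratio).
        bw = random.randint(8, max(9, int(w * max_frac)))
        bh = random.randint(8, max(9, int(h * max_frac)))
        # Random position; overlap between frames is allowed.
        x = random.randint(0, w - bw)
        y = random.randint(0, h - bh)
        patch = image[y:y + bh, x:x + bw]
        # Color histogram as the local feature (edge or texture features
        # could be substituted).
        hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256]).flatten()
        local_feats.append(hist / (hist.sum() + 1e-8))
    # Serial-strategy fusion: concatenate the N local features into one
    # scene feature vector (a parallel strategy might average them instead).
    return np.concatenate(local_feats)
```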
And a substep S112, matching the scene characteristics with a preset scene library to obtain a scene matching value.
The obtained scene feature is matched against a plurality of preset features in a preset scene library, where each preset feature corresponds to a score value; the score value of the matched preset feature is taken as the scene matching value. If no preset feature matches the scene feature, the scene matching value corresponding to the scene feature is 0.
For example, suppose there are 3 preset features, hospital, school and playground, with score values of 1 point, 2 points and 3 points, respectively. If the scene feature matches the playground feature, the scene matching value corresponding to the scene feature is 3 points.
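A minimal sketch of the matching in sub-step S112 follows. The cosine-similarity rule and its threshold sim_threshold are assumptions added for illustration; the disclosure only requires that each preset feature carry a score value and that an unmatched scene feature score 0.

```python
import numpy as np


def scene_match_value(scene_feat, library, sim_threshold=0.8):
    """library: (preset_feature, score) pairs, e.g. hospital=1, school=2,
    playground=3 as in the example above."""
    best = 0  # no matching preset feature -> scene matching value is 0
    for preset_feat, score in library:
        sim = float(np.dot(scene_feat, preset_feat) /
                    (np.linalg.norm(scene_feat) * np.linalg.norm(preset_feat) + 1e-8))
        if sim >= sim_threshold:
            best = max(best, score)  # score value of the matched preset feature
    return best
```

Sub-steps S113 to S115 then reduce to comparing the returned value against the preset matching value.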
And a substep S113 of judging whether the scene matching value is greater than a preset matching value.
And a substep S114, if the scene matching value is larger than a preset matching value, determining that the scene verification is passed.
And a substep S115, if the scene matching value is smaller than a preset matching value, determining that the scene verification is not passed.
Whether the scene matching value is larger than the preset matching value is judged. If it is larger, it can be determined that the scene verification is passed and the scene is suitable for detection; if it is smaller, it can be determined that the scene verification fails and the scene is not suitable for detection.
By performing scene verification, whether the current detection scene or environment is suitable for detection can be determined, avoiding scene-related influence on the child's performance and the resulting loss of test accuracy.
In an alternative embodiment, if it is determined that the scene verification fails, the detection may be stopped.
And S12, when the scene verification is passed, recording the behavior actions executed by the child to be detected within a preset time interval, and calculating the corresponding action characteristic values of the behavior actions.
When the scene verification is passed, it can be determined that the current scene is suitable for detection. The intervention robot then records the child's actions over a period of time, and determines whether the child is an autism patient from the actions executed within the preset time interval.
In one embodiment, upon determining that the scene verification is passed, the intervention robot may perform a predetermined guiding operation, such as playing music or video, or controlling the robot arm to perform various actions for the child to watch. The guiding operation serves first to attract the child and second to allow the child's reactions and actions after watching it to be recorded.
In one of the alternative embodiments, the behavior actions performed by the child may include limb actions. As an example, step S12 may include the following sub-steps:
and a substep S121 of acquiring a motion form corresponding to the limb motion.
In actual operation, the intervention robot may be provided with a camera and a sensor, through which motion images of the child are acquired. It is then determined whether the skeleton joint point coordinate difference between the current frame and the previous frame of motion images is zero; if it is not zero, the skeleton form of that frame of motion image is taken as the motion form.
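The frame-difference trigger of sub-step S121 might look like the sketch below; the (20, 3) joint array layout and the small noise tolerance standing in for an exact zero test are assumptions.

```python
import numpy as np


def detect_motion_form(prev_joints: np.ndarray, cur_joints: np.ndarray,
                       tol: float = 1e-3):
    """prev_joints, cur_joints: (20, 3) skeleton joint coordinates of two
    consecutive frames of motion images."""
    # A non-zero coordinate difference between frames means the child moved,
    # so this frame's skeleton form is taken as the motion form.
    if np.abs(cur_joints - prev_joints).max() > tol:
        return cur_joints
    return None  # no movement between the two frames
```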
And a substep S122 of collecting a plurality of bone joint point coordinates from the motion form.
In a specific implementation, the coordinates of each joint point of the bone form can be collected to obtain the coordinates of a plurality of bone joint points.
Specifically, to improve acquisition efficiency, with a camera frame rate of 30 f/s, the 5th, 10th, 15th, 20th and 25th frames are extracted from each second of data; the skeleton joint coordinates are collected over a 10-second interval, extracting 20 human skeleton joint coordinates within those 10 seconds. The skeleton joints may include the left ankle, right ankle, left elbow, right elbow, left foot, right foot, left hand, right hand, head, hip center, left hip, right hip, left knee, right knee, shoulder center, left shoulder, right shoulder, mid-spine, left wrist and right wrist. The limb actions considered in this embodiment may include walking, standing, kicking, squatting, falling and the like.
And a substep S123 of calculating a bone coordinate value corresponding to the coordinates of the plurality of bone joint points, and taking the bone coordinate value as an action characteristic value.
After the coordinates of the plurality of bone joint points are collected, the bone coordinate value is calculated from them. Specifically, the absolute values of the bone joint point coordinates can be calculated and aggregated to obtain the action characteristic value.
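Sub-steps S122 and S123 can be sketched as below: sample frames 5, 10, 15, 20 and 25 from each second of 30 f/s data over the 10-second window, then reduce the 20 joint coordinates to a single action characteristic value. Summing the absolute coordinate values is one reading of the aggregation; the disclosure does not pin down the exact formula.

```python
import numpy as np

FPS, WINDOW_S, N_JOINTS = 30, 10, 20
SAMPLED = (5, 10, 15, 20, 25)  # frame indices within each one-second block


def bone_feature_value(frames: np.ndarray) -> float:
    """frames: (WINDOW_S * FPS, N_JOINTS, 3) skeleton joint coordinates."""
    sampled = np.stack([frames[s * FPS + f]
                        for s in range(WINDOW_S) for f in SAMPLED])
    # Bone coordinate value: aggregate of the absolute joint coordinates,
    # used directly as the action characteristic value.
    return float(np.abs(sampled).sum())
```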
In order to also detect children who move little, in an alternative embodiment the behavior actions include eye movements. As an example, step S12 may further include the following sub-steps:
and a substep S124 of obtaining eye coordinates corresponding to the two eye movements respectively to obtain the two eye coordinates.
Specifically, two frames of binocular images can be acquired by the camera at two preset time nodes, and the eye coordinates corresponding to each binocular image are obtained respectively.
In order to improve acquisition accuracy, the two preset time nodes are, respectively, when the robot approaches the child and when it performs the guiding operation.
And a substep S125 of calculating the coordinate difference value of the two eye coordinates, and taking the coordinate difference value as the action characteristic value.
The two eye coordinates can be subtracted to obtain their coordinate difference value, which is used as the action characteristic value.
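Sub-steps S124 and S125 reduce to a coordinate subtraction; collapsing the (x, y) difference into a single magnitude is an assumption added here so the result can be compared against one preset value in step S13.

```python
import numpy as np


def eye_feature_value(eyes_t1: np.ndarray, eyes_t2: np.ndarray) -> float:
    """eyes_t1, eyes_t2: (x, y) eye coordinates captured at the two preset
    time nodes."""
    diff = eyes_t2 - eyes_t1            # subtract the two eye coordinates
    return float(np.linalg.norm(diff))  # magnitude of the gaze shift
```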
And S13, generating a classification result according to the action characteristic value.
In this embodiment, the action characteristic values may be used to determine whether the child is ill, and a classification result is generated at the same time for use in subsequent treatment or further diagnosis.
In order to improve the accuracy of the detection, the step S13 may include the following sub-steps, as an example:
and a substep S131 of judging whether the action characteristic value is larger than a predetermined characteristic value.
And a substep S132 of generating a normal classification result if the motion feature value is greater than a predetermined feature value.
And a substep S133, if the motion feature value is smaller than a predetermined feature value, generating an abnormality classification result.
In a specific implementation, if a child suffers from autism, the guiding operation performed by the robot cannot attract the child's attention, and the child continues with his or her own actions; if the child is not ill, the child's nature and curiosity will lead him or her to move about and shift his or her gaze when the robot approaches. Specifically, skeletal change can be judged from the bone coordinate value, and whether the eyes are following can be judged from the eye coordinate difference value.
Specifically, it may be determined whether the coordinate difference is greater than a preset eye coordinate value, and if the coordinate difference is greater than the preset eye coordinate value, a normal classification result is generated, otherwise, an abnormal classification result is generated. Similarly, it can also be determined whether the bone coordinate value is greater than a preset bone coordinate value, if so, a normal classification result is generated, otherwise, an abnormal classification result is generated.
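Putting the thresholds together, step S13 amounts to the comparison sketched below; the preset values are placeholders that would be calibrated in practice.

```python
def classify(feature_value: float, preset_value: float) -> str:
    # A large movement or gaze shift indicates a typical reaction to the
    # guiding operation; a small one yields the abnormal classification result.
    return "normal" if feature_value > preset_value else "abnormal"

# e.g. classify(bone_feature_value(frames), preset_bone_value)
#      classify(eye_feature_value(e1, e2), preset_eye_value)
```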
Finally, the classification result can be sent to the treating doctor for reference, reducing the doctor's diagnostic workload, shortening diagnosis time, and improving the doctor's work efficiency.
In summary, the embodiment of the present invention provides an autism data processing method based on an intervention robot with the following beneficial effects: the invention detects the child's current scene and determines whether it is suitable for detection and classification; when it is, the invention records the child's behavior actions and finally generates a classification result from them. This avoids both the low diagnosis efficiency caused by doctors handling many cases and the environment-induced abnormal behavior that distorts diagnosis results. The whole detection and classification process is short, so data processing efficiency is effectively improved; because classification is based on the child's real-time reactions, classification accuracy is also effectively improved; and the classification result can finally be sent to the treating doctor for reference.
An embodiment of the present invention further provides an autism data processing apparatus based on an intervention robot. Referring to FIG. 3, a schematic structural diagram of the autism data processing apparatus based on an intervention robot according to an embodiment of the present invention is shown.
As an example, the apparatus is adapted for use with an intervention robot, and the autism data processing apparatus based on the intervention robot may comprise:
a verification module 301, configured to obtain a scene image of a current scene, and perform scene verification on the scene image;
the calculating module 302 is configured to record the behavior actions executed by the child to be detected within a preset time interval when the scene verification is passed, and calculate the corresponding action characteristic values of the behavior actions;
and the detection module 303 is configured to generate a classification result according to the action characteristic value.
Further, the verification module is further configured to:
extracting scene features from the scene image;
matching the scene characteristics with a preset scene library to obtain a scene matching value;
judging whether the scene matching value is larger than a preset matching value or not;
if the scene matching value is larger than a preset matching value, determining that the scene verification is passed;
and if the scene matching value is smaller than a preset matching value, determining that the scene verification is not passed.
Further, the calculation module is further configured to:
the behavior actions comprise limb actions;
Acquiring an action form corresponding to the limb action;
collecting a plurality of bone joint point coordinates from the motion form;
and calculating the bone coordinate values corresponding to the coordinates of the plurality of bone joint points, and taking the bone coordinate values as action characteristic values.
Further, the calculation module is further configured to:
the behavior actions comprise eye movements;
acquiring the eye coordinates corresponding to the eye movements at two time nodes, respectively, to obtain two eye coordinates;
and calculating the coordinate difference value of the two eye coordinates, and taking the coordinate difference value as the action characteristic value.
Further, the detection module may be further configured to:
judging whether the action characteristic value is larger than a preset characteristic value or not;
if the action characteristic value is larger than a preset characteristic value, generating a normal classification result;
and if the action characteristic value is smaller than a preset characteristic value, generating an abnormal classification result.
Further, an embodiment of the present application also provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the autism data processing method based on the intervention robot according to the above embodiments.
Further, an embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the autism data processing method based on the intervention robot according to the above embodiments.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. An autism data processing method based on an intervention robot, wherein the method is applied to the intervention robot, and the method comprises the following steps:
acquiring a scene image of a current scene, and performing scene verification on the scene image;
when the scene verification is passed, recording behavior actions executed by the child to be detected within a preset time interval, and calculating the corresponding action characteristic values of the behavior actions;
and generating a classification result according to the action characteristic value.
2. The autism data processing method based on the intervention robot according to claim 1, wherein the performing scene verification on the scene image specifically comprises:
extracting scene features from the scene image;
matching the scene characteristics with a preset scene library to obtain a scene matching value;
judging whether the scene matching value is larger than a preset matching value or not;
if the scene matching value is larger than a preset matching value, determining that the scene verification is passed;
and if the scene matching value is smaller than a preset matching value, determining that the scene verification is not passed.
3. The autism data processing method based on the intervention robot according to claim 1, wherein the calculating of the corresponding action characteristic values of the behavior actions comprises:
the behavior actions comprise limb actions;
Acquiring an action form corresponding to the limb action;
collecting a plurality of bone joint point coordinates from the motion form;
and calculating the bone coordinate values corresponding to the coordinates of the plurality of bone joint points, and taking the bone coordinate values as action characteristic values.
4. The autism data processing method based on the intervention robot according to claim 1, wherein the calculating of the corresponding action characteristic values of the behavior actions comprises:
the behavior actions comprise eye movements;
acquiring the eye coordinates corresponding to the eye movements at two time nodes, respectively, to obtain two eye coordinates;
and calculating the coordinate difference value of the two eye coordinates, and taking the coordinate difference value as the action characteristic value.
5. The autism data processing method based on the intervention robot according to claim 1, wherein the generating of a classification result from the action characteristic value comprises:
judging whether the action characteristic value is larger than a preset characteristic value or not;
if the action characteristic value is larger than a preset characteristic value, generating a normal classification result;
and if the action characteristic value is smaller than a preset characteristic value, generating an abnormal classification result.
6. An autism data processing apparatus based on an intervention robot, the apparatus being adapted for use with an intervention robot, the apparatus comprising:
the verification module is used for acquiring a scene image of a current scene and performing scene verification on the scene image;
the computing module is used for recording the behavior actions executed by the child to be detected within a preset time interval when the scene verification is passed, and computing the corresponding action characteristic values of the behavior actions;
and the detection module is used for generating a classification result according to the action characteristic value.
7. The autism data processing apparatus based on the intervention robot according to claim 6, wherein the verification module is further configured to:
extracting scene features from the scene image;
matching the scene characteristics with a preset scene library to obtain a scene matching value;
judging whether the scene matching value is larger than a preset matching value or not;
if the scene matching value is larger than a preset matching value, determining that the scene verification is passed;
and if the scene matching value is smaller than a preset matching value, determining that the scene verification is not passed.
8. The autism data processing apparatus based on the intervention robot according to claim 6, wherein the computing module is further configured to:
the behavior actions comprise limb actions;
Acquiring an action form corresponding to the limb action;
collecting a plurality of bone joint point coordinates from the motion form;
and calculating the bone coordinate values corresponding to the coordinates of the plurality of bone joint points, and taking the bone coordinate values as action characteristic values.
9. The autism data processing apparatus based on the intervention robot according to claim 6, wherein the computing module is further configured to:
the behavior actions comprise eye movements;
acquiring the eye coordinates corresponding to the eye movements at two time nodes, respectively, to obtain two eye coordinates;
and calculating the coordinate difference value of the two eye coordinates, and taking the coordinate difference value as the action characteristic value.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the autism data processing method based on an intervention robot of any one of claims 1-5.
CN202110271584.6A 2021-03-12 2021-03-12 Autism data processing method and device based on intervention robot Pending CN113080964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110271584.6A CN113080964A (en) 2021-03-12 2021-03-12 Autism data processing method and device based on intervention robot


Publications (1)

Publication Number Publication Date
CN113080964A (en) 2021-07-09

Family

ID=76667207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110271584.6A Pending CN113080964A (en) 2021-03-12 2021-03-12 Self-closing data processing method and device based on intervention robot

Country Status (1)

Country Link
CN (1) CN113080964A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170027805A1 (en) * 2013-03-15 2017-02-02 John C. Simmons Vision-Based Diagnosis and Treatment
CN109431522A (en) * 2018-10-19 2019-03-08 昆山杜克大学 Autism early screening device based on name reaction normal form
CN109620259A (en) * 2018-12-04 2019-04-16 北京大学 Based on eye movement technique and machine learning to the system of autism children's automatic identification
CN109717878A (en) * 2018-12-28 2019-05-07 上海交通大学 A kind of detection system and application method paying attention to diagnosing normal form jointly for autism
WO2019147955A1 (en) * 2018-01-25 2019-08-01 The Children's Hospital Of Philadelphia Biometric sensor device for digital quantitative phenotyping
CN111326253A (en) * 2018-12-14 2020-06-23 深圳先进技术研究院 Method for evaluating multi-modal emotional cognitive ability of patients with autism spectrum disorder
US20200345290A1 (en) * 2019-05-04 2020-11-05 Vidar Vignesson Dynamic neuropsychological assessment tool


Similar Documents

Publication Publication Date Title
US11803241B2 (en) Wearable joint tracking device with muscle activity and methods thereof
Nakano et al. Evaluation of 3D markerless motion capture accuracy using OpenPose with multiple video cameras
Hesse et al. Computer vision for medical infant motion analysis: State of the art and rgb-d data set
Ghasemzadeh et al. Coordination analysis of human movements with body sensor networks: A signal processing model to evaluate baseball swings
Capecci et al. Accuracy evaluation of the Kinect v2 sensor during dynamic movements in a rehabilitation scenario
KR20170052628A (en) Motor task analysis system and method
D’Antonio et al. Validation of a 3D markerless system for gait analysis based on OpenPose and two RGB webcams
JP2012518236A (en) Method and system for gesture recognition
CN113658211B (en) User gesture evaluation method and device and processing equipment
Ajay et al. A pervasive and sensor-free Deep Learning system for Parkinsonian gait analysis
US20210059569A1 (en) Fall risk evaluation method, fall risk evaluation device, and non-transitory computer-readable recording medium in which fall risk evaluation program is recorded
Rybarczyk et al. Recognition of physiotherapeutic exercises through DTW and low-cost vision-based motion capture
CN111048205A (en) Method and device for assessing symptoms of Parkinson's disease
Postolache et al. Tailored virtual reality for smart physiotherapy
Zhang et al. Comparison of OpenPose and HyperPose artificial intelligence models for analysis of hand-held smartphone videos
JP7376677B2 (en) Image processing system, endoscope system and method of operating the endoscope system
CN113080964A (en) Self-closing data processing method and device based on intervention robot
CN112438723A (en) Cognitive function evaluation method, cognitive function evaluation device, and storage medium
Chalvatzaki et al. Estimating double support in pathological gaits using an hmm-based analyzer for an intelligent robotic walker
Lim et al. Depth image based gait tracking and analysis via robotic walker
WO2023097773A1 (en) Gait analysis method, apparatus, and device, and storage medium
Wong et al. Enhanced classification of abnormal gait using BSN and depth
Ripic et al. Validity of artificial intelligence-based markerless motion capture system for clinical gait analysis: Spatiotemporal results in healthy adults and adults with Parkinson’s disease
Siddiq et al. Falling Estimation Based on PoseNet Using Camera with Difference Absolute Standard Deviation Value and Average Amplitude Change on Key-Joint
CN114052725A (en) Gait analysis algorithm setting method and device based on human body key point detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination