CN116311491A - Intelligent medication monitoring method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116311491A
Authority
CN
China
Prior art keywords
medication
user
behavior
preset
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211090185.0A
Other languages
Chinese (zh)
Inventor
谢雪梅
谈小鹏
甘礼福
刘艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Institute of Technology of Xidian University
Original Assignee
Guangzhou Institute of Technology of Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Institute of Technology of Xidian University filed Critical Guangzhou Institute of Technology of Xidian University
Priority to CN202211090185.0A priority Critical patent/CN116311491A/en
Publication of CN116311491A publication Critical patent/CN116311491A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H70/00 ICT specially adapted for the handling or processing of medical references
    • G16H70/40 ICT specially adapted for the handling or processing of medical references relating to drugs, e.g. their side effects or intended usage
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Psychiatry (AREA)
  • Software Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Social Psychology (AREA)
  • Medicinal Chemistry (AREA)
  • Pharmacology & Pharmacy (AREA)
  • Toxicology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention discloses an intelligent medication monitoring method, device, equipment and storage medium. A video image of the user's medication process is acquired; the human body posture in the video image is estimated with a preset neural network model to determine the user's body key points; the acquired body key points are judged against preset action-feature rules to determine the user's action behavior; a medication label in the video image is detected with a pre-trained target detection model to determine the user's medication behavior; and a monitoring result of the user's medication is determined from the action behavior and the medication behavior. By monitoring the user's specific medication actions, the user's actual medication situation is monitored.

Description

Intelligent medication monitoring method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to an intelligent medication monitoring method, apparatus, device, and storage medium.
Background
Some chronic diseases require lifelong management, and patients need to take medicine at home over long periods to control their condition. Because of complicated medication regimens, poor self-management, insufficient health awareness, cognitive impairment or a lack of caregivers, patients very easily forget doses, take the wrong medicine, miss or delay doses, stop medication without authorization, or increase and decrease dosages at will, delaying treatment and causing medication-safety problems with serious consequences; traditional manual medication monitoring cannot meet this clinical need. For the continuity of medical services, taking medicine safely, on time and in accordance with the doctor's orders is particularly important.
Most prior art in the field of intelligent medication monitoring consists of medication-assistance systems that remind users to take medicine at preset times and dosages through smart medicine boxes, medicine-taking applications (APPs) and the like, but such systems cannot specifically monitor the user's actual medication behavior.
Disclosure of Invention
In order to solve the technical problems, the invention provides an intelligent medication monitoring method, device, equipment and storage medium.
The embodiment of the invention provides an intelligent medication monitoring method, which comprises the following steps:
acquiring a video image of a user in a medication process;
estimating the human body posture of the video image by adopting a preset neural network model, and determining body key points of the user;
judging the action of the acquired body key points according to a preset action characteristic rule, and determining the action behavior of the user;
detecting a medication label in the video image according to a pre-trained target detection model, and determining the medication behavior of the user;
and determining a monitoring result of the user medication according to the action behavior and the medication behavior.
Preferably, the estimating of the human body posture of the video image by using a preset neural network model to determine body key points of the user includes:
extracting the upper-body skeleton gesture from the video image by using a BlazePose convolutional neural network;
and detecting preset key points in the upper body skeleton gesture, and carrying out two-dimensional plane projection on the detected key points to obtain the position, direction and scale information of the body key points.
Preferably, the body keypoints include: mouth, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist;
the step of judging the action of the acquired body key points according to a preset action characteristic rule, and determining the action behavior of the user specifically comprises the following steps:
calculating a first Euclidean distance between preset first key point pairs, a second Euclidean distance between preset second key point pairs and a third Euclidean distance between preset third key point pairs in real time;
when the first Euclidean distance is sequentially identified to be in a preset first range, the second Euclidean distance in a preset second range and the third Euclidean distance in a preset third range, judging that the processes of picking up the medicine, pouring the medicine and taking the medicine occur in sequence, and judging that the action behavior is normal;
otherwise, judging that the action behavior is abnormal.
Preferably, the training process of the target detection model specifically includes:
acquiring training images; framing all the medicine-bottle samples and tablet samples in the training images with the LabelImg data-annotation tool and labelling the medicine-bottle names and tablet names to obtain pre-calibrated training images; and taking the labelled training images as a data set;
inputting all data in the data set into a target detection model based on YOLO V5, and scaling all training images to a preset size in equal proportion;
outputting a prediction box for each training image in the data set according to the initial anchor boxes preset by the target detection model, comparing the detection result of the output prediction box with the calibration of the image, calculating the difference between them, and updating the anchor-box parameters of the target detection model by back-propagation;
and carrying out iterative training on the data set with a preset batch size (the number of images fetched per training step), a preset number of iterations and preset training epochs according to the target detection model, to obtain a trained target detection model.
Preferably, the detecting of the medication label in the video image according to the pre-trained target detection model to determine the medication behavior of the user specifically includes:
intercepting video frames in the video image according to a preset frame number;
sequentially inputting the acquired video frames into the target detection model, and outputting the medicine bottle samples and the tablet samples identified in the video images and coordinate information corresponding to the medicine bottle samples and the tablet samples;
determining medication information of a user through statistics of the output medicine bottle samples and the tablet samples;
comparing the medication information with preset doctor's advice information;
when the medication information is the same as the doctor's advice information, judging that the medication behavior is normal;
and when the medication information is different from the doctor's advice information, judging that the medication behavior is abnormal.
Preferably, the determining the monitoring result of the user medication according to the action behavior and the medication behavior specifically includes:
outputting a medication monitoring result to be normal when the action behavior is normal and the medication behavior is normal;
when the action behavior is abnormal and the medication behavior is normal, outputting a medication monitoring result that the medication is not taken;
when the action behavior is normal and the medication behavior is abnormal, outputting a medication monitoring result as a medication taking error;
and outputting a medication monitoring result to be abnormal medication when the action behavior is abnormal and the medication behavior is abnormal.
Preferably, before acquiring the video image of the user medication process, the method further comprises:
starting monitoring equipment according to the medication time preset by the user, and acquiring a corresponding monitoring image;
detecting the human body of the monitoring image;
outputting a voice medicine taking prompt when the human body is not detected in the acquired monitoring image;
and when a human body is detected in the acquired monitoring image, taking the acquired monitoring image as the video image.
The embodiment of the invention also provides an intelligent medication monitoring device, which comprises:
the image acquisition module is used for acquiring video images of the drug administration process of the user;
the gesture estimation module is used for estimating the human body gesture of the video image by adopting a preset neural network model and determining body key points of the user;
the action analysis module is used for judging the action of the acquired body key points according to a preset action characteristic rule and determining the action behavior of the user;
the medication analysis module is used for detecting medication labels in the video images according to a pre-trained target detection model and determining medication behaviors of the user;
and the result determining module is used for determining the monitoring result of the user medication according to the action behavior and the medication behavior.
Preferably, the gesture estimation module is specifically configured to:
extracting the upper body skeleton gesture from the video image by using BlazePose convolutional neural network;
and detecting preset key points in the upper body skeleton gesture, and carrying out two-dimensional plane projection on the detected key points to obtain the position, direction and scale information of the body key points.
Preferably, the body keypoints comprise: mouth, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist;
the action analysis module is specifically used for:
calculating a first Euclidean distance between preset first key point pairs, a second Euclidean distance between preset second key point pairs and a third Euclidean distance between preset third key point pairs in real time;
when the first Euclidean distance is sequentially identified to be in a preset first range, the second Euclidean distance in a preset second range and the third Euclidean distance in a preset third range, judging that the processes of picking up the medicine, pouring the medicine and taking the medicine occur in sequence, and judging that the action behavior is normal;
otherwise, judging that the action behavior is abnormal.
Preferably, the training process of the target detection model specifically includes:
acquiring training images; framing all the medicine-bottle samples and tablet samples in the training images with the LabelImg data-annotation tool and labelling the medicine-bottle names and tablet names to obtain pre-calibrated training images; and taking the labelled training images as a data set;
inputting all data in the data set into a target detection model based on YOLO V5, and scaling all training images to a preset size in equal proportion;
outputting a prediction box for each training image in the data set according to the initial anchor boxes preset by the target detection model, comparing the detection result of the output prediction box with the calibration of the image, calculating the difference between them, and updating the anchor-box parameters of the target detection model by back-propagation;
and carrying out iterative training on the data set with a preset batch size (the number of images fetched per training step), a preset number of iterations and preset training epochs according to the target detection model, to obtain a trained target detection model.
Preferably, the medication analysis module is specifically configured to:
intercepting video frames in the video image according to a preset frame number;
sequentially inputting the acquired video frames into the target detection model, and outputting the medicine bottle samples and the tablet samples identified in the video images and coordinate information corresponding to the medicine bottle samples and the tablet samples;
determining medication information of a user through statistics of the output medicine bottle samples and the tablet samples;
comparing the medication information with preset doctor's advice information;
when the medication information is the same as the doctor's advice information, judging that the medication behavior is normal;
and when the medication information is different from the doctor's advice information, judging that the medication behavior is abnormal.
Preferably, the result determining module is specifically configured to:
outputting a medication monitoring result to be normal when the action behavior is normal and the medication behavior is normal;
when the action behavior is abnormal and the medication behavior is normal, outputting a medication monitoring result that the medication is not taken;
when the action behavior is normal and the medication behavior is abnormal, outputting a medication monitoring result as a medication taking error;
and outputting a medication monitoring result to be abnormal medication when the action behavior is abnormal and the medication behavior is abnormal.
Preferably, the device further comprises a starting module, specifically configured to:
before a video image of a user medication process is acquired, starting monitoring equipment according to medication time preset by the user, and acquiring a corresponding monitoring image;
detecting the human body of the monitoring image;
outputting a voice medicine taking prompt when the human body is not detected in the acquired monitoring image;
and when a human body is detected in the acquired monitoring image, taking the acquired monitoring image as the video image.
The embodiment of the invention also provides a computer readable storage medium, which comprises a stored computer program, wherein when the computer program runs, equipment where the computer readable storage medium is located is controlled to execute the intelligent medication monitoring method according to any one of the above embodiments.
The embodiment of the invention also provides a terminal device, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the intelligent medication monitoring method according to any one of the above embodiments is realized when the processor executes the computer program.
The invention provides an intelligent medication monitoring method, device, equipment and storage medium. A video image of the user's medication process is acquired; the human body posture in the video image is estimated with a preset neural network model to determine the user's body key points; the acquired body key points are judged against preset action-feature rules to determine the user's action behavior; a medication label in the video image is detected with a pre-trained target detection model to determine the user's medication behavior; and a monitoring result of the user's medication is determined from the action behavior and the medication behavior. By monitoring the user's specific medication actions, the user's actual medication situation is monitored.
Drawings
FIG. 1 is a schematic flow chart of an intelligent medication monitoring method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a user upper body skeletal gesture provided by an embodiment of the present invention;
FIG. 3 is a flow chart of an intelligent medication monitoring method according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of an intelligent medication monitoring device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The embodiment of the invention provides an intelligent medication monitoring method, referring to fig. 1, which is a flow chart of the intelligent medication monitoring method provided by the embodiment of the invention, wherein the steps S1 to S5 of the method are as follows:
s1, acquiring a video image of a user in a medication process;
s2, estimating the human body posture of the video image by adopting a preset neural network model, and determining body key points of the user;
s3, judging the action of the acquired body key points according to a preset action characteristic rule, and determining the action behavior of the user;
s4, detecting a medication label in the video image according to a pre-trained target detection model, and determining the medication behavior of the user;
s5, determining a monitoring result of the user medication according to the action behavior and the medication behavior.
When this embodiment is implemented, in order to detect the user's medication behavior, a video image of the user's medication process must be acquired by monitoring equipment deployed at a specific position; the monitoring equipment can be arranged where the medicine is placed, so that the user's handling of the medicine bottle is easy to observe;
acquiring a video image of a user in a medication process, estimating the human body posture of the video image by adopting a preset neural network model, and determining the position of a body key point of the user in the video image;
it should be noted that human body posture estimation with a preset neural network model is an important task in computer vision and an essential step for a computer to understand human actions and behaviors. The prior art discloses various methods of estimating human body posture with deep-learning models, which achieve performance far exceeding that of traditional methods. In practice, therefore, human posture estimation is usually converted into the problem of predicting human body key points: the position coordinates of each key point are predicted first, and the spatial relationships among the key points are then determined from prior knowledge, giving the predicted human skeleton key points;
analyzing the predicted body key point actions, comparing the predicted body key point actions with preset action feature rules, determining action behaviors of a user, and analyzing whether the medication process is normal or not;
the user's medication behavior generally includes picking up the medicine, pouring the medicine and taking the medicine; the video image is perceived through algorithms such as human body posture estimation, finally yielding a judgment of whether the user has taken the medicine;
after the user's medicine-taking action is confirmed, the type and quantity of the medicine currently taken by the user are monitored; specifically, whether the patient has taken the correct medicine is judged from features such as the medicine packaging and the medicine itself detected by the pre-trained target detection model, thereby determining the user's medication behavior;
and determining a monitoring result of the user medication according to the action behavior and the medication behavior.
And determining whether the user normally completes the medication behavior by respectively detecting the medication action behavior of the user and the medication behavior of the medicine.
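The combination of the two analyses into a final monitoring result (detailed later in steps corresponding to the four normal/abnormal combinations) can be sketched as a simple decision function; the result strings are illustrative labels, not wording fixed by the patent:

```python
def medication_result(action_normal: bool, medication_normal: bool) -> str:
    """Combine the action analysis and the medicine analysis into one
    monitoring result, following the four-case logic described in the text."""
    if action_normal and medication_normal:
        return "normal"                 # correct medicine, complete action sequence
    if not action_normal and medication_normal:
        return "medicine not taken"     # right medicine handled, no taking action seen
    if action_normal and not medication_normal:
        return "wrong medicine taken"   # taking action seen, medicine differs from order
    return "abnormal medication"        # neither the action nor the medicine matches
```
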
In yet another embodiment of the present invention, the step S2 specifically includes:
the BlazePose lightweight convolutional neural network architecture is used for human body posture estimation, and the BlazePose convolutional neural network can be adapted to the performance requirements of mobile equipment, so that the application scene of the scheme can be greatly widened.
The BlazePose convolutional neural network generates 33 body key points for a person to run at a speed exceeding 30 frames per second, and the BlazePose convolutional neural network of the human body posture estimation network is utilized to extract the skeleton posture of the upper body of the human body in each frame in the video image;
the human body gesture refers to a description of the distribution of human body joints on an image two-dimensional plane, the projection of the human body joints on the image two-dimensional plane is generally described by using line segments or rectangles, the two-dimensional gesture of the human body is described by the line segment length and line segment angle distribution or the rectangle size and rectangle direction, and the position, the direction and the scale information of each key point of the human body are determined.
Body key point information of the human body posture is extracted through BlazePose convolutional neural network and used for carrying out specific action analysis.
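As a sketch of this reduction step, the 33 BlazePose landmarks can be mapped to the eight upper-body key points used below. The landmark indices follow the published BlazePose topology; representing the mouth and neck as midpoints (of the two mouth corners and of the two shoulders, respectively) is an assumption of this sketch, not a detail stated in the patent:

```python
# Indices into the 33-landmark BlazePose topology (MediaPipe Pose).
BLAZEPOSE = {
    "mouth_left": 9, "mouth_right": 10,
    "left_shoulder": 11, "right_shoulder": 12,
    "left_elbow": 13, "right_elbow": 14,
    "left_wrist": 15, "right_wrist": 16,
}

def upper_body_keypoints(landmarks):
    """Reduce a list of 33 (x, y) BlazePose landmarks to the 8 upper-body
    points used for action analysis. Mouth and neck are midpoint
    approximations (an assumption of this sketch)."""
    def mid(i, j):
        (ax, ay), (bx, by) = landmarks[i], landmarks[j]
        return ((ax + bx) / 2, (ay + by) / 2)
    return {
        "mouth": mid(BLAZEPOSE["mouth_left"], BLAZEPOSE["mouth_right"]),
        "neck": mid(BLAZEPOSE["left_shoulder"], BLAZEPOSE["right_shoulder"]),
        "left_shoulder": landmarks[BLAZEPOSE["left_shoulder"]],
        "right_shoulder": landmarks[BLAZEPOSE["right_shoulder"]],
        "left_elbow": landmarks[BLAZEPOSE["left_elbow"]],
        "right_elbow": landmarks[BLAZEPOSE["right_elbow"]],
        "left_wrist": landmarks[BLAZEPOSE["left_wrist"]],
        "right_wrist": landmarks[BLAZEPOSE["right_wrist"]],
    }
```
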
In yet another embodiment provided by the present invention, the body keypoints comprise: mouth, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist;
the step of judging the action of the acquired body key points according to a preset action characteristic rule, and determining the action behavior of the user specifically comprises the following steps:
calculating a first Euclidean distance between preset first key point pairs, a second Euclidean distance between preset second key point pairs and a third Euclidean distance between preset third key point pairs in real time;
when the first Euclidean distance is sequentially identified to be in a preset first range, the second Euclidean distance in a preset second range and the third Euclidean distance in a preset third range, judging that the processes of picking up the medicine, pouring the medicine and taking the medicine occur in sequence, and judging that the action behavior is normal;
otherwise, judging that the action behavior is abnormal.
In the implementation of this embodiment, referring to fig. 2, a schematic diagram of a skeleton gesture of a user's upper body according to an embodiment of the present invention is shown;
the determined body key points include mouth 0, neck 1, left shoulder 2, right shoulder 3, left elbow 4, right elbow 5, left wrist 6 and right wrist 7;
extracting body key points of a user in the video image through preset key points, and analyzing medicine taking actions, specifically:
the Euclidean distance between some key point pairs is calculated to determine the action of the user in the process of taking medicine;
calculating a first Euclidean distance between preset first key point pairs, a second Euclidean distance between preset second key point pairs and a third Euclidean distance between preset third key point pairs in real time;
for example, taking the left wrist 6 and the right wrist 7 as the first key point pair, the Euclidean distance between them can be used to determine whether the user picks up the medicine, because both hands are needed at the same time to open the medicine bottle or medicine box;
taking the left wrist 6 and the right wrist 7 as the second key point pair, the Euclidean distance between them can be used to determine whether the user pours the medicine, because both hands are needed at the same time to pour out an accurate quantity of medicine;
taking the left wrist 6 (or right wrist 7) and the mouth 0 as the third key point pair, the Euclidean distance between them can be used to determine whether the user takes the medicine.
When the first Euclidean distance is identified to be in a preset first range, the second Euclidean distance in a preset second range and the third Euclidean distance in a preset third range in sequence, it is judged that the processes of picking up the medicine, pouring the medicine and taking the medicine occur in sequence, and the action behavior is judged to be normal;
otherwise, if any of the Euclidean distances does not fall within its range, the action behavior is judged to be abnormal, and the medicine may not have been taken.
The medicine taking behavior of the user is determined through the Euclidean distance of the coordinates among the body key points, and the medicine taking action of the user can be accurately detected.
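A minimal rule-based sketch of this sequential judgment is given below; the threshold ranges and the per-frame keypoint dictionaries are illustrative (the patent does not specify concrete values), and the three stages correspond to picking up, pouring and taking the medicine:

```python
import math

def euclid(p, q):
    """Euclidean distance between two 2-D keypoints."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def action_behavior(frames, pickup_rng, pour_rng, take_rng):
    """frames: per-frame dicts with 'left_wrist', 'right_wrist', 'mouth' (x, y).
    Each *_rng is an assumed (lo, hi) threshold range. Returns 'normal' only
    if the three stages are observed in order, as the rule above requires."""
    stage = 0
    for kp in frames:
        wrist_dist = euclid(kp["left_wrist"], kp["right_wrist"])
        mouth_dist = min(euclid(kp["left_wrist"], kp["mouth"]),
                         euclid(kp["right_wrist"], kp["mouth"]))
        if stage == 0 and pickup_rng[0] <= wrist_dist <= pickup_rng[1]:
            stage = 1                      # picking up the medicine observed
        elif stage == 1 and pour_rng[0] <= wrist_dist <= pour_rng[1]:
            stage = 2                      # pouring the medicine observed
        elif stage == 2 and take_rng[0] <= mouth_dist <= take_rng[1]:
            return "normal"                # taking the medicine observed in sequence
    return "abnormal"                      # some stage was never observed
```
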
In still another embodiment of the present invention, the training process of the target detection model specifically includes:
acquiring training images; framing all the medicine-bottle samples and tablet samples in the training images with the LabelImg data-annotation tool and labelling the medicine-bottle names and tablet names to obtain pre-calibrated training images; and taking the labelled training images as a data set;
inputting all data in the data set into a target detection model based on YOLO V5, and scaling all training images to a preset size in equal proportion;
outputting a prediction box for each training image in the data set according to the initial anchor boxes preset by the target detection model, comparing the detection result of the output prediction box with the calibration of the image, calculating the difference between them, and updating the anchor-box parameters of the target detection model by back-propagation;
and carrying out iterative training on the data set with a preset batch size (the number of images fetched per training step), a preset number of iterations and preset training epochs according to the target detection model, to obtain a trained target detection model.
In the implementation of this embodiment, the target detected by the target detection model is the medicine: the model locates the target in the image and classifies it.
The conventional object detection algorithm selects an area through a sliding window, then extracts the features of the selected image block, and finally classifies the extracted features by using a classifier, thereby detecting an object in the image. The traditional target detection algorithm has the problems of difficult feature extraction and low detection precision.
For the system to operate in near real time, the target detection model should be lightweight as well as accurate. This embodiment therefore adopts a YOLO-series network, which keeps performance relatively stable during detection and, being lightweight, has strong advantages for rapid model deployment.
In this embodiment, larger targets such as medicine bottles and smaller targets such as tablets must be detected simultaneously. The YOLOv5 target detection model allows the network complexity to be customized, and adaptively adjusts the anchor-frame parameters during training; the network learns different anchor frames for different data sets, which gives better generalization for targets of different sizes and alleviates the difficulty of small-target detection.
The specific training steps of the YOLO v5-based target detection model are as follows:
acquiring sample data to obtain a training image;
marking data: using the LabelImg data marking tool, framing all medicine bottle samples and tablet samples in the training images and labeling the medicine bottle names and tablet names to obtain pre-calibrated training images; storing and converting the label files, and taking the marked images as the data set, with a training-set to test-set ratio of 8:2;
loading the data set into the YOLO V5-based target detection model, obtaining the width w and height h of all training images, and scaling the images in equal proportion to the specified size img_size to facilitate detection and identification;
outputting prediction frames on the basis of the initial anchor frames set by the algorithm, comparing them with the ground truth, calculating the difference between the two, and reversely updating and iterating the network parameters and the anchor-frame parameters of the target detection model.
In training, the batch_size of the YOLO V5-based target detection model is set to 16, and 300 epochs are trained iteratively to obtain the trained target detection model;
the coordinates and labels of medicine bottles and tablets can then be detected rapidly by the trained target detection model, thereby solving event detection in the medication behavior.
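The data-set preparation described above (8:2 train/test split, batch_size 16, 300 epochs) can be sketched as follows; the directory layout, the `.jpg` extension and the train.py command shown in the comment are illustrative assumptions, not prescribed by the patent:

```python
import random
from pathlib import Path

def split_dataset(image_dir, train_ratio=0.8, seed=0):
    """Split labelled training images into training and test sets at
    the 8:2 ratio described above. Returns (train_list, test_list)."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)   # deterministic shuffle
    cut = int(len(images) * train_ratio)
    return images[:cut], images[cut:]

# Training itself would then typically be launched with the YOLOv5
# repository's train.py, e.g. (images scaled to img_size, batch_size 16,
# 300 epochs; file names here are hypothetical):
#   python train.py --img 640 --batch-size 16 --epochs 300 --data meds.yaml
```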
In yet another embodiment of the present invention, the step S4 specifically includes:
intercepting video frames in the video image according to a preset frame number;
sequentially inputting the acquired video frames into the target detection model, and outputting the medicine bottle samples and the tablet samples identified in the video images and coordinate information corresponding to the medicine bottle samples and the tablet samples;
determining medication information of a user through statistics of the output medicine bottle samples and the tablet samples;
comparing the medication information with preset doctor's advice information;
when the medication information is the same as the doctor's advice information, judging that the medication behavior is normal;
and when the medication information is different from the doctor's advice information, judging that the medication behavior is abnormal.
When the embodiment is implemented, the relevant doctor's advice information needs to be input in advance, before medication monitoring is performed; the doctor's advice information comprises the corresponding medicine names and dosages;
intercepting video frames in the video image according to a preset frame number;
sequentially inputting the acquired video frames into the target detection model, and outputting the medicine bottle samples and the tablet samples identified in the video images and coordinate information corresponding to the medicine bottle samples and the tablet samples;
determining medication information of a user through statistics of the output medicine bottle samples and the tablet samples, wherein the medication information of the user comprises each medicine name and the corresponding medicine quantity;
when the medication information is the same as the doctor's advice information, judging that the medication behavior is normal;
and when the medication information is different from the doctor's advice information, judging that the medication behavior is abnormal.
And comparing the medication information with preset doctor's advice information, and specifically monitoring whether the medication dosage of the user is normal.
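The comparison of the statistics of detected medications against the preset doctor's advice can be sketched as a simple count match; the label names used are hypothetical:

```python
from collections import Counter

def medication_behavior(detections, doctor_advice):
    """Compare statistics of detected tablet labels against the
    doctor's advice and return 'normal' or 'abnormal'.

    detections    : list of detected tablet labels, one per detection,
                    e.g. ["aspirin", "aspirin"]
    doctor_advice : dict mapping medicine name -> prescribed quantity
    """
    taken = Counter(detections)   # name -> detected quantity
    # Medication behavior is normal only when both names and quantities
    # match the doctor's advice exactly.
    return "normal" if taken == Counter(doctor_advice) else "abnormal"
```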
In still another embodiment of the present invention, the determining, according to the action behavior and the medication behavior, a monitoring result of the medication of the user specifically includes:
outputting a medication monitoring result to be normal when the action behavior is normal and the medication behavior is normal;
when the action behavior is abnormal and the medication behavior is normal, outputting a medication monitoring result that the medication is not taken;
when the action behavior is normal and the medication behavior is abnormal, outputting a medication monitoring result as a medication taking error;
and outputting a medication monitoring result to be abnormal medication when the action behavior is abnormal and the medication behavior is abnormal.
When the embodiment is implemented, the action behavior and the medication behavior of the user need to be comprehensively analyzed when the medication process of the user is analyzed;
when the action behavior is normal and the medication behavior is normal, the user is shown to complete the medication of the medical advice by adopting normal medication action, and the output medication monitoring result is normal;
when the action behavior is abnormal and the medication behavior is normal, indicating that the user's medicine-taking action is abnormal, the output medication monitoring result is that the medicine was not taken;
when the action behavior is normal and the medication behavior is abnormal, indicating that the medicine taken by the user is not in compliance with the doctor's advice, and outputting a medication monitoring result as a medicine taking error;
when both the action behavior and the medication behavior are abnormal, indicating that the user's medicine-taking action is abnormal and that the medicine taken does not comply with the doctor's advice, the output medication monitoring result is medication abnormality.
By comprehensively analyzing the medication behavior and the action behavior, whether the medication of the user is normal or not is determined, and when the medication is abnormal, an abnormal link is output, so that the follow-up correction is facilitated.
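The four-way decision combining the two behaviors can be written as a small lookup; the result strings are paraphrases of the monitoring results listed above:

```python
def monitoring_result(action_ok, medication_ok):
    """Combine the action behavior and the medication behavior into the
    four medication monitoring results described above."""
    if action_ok and medication_ok:
        return "medication normal"       # normal action, advice followed
    if not action_ok and medication_ok:
        return "medicine not taken"      # action abnormal
    if action_ok and not medication_ok:
        return "medicine taking error"   # wrong medicine or dosage
    return "medication abnormal"         # both abnormal
```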
In yet another embodiment of the present invention, before acquiring the video image of the user medication process, the method further includes:
starting monitoring equipment according to the medication time preset by the user, and acquiring a corresponding monitoring image;
detecting the human body of the monitoring image;
outputting a voice medicine taking prompt when the human body is not detected in the acquired monitoring image;
when a human body is detected in the acquired monitoring image, the acquired monitoring image is taken as the video image.
In the implementation of this embodiment, referring to fig. 3, a flow chart of an intelligent medication monitoring method according to another embodiment of the present invention is shown, where the method includes:
triggering the starting of the camera through time according to the preset medication time;
detecting the person in the monitoring image by a human body detection algorithm;
outputting a timing prompt and outputting a voice medicine taking prompt when no person is detected;
when a person is detected, taking the acquired monitoring image as the video image;
loading the trained model, carrying out medicine bottle detection, medicine pouring detection and medicine taking detection, judging the medicine-taking behavior, and recording the medication;
after the medicine-taking action is completed, the monitoring process ends and waits to be triggered again at the next medication time.
And the automatic medication behavior monitoring is realized by starting medication monitoring at fixed time.
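The timed-trigger flow can be sketched with the camera, human-detection and voice-prompt back ends injected as callables; all names and the HH:MM time format are hypothetical placeholders:

```python
import datetime

def is_dose_time(dose_times, now=None):
    """Return True when the current wall-clock time (HH:MM) matches one
    of the user's preset medication times."""
    now = now or datetime.datetime.now().strftime("%H:%M")
    return now in dose_times

def monitor_once(detect_person, capture_frame, remind):
    """One cycle of the timed flow: capture a monitoring image, prompt
    by voice while no person is present, and otherwise hand the frame
    to the analysis pipeline as the video image."""
    frame = capture_frame()
    if not detect_person(frame):
        remind()          # voice medicine-taking prompt
        return None       # no video image yet; the caller retries
    return frame          # person present: use the frame as video image
```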
Another embodiment of the present invention provides an intelligent medication monitoring device, referring to fig. 4, which is a schematic structural diagram of the intelligent medication monitoring device provided by the embodiment of the present invention, where the device includes:
the image acquisition module is used for acquiring video images of the drug administration process of the user;
the gesture estimation module is used for estimating the human body gesture of the video image by adopting a preset neural network model and determining body key points of the user;
the action analysis module is used for judging the action of the acquired body key points according to a preset action characteristic rule and determining the action behavior of the user;
the medication analysis module is used for detecting medication labels in the video images according to a pre-trained target detection model and determining medication behaviors of the user;
and the result determining module is used for determining the monitoring result of the user medication according to the action behavior and the medication behavior.
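A minimal sketch of how the five modules of the apparatus might be wired together; the single-callable interface assumed for each module is an illustration, not the patent's API:

```python
class IntelligentMedicationMonitor:
    """Wires the five modules of the apparatus in pipeline order."""

    def __init__(self, image_acq, pose_est, action_an, med_an, result_det):
        self.image_acq = image_acq    # image acquisition module
        self.pose_est = pose_est      # gesture (pose) estimation module
        self.action_an = action_an    # action analysis module
        self.med_an = med_an          # medication analysis module
        self.result_det = result_det  # result determining module

    def run(self):
        video = self.image_acq()             # video image of medication
        keypoints = self.pose_est(video)     # body key points
        action = self.action_an(keypoints)   # action behavior
        medication = self.med_an(video)      # medication behavior
        return self.result_det(action, medication)  # monitoring result
```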
It should be noted that, the intelligent medication monitoring device provided by the embodiment of the present invention is used for executing all the flow steps of the intelligent medication monitoring method in the above embodiment, and the working principles and beneficial effects of the two correspond to each other one by one, so that the description is omitted.
Referring to fig. 5, a schematic structural diagram of a terminal device according to an embodiment of the present invention is provided. The terminal device of this embodiment includes: a processor, a memory, and a computer program, such as an intelligent medication monitoring program, stored in the memory and executable on the processor. The steps of the above-described embodiments of the intelligent medication monitoring method, such as steps S1 to S5 shown in fig. 1, are implemented when the processor executes the computer program. Alternatively, the processor may implement the functions of the modules in the above-described device embodiments when executing the computer program.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specified functions, which instruction segments are used for describing the execution of the computer program in the terminal device. For example, the computer program may be divided into an image acquisition module, a gesture estimation module, an action analysis module, a medication analysis module and a result determining module; the specific functions of the modules are not described again.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a terminal device and does not constitute a limitation of the terminal device, and may include more or less components than illustrated, or may combine certain components, or different components, e.g., the terminal device may further include an input-output device, a network access device, a bus, etc.
The processor may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, which is a control center of the terminal device, and which connects various parts of the entire terminal device using various interfaces and lines.
The memory may be used to store the computer program and/or module, and the processor may implement various functions of the terminal device by running or executing the computer program and/or module stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Wherein the integrated modules/units of the terminal device, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that the above-described apparatus embodiments are merely illustrative, and the units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the embodiment of the device provided by the invention, the connection relation between the modules represents that the modules have communication connection, and can be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, such changes and modifications are also intended to be within the scope of the invention.

Claims (10)

1. An intelligent medication monitoring method, characterized in that the method comprises:
acquiring a video image of a user in a medication process;
estimating the human body posture of the video image by adopting a preset neural network model, and determining body key points of the user;
judging the action of the acquired body key points according to a preset action characteristic rule, and determining the action behavior of the user;
detecting a medication label in the video image according to a pre-trained target detection model, and determining the medication behavior of the user;
and determining a monitoring result of the user medication according to the action behavior and the medication behavior.
2. The intelligent medication monitoring method of claim 1, wherein the estimating the human body posture of the video image by using a preset neural network model, and determining the body key points of the user, specifically comprises:
extracting the upper body skeleton gesture from the video image by using BlazePose convolutional neural network;
and detecting preset key points in the upper body skeleton gesture, and carrying out two-dimensional plane projection on the detected key points to obtain the position, direction and scale information of the body key points.
3. The intelligent medication monitoring method of claim 1, wherein the body keypoints comprise: mouth, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist and right wrist;
the step of judging the action of the acquired body key points according to a preset action characteristic rule, and determining the action behavior of the user specifically comprises the following steps:
calculating a first Euclidean distance between preset first key point pairs, a second Euclidean distance between preset second key point pairs and a third Euclidean distance between preset third key point pairs in real time;
when the first Euclidean distance is identified to be in a preset first range, the second Euclidean distance in a preset second range and the third Euclidean distance in a preset third range in sequence, judging that the medicine-fetching, medicine-pouring and medicine-taking processes occur in sequence and that the action behavior is normal;
otherwise, judging that the action behavior is abnormal.
4. The intelligent medication monitoring method of claim 1, wherein the training process of the target detection model specifically comprises:
acquiring a training image, framing all medicine bottle samples and tablet samples in the training image by using a Labelimg data marking tool, marking the names of the medicine bottles and the tablet names to acquire a training image calibrated in advance, and taking the marked training image as a data set;
inputting all data in the data set into a target detection model based on YOLO V5, and scaling all training images to a preset size in equal proportion;
outputting a prediction frame of each training image in the data set according to an initial anchor frame preset by the target detection model, comparing the detection result of the output prediction frame with the calibration of the image, calculating the difference between the output prediction frame and the calibration of the image, and reversely updating anchor frame parameters of the target detection model;
and carrying out iterative training on the data set by adopting a preset single training grabbing number, a preset iteration number and a preset training round according to the target detection model to obtain a trained target detection model.
5. The intelligent medication monitoring method of claim 1, wherein the detecting medication labels in the video image according to a pre-trained object detection model, and determining the medication behavior of the user, specifically comprises:
intercepting video frames in the video image according to a preset frame number;
sequentially inputting the acquired video frames into the target detection model, and outputting the medicine bottle samples and the tablet samples identified in the video images and coordinate information corresponding to the medicine bottle samples and the tablet samples;
determining medication information of a user through statistics of the output medicine bottle samples and the tablet samples;
comparing the medication information with preset doctor's advice information;
when the medication information is the same as the doctor's advice information, judging that the medication behavior is normal;
and when the medication information is different from the doctor's advice information, judging that the medication behavior is abnormal.
6. The intelligent medication monitoring method of claim 1, wherein the determining the monitoring result of the user medication according to the action behavior and the medication behavior specifically comprises:
outputting a medication monitoring result to be normal when the action behavior is normal and the medication behavior is normal;
when the action behavior is abnormal and the medication behavior is normal, outputting a medication monitoring result that the medication is not taken;
when the action behavior is normal and the medication behavior is abnormal, outputting a medication monitoring result as a medication taking error;
and outputting a medication monitoring result to be abnormal medication when the action behavior is abnormal and the medication behavior is abnormal.
7. The intelligent medication monitoring method of claim 1, wherein prior to acquiring the video image of the user medication process, the method further comprises:
starting monitoring equipment according to the medication time preset by the user, and acquiring a corresponding monitoring image;
detecting the human body of the monitoring image;
outputting a voice medicine taking prompt when the human body is not detected in the acquired monitoring image;
when a human body is detected in the acquired monitoring image, taking the acquired monitoring image as the video image.
8. An intelligent medication monitoring device, the device comprising:
the image acquisition module is used for acquiring video images of the drug administration process of the user;
the gesture estimation module is used for estimating the human body gesture of the video image by adopting a preset neural network model and determining body key points of the user;
the action analysis module is used for judging the action of the acquired body key points according to a preset action characteristic rule and determining the action behavior of the user;
the medication analysis module is used for detecting medication labels in the video images according to a pre-trained target detection model and determining medication behaviors of the user;
and the result determining module is used for determining the monitoring result of the user medication according to the action behavior and the medication behavior.
9. A terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the intelligent medication monitoring method according to any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform the intelligent medication monitoring method according to any one of claims 1 to 7.
CN202211090185.0A 2022-09-07 2022-09-07 Intelligent medication monitoring method, device, equipment and storage medium Pending CN116311491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211090185.0A CN116311491A (en) 2022-09-07 2022-09-07 Intelligent medication monitoring method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211090185.0A CN116311491A (en) 2022-09-07 2022-09-07 Intelligent medication monitoring method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116311491A true CN116311491A (en) 2023-06-23

Family

ID=86800045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211090185.0A Pending CN116311491A (en) 2022-09-07 2022-09-07 Intelligent medication monitoring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116311491A (en)

Similar Documents

Publication Publication Date Title
US10991094B2 (en) Method of analyzing dental image for correction diagnosis and apparatus using the same
CN111870237B (en) Blood pressure detection device, blood pressure detection apparatus, and blood pressure detection medium
CN111598038B (en) Facial feature point detection method, device, equipment and storage medium
CN112101124B (en) Sitting posture detection method and device
US10762172B2 (en) Apparatus and method for object confirmation and tracking
US20170177826A1 (en) Assessing cognition using item-recall trials with accounting for item position
EP3412200B1 (en) Skin condition detection method, electronic apparatus, and skin condition detection system
CN107422844B (en) Information processing method and electronic equipment
CN113505662B (en) Body-building guiding method, device and storage medium
CN111009297B (en) Supervision method and device for medicine taking behaviors of user and intelligent robot
CN116246768B (en) MRI image inspection intelligent analysis management system based on artificial intelligence
CN113823376B (en) Intelligent medicine taking reminding method, device, equipment and storage medium
Huang et al. An automatic screening method for strabismus detection based on image processing
CN117425431A (en) Electrocardiogram analysis support device, program, electrocardiogram analysis support method, and electrocardiogram analysis support system
CN106580253A (en) Physiological data acquisition method, apparatus and system
CN116492227B (en) Medicine taking prompting method and system based on artificial intelligence
CN116311491A (en) Intelligent medication monitoring method, device, equipment and storage medium
US20230360199A1 (en) Predictive data analysis techniques using a hierarchical risk prediction machine learning framework
CN112488982A (en) Ultrasonic image detection method and device
CN113012190B (en) Hand hygiene compliance monitoring method, device, equipment and storage medium
CN113870973A (en) Information output method, device, computer equipment and medium based on artificial intelligence
CN111967428B (en) Face temperature measurement method and device and storage medium
CN114470719A (en) Full-automatic posture correction training method and system
CN113782146A (en) General medicine recommendation method, device, equipment and medium based on artificial intelligence
WO2021093744A1 (en) Method and apparatus for measuring diameter of pupil, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination