CN114795192B - Joint mobility intelligent detection method and system - Google Patents

Joint mobility intelligent detection method and system

Info

Publication number
CN114795192B
CN114795192B (application number CN202210762975.2A)
Authority
CN
China
Prior art keywords
spec
image
sequence
value
time domain
Prior art date
Legal status
Active
Application number
CN202210762975.2A
Other languages
Chinese (zh)
Other versions
CN114795192A (en)
Inventor
黄峰
尹博
徐硕瑀
罗子芮
谢韶东
骆志强
陈仰新
陶旭泓
熊丹宇
梁桂林
黎志豪
王安涛
谢航
江焕然
吴梦瑶
李宇彤
郝梦真
梁奕
Current Assignee
Foshan University
Original Assignee
Foshan University
Priority date
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN202210762975.2A priority Critical patent/CN114795192B/en
Publication of CN114795192A publication Critical patent/CN114795192A/en
Application granted granted Critical
Publication of CN114795192B publication Critical patent/CN114795192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1118Determining activity level
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30221Sports video; Sports image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physiology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent joint mobility detection method and system. A plurality of cameras photograph a user from all directions at a plurality of consecutive times to obtain an image sequence; for each image in the sequence, a keypoint detection algorithm marks the keypoints; a joint movement time domain value is calculated from the image sequence; and whether the joint movement time domain value is within the normal range is judged from it. This dynamically compresses the data volume of the image matrices at the different times, greatly reducing computation cost and improving time efficiency.

Description

Joint mobility intelligent detection method and system
Technical Field
The invention belongs to the field of data processing, and particularly relates to an intelligent joint mobility detection method and system.
Background
The joint mobility is the angle between the initial position and the final position swept by the moving arm adjacent to a joint during rotation. It is important for studying a user's mobility and can reflect whether changes in the range of motion of a human joint are normal. By identifying and marking key points at a joint (moving point, fixed point and axis), the physical changes of the joint can be converted into electrical signals and the trajectory of a specific point in space can be calculated. An off-body measurement system photographs the motion of the human body with cameras, post-processes the captured video into a time sequence of image matrices to obtain human motion parameters, and finally analyzes the motion trajectory from those parameters. An on-body measurement system places marker points at particular positions on the human body, captures the motion of the markers with cameras placed at various observation points, and calculates the motion posture of the joints from the detected spatial coordinates of the markers. Patent document CN202010283719.6 discloses a human joint mobility detection system and method that detects muscle signals of the muscles related to the joint under test to obtain their muscle activity; however, it places high technical demands on the integrated chip, its computation cost is high, and it is insufficient for detecting from captured images whether the joint mobility is normal.
Disclosure of Invention
The present invention is directed to a method and system for intelligently detecting joint mobility, so as to solve one or more technical problems in the prior art and provide at least one useful choice or creation condition.
The invention provides an intelligent joint mobility detection method and system: a plurality of cameras photograph a user from all directions at a plurality of consecutive times to obtain an image sequence; for each image in the sequence, a keypoint detection algorithm marks the keypoints; a joint movement time domain value is calculated from the image sequence; and whether the user's movement is within the normal range is judged from the joint movement time domain value.
In order to achieve the above object, according to an aspect of the present invention, there is provided a joint-motion-degree intelligent detection method, including the steps of:
s100, shooting a user in all directions by using a plurality of cameras at a plurality of continuous different moments to obtain an image sequence;
s200, for each image in the image sequence, respectively marking key points in each image by using a key point detection algorithm;
s300, calculating to obtain a joint movement time domain value according to the image sequence;
s400, judging, according to the joint movement time domain value, whether it is within a normal range; if not, pushing alarm information to a mobile device or a database.
Further, in S100, the method of photographing the user from all directions with a plurality of cameras at a plurality of consecutive times to obtain an image sequence is as follows: select a plurality of consecutive times; at each time, use three cameras to photograph a front view, a top view and a side view of the user, the image matrices captured by the cameras all having the same size; combine the front, top and side views obtained at each time into a sequence, used as the image sequence. The number of elements in the image sequence equals the number of times, the sequence number of each element equals the sequence number of its time, and each element consists of the image matrices of the three views (front, top and side) taken at the time with the same sequence number.
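The S100 data layout above can be sketched in Python. This is an illustrative sketch, not the patent's implementation: camera capture is simulated with blank matrices, and the names make_view and build_rouseq are invented for illustration.

```python
# Illustrative sketch of the S100 image-sequence layout: one element per time
# step, each element holding three same-size view matrices [fv, sv, tv].

def make_view(n, m, fill=0):
    """Return an n-by-m image matrix (nested lists) with a uniform pixel value."""
    return [[fill] * m for _ in range(n)]

def build_rouseq(T, n, m):
    """Build the image sequence Rouseq: element t is [fv(t), sv(t), tv(t)]."""
    rouseq = []
    for t in range(1, T + 1):
        fv = make_view(n, m)   # front view at time t (simulated capture)
        sv = make_view(n, m)   # side view at time t (simulated capture)
        tv = make_view(n, m)   # top view at time t (simulated capture)
        rouseq.append([fv, sv, tv])
    return rouseq

rouseq = build_rouseq(T=5, n=4, m=6)
```

With T = 5 times and 4 × 6 matrices, the sequence has 5 elements, each a triple of 4-row, 6-column matrices, mirroring the text's requirement that all view matrices share one size.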
Further, in S200, for each image in the image sequence, keypoints are marked using a keypoint detection algorithm as follows: mark the keypoints in each image in the image sequence, set the pixel value at each keypoint position in the image matrix to 1 and all remaining positions to 0, thereby converting every pixel of each image in the sequence to the value 0 or 1.
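A minimal sketch of the S200 binarization, assuming the keypoint detector has already returned a hypothetical list of (row, col) coordinates (the detector itself is not reproduced here):

```python
# Sketch of S200: pixels at detected keypoint positions become 1, all other
# pixels become 0, so every image matrix is reduced to a 0/1 matrix.

def binarize_keypoints(n, m, keypoints):
    """Return an n-by-m 0/1 matrix with value 1 at each (row, col) keypoint."""
    mat = [[0] * m for _ in range(n)]
    for i, j in keypoints:
        mat[i][j] = 1
    return mat

kp = [(0, 1), (2, 3)]            # assumed detector output for one image
img = binarize_keypoints(3, 4, kp)
```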
Further, in S300, according to the image sequence, the method for calculating the joint movement time domain value includes:
denote the sequence number of each time among the consecutive different times by t and the number of those times by T; accordingly, the sequence number of an element in the image sequence is also t and the number of elements is also T, with t ∈ [1, T];
denote the image sequence by Rouseq, the element with sequence number t in Rouseq by Rouseq(t), the front view in Rouseq(t) by fv(t), the side view by sv(t), and the top view by tv(t); Rouseq(t) = [fv(t), sv(t), tv(t)];
because the image matrixes of the images shot by the cameras are the same in size, the sizes of the image matrixes are unified into n rows and m columns, the serial number of the rows in the image matrixes is i, i belongs to [1, n ], the serial number of the columns in the image matrixes is j, and j belongs to [1, m ];
the element with the row number of i and the column number of j in fv (t) is fv (t) i, j, the element with the row number of i and the column number of j in sv (t) is sv (t) i, j, the element with the row number of i and the column number of j in tv (t) is tv (t) i, j;
define a function Cap(), whose input is an image matrix of uniform size n × m with pixel values 0 or 1 (this size corresponds to the image matrices of S200). Cap() operates as follows: obtain the set of points with pixel value 1 in the input matrix as the set oneset; obtain the number of elements of oneset as size; for the element with sequence number q in oneset, q ∈ [1, size], record the pair formed by its row sequence number and column sequence number as the binary array q[i1, j1], where q[i1] is the value of the row sequence number and q[j1] the value of the column sequence number of that element in the image matrix. Denote the output of Cap() by result, which is computed by the following formula:
[Formula image not reproduced in the source: result is computed from the squared row and column sequence numbers q[i1], q[j1] of the elements of oneset, using the function exp().]
the superscript 2 represents raising to the power of 2, exp() represents the exponential function with the natural constant e as base, and the obtained result is output;
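The coordinate-collection half of Cap() can be sketched as follows. Note that the actual formula for result appears only as an embedded image in the source, so the exp-of-squared-coordinates reduction below is a placeholder of the general shape the text describes (squares of coordinates fed through exp), not the patent's formula.

```python
import math

# Sketch of the Cap() operator: collect the (i1, j1) coordinates of 1-pixels
# (the set oneset), count them (size), then reduce them to a scalar. The
# reduction used here is a PLACEHOLDER for the un-extracted formula.

def cap(matrix):
    """Collect 1-pixel coordinates, then reduce them to a scalar result."""
    oneset = [(i1, j1)
              for i1, row in enumerate(matrix, start=1)
              for j1, v in enumerate(row, start=1) if v == 1]
    size = len(oneset)
    if size == 0:
        return 0.0
    # Placeholder reduction: mean of exp(-(i1^2 + j1^2)) over the keypoints.
    return sum(math.exp(-(i1 ** 2 + j1 ** 2)) for i1, j1 in oneset) / size
```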
apply the function Cap() to the fv(t), sv(t) and tv(t) contained in each element Rouseq(t) of the sequence Rouseq: inputting fv(t) to Cap() yields Cap(fv(t)), inputting sv(t) yields Cap(sv(t)), and inputting tv(t) yields Cap(tv(t));
defining joint motion time domain values of Rouseq (t) as Spec (t), Spec (t) = [ Cap (fv (t)), Cap (sv (t)), Cap (tv (t)) ], further, marking the 1 st element in Spec (t) as Spec (t)1, the 2 nd element in Spec (t) as Spec (t)2, and the 3 rd element in Spec (t) as Spec (t) 3;
the joint motion time domain value sequence is a sequence consisting of joint motion time domain values Spec (t) respectively corresponding to elements Rouseq (t), and is marked as Spec, and the serial numbers of the elements in the Spec are consistent with the serial numbers of the elements in the Rouseq;
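Assembling the sequence Spec from Rouseq can be sketched as below; cap here is an assumed stand-in (a simple count of 1-pixels) for the patent's Cap() operator, so only the structure, one 3-vector Spec(t) per time step, is illustrated.

```python
# Sketch: Spec has one element per time step, the 3-vector
# [Cap(fv(t)), Cap(sv(t)), Cap(tv(t))], index-aligned with Rouseq.

def cap(matrix):
    """Assumed stand-in for the patent's Cap(): the number of 1-pixels."""
    return sum(sum(row) for row in matrix)

def build_spec(rouseq):
    """Build the joint-motion time-domain value sequence Spec from Rouseq."""
    return [[cap(fv), cap(sv), cap(tv)] for fv, sv, tv in rouseq]

rouseq = [
    [[[1, 0], [0, 0]], [[0, 1], [0, 1]], [[1, 1], [1, 0]]],  # t = 1
    [[[0, 0], [0, 1]], [[1, 0], [0, 0]], [[0, 1], [0, 0]]],  # t = 2
]
spec = build_spec(rouseq)
```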
the beneficial effects of calculating the joint movement time domain value are as follows: a conventional joint mobility recognition method generates a large amount of redundant data and has high time complexity; the calculated joint movement time domain value better measures the trend of the keypoint motion trajectory over time, and the data volume of the image matrices at the different times can be compressed according to the trend of the joint movement time domain value sequence, reducing data redundancy and improving speed and accuracy.
Further, in S400, the method of judging, according to the joint movement time domain value, whether it is within the normal range is as follows:
set up a set Errset for collecting samples whose joint movement time domain values are abnormal; its elements are mutually distinct (no duplicates), and its initial value is the empty set;
setting a variable state, wherein the state is a variable used for calculating whether the activity of the user is in a normal range, and setting the initial value of the state to be 0;
the step of judging whether the user activity is in a normal range specifically comprises the following steps:
s601, in a sequence Spec, enabling the initial value of T to be 1 according to the sequence of sequence numbers T from 1 to T;
s602, judging whether the numerical value of t is less than or equal to 1, if so, turning to S606, otherwise, acquiring an element Spec (t) with the sequence number of t in Spec, and turning to S603;
s603, acquiring a former serial number of the serial number t, namely t-1, and acquiring an element Spec (t-1) with the serial number t-1 in the Spec; spec (t)1, Spec (t)2 and Spec (t)3 are obtained from Spec (t); obtaining a 1 st element Spec (t-1)1, a 2 nd element Spec (t-1)2 and a 3 rd element Spec (t-1)3 from Spec (t-1); go to S604;
s604, assigning a state, wherein the assigning method comprises the following steps: calculating the difference between Spec (t)1 minus Spec (t-1)1 as gr1, the difference between Spec (t)2 minus Spec (t-1)2 as gr2, the difference between Spec (t)3 minus Spec (t-1)3 as gr3, the arithmetic mean of Spec (t)1, Spec (t)2 and Spec (t)3 as avg (t), the arithmetic mean of Spec (t-1)1, Spec (t-1)2 and Spec (t-1)3 as avg (t-1), and state1 as the variable used to assign the state, and the current value of state1 is calculated as:
[Formula image not reproduced in the source: state1 is computed from the differences gr1, gr2, gr3 and the averages avg(t) and avg(t-1) defined above.]
assigning the value of state1 to the state; go to S605;
s605, calculate cos(π × state) from the value of state and judge whether cos(π × state) is less than or equal to 0; if so, add the current Spec(t) to Errset (increasing the number of elements in Errset by 1) and go to S606; otherwise go to S606 directly;
s606, judging whether the value of the current T is larger than or equal to the value of T, if so, turning to S607, and if not, increasing the value of the current T by 1 and then turning to S602;
s607, obtain the number of elements in the current set Errset as en and calculate en/T; judge whether en/T is less than or equal to 1/(en+1): if so, the joint movement time domain value is within the normal range, otherwise it is not; store and output the judgment result, and if the value is not within the normal range, push alarm information to the mobile device or database;
preferably, if the joint movement time domain value is not within the normal range, alarm information is pushed to the mobile device or database, indicating that the user's joint movement is out of the normal range; because the joint movement time domain value sensitively reflects the amplitude of joint movement, an abnormal value helps analyze the potential risk to the joint and the degree of the user's joint rehabilitation;
the method for calculating the specific numerical value of the state and judging according to the number of the elements in the current set Errset has the following beneficial effects: the conventional image recognition system based on the large-scale neural network model has high calculation cost and long running time, and in the calculation process of judging whether the movement of a user is in a normal range according to the joint movement time domain value, because each element in the joint movement time domain value sequence corresponds to each moment, the change condition of image data corresponding to each moment can be quickly counted, the probability in the normal range can be quickly judged according to the counting result, and compared with the conventional image recognition system based on the large-scale neural network model, the calculation cost is greatly saved, and the time efficiency is improved.
The calculation involved in the joint activity degree intelligent detection method is subjected to non-dimensionalization processing.
The invention also provides an intelligent joint mobility detection system, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, it implements the steps of the above intelligent joint mobility detection method. The system can run on computing devices such as desktop computers, notebook computers, palmtop computers and cloud data centers; the runnable system may include, but is not limited to, a processor, a memory and a server cluster. Executing the computer program, the processor runs the following units of the system:
the image sequence acquisition unit is used for shooting a user in an all-around manner by using a plurality of cameras at a plurality of continuous different moments to obtain an image sequence, and marking key points in each image by using a key point detection algorithm for each image in the image sequence;
the joint movement time domain value calculating unit is used for calculating to obtain a joint movement time domain value according to the image sequence;
and the joint movement time domain value judging unit is used for judging whether the joint movement time domain value is in a normal range or not according to the joint movement time domain value.
The invention has the following beneficial effects: the invention provides an intelligent joint mobility detection method and system in which a plurality of cameras photograph a user from all directions at a plurality of consecutive times to obtain an image sequence; a keypoint detection algorithm marks the keypoints in each image of the sequence; a joint movement time domain value is calculated from the image sequence; and whether the user's movement is within the normal range is judged from it. This dynamically compresses the data volume of the image matrices at the different times, greatly reducing computation cost and improving time efficiency.
Drawings
The above and other features of the invention will be more apparent from the detailed description of the embodiments shown in the accompanying drawings in which like reference characters designate the same or similar elements, and it will be apparent that the drawings in the following description are merely exemplary of the invention and that other drawings may be derived by those skilled in the art without inventive effort, wherein:
FIG. 1 is a flow chart of a method for intelligently detecting joint mobility;
fig. 2 is a system configuration diagram of an intelligent joint mobility detection system.
Detailed Description
The conception, the specific structure and the technical effects of the present invention will be clearly and completely described in conjunction with the embodiments and the accompanying drawings to fully understand the objects, the schemes and the effects of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including it. Descriptions of "first" and "second" serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
Fig. 1 is a flowchart illustrating an intelligent joint activity detection method according to the present invention, and a method and a system for intelligently detecting joint activity according to an embodiment of the present invention are described below with reference to fig. 1.
The invention provides an intelligent detection method for joint mobility, which specifically comprises the following steps:
s100, shooting a user in all directions by using a plurality of cameras at a plurality of continuous different moments to obtain an image sequence;
s200, for each image in the image sequence, respectively marking key points in each image by using a key point detection algorithm;
s300, calculating to obtain a joint movement time domain value according to the image sequence;
s400, judging whether the joint movement time domain value is in a normal range or not according to the joint movement time domain value; if not, the alarm information is pushed to the mobile device or a database.
Further, in S100, the method of photographing the user from all directions with a plurality of cameras at a plurality of consecutive times to obtain an image sequence is as follows: select a plurality of consecutive times; use three cameras to photograph a front view, a top view and a side view of the user, where the cameras may capture ordinary images or infrared images and all captured image matrices have the same size; at each time, the three cameras photograph the front, top and side views, which may be ordinary or infrared images; combine the views obtained at each time into a sequence, used as the image sequence. The number of elements in the image sequence equals the number of times, the sequence number of each element equals the sequence number of its time, and each element consists of the image matrices of the three views (front, top and side) taken at the time with the same sequence number.
Further, in S200, for each image in the image sequence, keypoints are marked using a keypoint detection algorithm as follows: preferably, the keypoint detection algorithm can use the OpenCV-DNN-based hand keypoint detection method of OpenPose; mark the keypoints in each image in the image sequence, set the pixel value at each keypoint position in the image matrix to 1 and the remaining positions to 0, thereby converting every pixel of each image in the sequence to the value 0 or 1 (see: [1] a study of key technologies of infrared image target recognition [D], Hangzhou Dianzi University; [2] a keypoint-based infrared-image human fall detection method [J], Infrared Technology, 2021, 43(10):5; [3] Cao Z, Simon T, Wei S E, et al. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2017).
Further, in S300, according to the image sequence, the method for calculating the joint movement time domain value includes:
denote the sequence number of each time among the consecutive different times by t and the number of those times by T, with t ∈ [1, T];
denote the image sequence by Rouseq, the element with sequence number t in Rouseq by Rouseq(t), the front view in Rouseq(t) by fv(t), the side view by sv(t), and the top view by tv(t); Rouseq(t) = [fv(t), sv(t), tv(t)];
because the image matrixes of the images shot by the cameras are the same in size, the sizes of the image matrixes are unified into n rows and m columns, the serial number of the rows in the image matrixes is i, i belongs to [1, n ], the serial number of the columns in the image matrixes is j, and j belongs to [1, m ];
the element with the row number of i and the column number of j in fv (t) is fv (t) i, j, the element with the row number of i and the column number of j in sv (t) is sv (t) i, j, the element with the row number of i and the column number of j in tv (t) is tv (t) i, j;
define a function Cap(), whose input is an image matrix of uniform size n × m with pixel values 0 or 1. Cap() operates as follows: obtain the set of points with pixel value 1 in the input matrix as the set oneset; obtain the number of elements of oneset as size; denote the sequence number of an element in oneset by q, q ∈ [1, size]; record the row and column sequence numbers of the element with sequence number q as q[i1, j1], where q[i1] is the value of the row sequence number and q[j1] the value of the column sequence number of that element in the image matrix. Denote the output of Cap() by result, which is computed by the following formula:
[Formula image not reproduced in the source: result is computed from the squared row and column sequence numbers q[i1], q[j1] of the elements of oneset, using the function exp().]
the superscript 2 represents raising to the power of 2, exp() represents the exponential function with the natural constant e as base, and the obtained result is output;
apply the function Cap() to the fv(t), sv(t) and tv(t) contained in each element Rouseq(t) of the sequence Rouseq: inputting fv(t) to Cap() yields Cap(fv(t)), inputting sv(t) yields Cap(sv(t)), and inputting tv(t) yields Cap(tv(t));
defining joint motion time domain values of Rouseq (t) as Spec (t), Spec (t) = [ Cap (fv (t)), Cap (sv (t)), Cap (tv (t)) ], further, marking the 1 st element in Spec (t) as Spec (t)1, the 2 nd element in Spec (t) as Spec (t)2, and the 3 rd element in Spec (t) as Spec (t) 3;
the joint motion time domain value sequence is a sequence formed by joint motion time domain values Spec (t) respectively corresponding to elements Rouseq (t), and is marked as Spec, and the serial numbers of the elements in the Spec are consistent with the serial numbers of the elements in the Rouseq.
Further, in S400, the method of judging, according to the joint movement time domain value, whether the user's movement is within the normal range is as follows:
set up a set Errset for collecting samples whose joint movement time domain values are abnormal; its elements are mutually distinct (no duplicates), and its initial value is the empty set;
setting a variable state, wherein the state is a variable used for calculating whether the activity of the user is in a normal range, and setting the initial value of the state to be 0;
the step of judging whether the activity of the user is in a normal range specifically comprises the following steps:
s601, in sequence Spec, making the initial value of T as 1 according to the sequence of sequence number T from 1 to T;
s602, judging whether the numerical value of t is less than or equal to 1, if so, turning to S606, otherwise, acquiring an element Spec (t) with the sequence number of t in Spec, and turning to S603;
s603, acquiring a former serial number of the serial number t, namely t-1, and acquiring an element Spec (t-1) with the serial number t-1 in the Spec; spec (t)1, Spec (t)2 and Spec (t)3 are obtained from Spec (t); obtaining a 1 st element Spec (t-1)1, a 2 nd element Spec (t-1)2 and a 3 rd element Spec (t-1)3 from Spec (t-1); go to S604;
s604, assigning a state, wherein the assigning method comprises the following steps: calculating the difference value of Spec (t)1 minus Spec (t-1)1 as gr1, the difference value of Spec (t)2 minus Spec (t-1)2 as gr2, the difference value of Spec (t)3 minus Spec (t-1)3 as gr3, the arithmetic mean value of Spec (t)1, Spec (t)2 and Spec (t)3 as avg (t), the arithmetic mean value of Spec (t-1)1, Spec (t-1)2 and Spec (t-1)3 as avg (t-1), state1 as a variable for assigning a state, and the current value of state1 is calculated by the formula:
[Formula image not reproduced in the source: state1 is computed from the differences gr1, gr2, gr3 and the averages avg(t) and avg(t-1) defined above.]
the value of state1 is assigned to state; go to S605;
S605, calculate cos(π × state) from the value of state, judge whether the value of cos(π × state) is less than or equal to 0, if so, add the current Spec(t) to Errset, and go to S606;
S606, judge whether the current value of t is greater than or equal to T; if so, go to S607; if not, increase the current value of t by 1 and go to S602;
S607, obtain the number of elements in the current set Errset as en and calculate en/T; judge whether en/T is less than or equal to 1/(en+1): if so, the user's activity is in the normal range, otherwise it is not; store and output the judgment result of whether the user's activity is in the normal range.
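Steps S601–S607 above can be sketched in Python as follows. The patent's state1 formula is reproduced only as an image, so `state1_formula` below is a hypothetical stand-in built from the quantities the text names (gr1–gr3, avg(t), avg(t-1)); everything else follows the stated control flow.

```python
import math

def state1_formula(gr, avg_t, avg_prev):
    """Hypothetical stand-in for the patent's state1 formula (shown only
    as an image): summed differences gr1+gr2+gr3 normalised by
    avg(t) + avg(t-1)."""
    denom = avg_t + avg_prev
    return sum(gr) / denom if denom else 0.0

def activity_in_normal_range(spec):
    """Walk the sequence Spec of triples [Spec(t)1, Spec(t)2, Spec(t)3],
    collect abnormal samples into Errset, and decide normality via the
    en/T <= 1/(en+1) rule of S607."""
    T = len(spec)
    errset = []
    for t in range(1, T + 1):                           # S601: t runs 1..T
        if t <= 1:                                      # S602: no predecessor
            continue
        cur, prev = spec[t - 1], spec[t - 2]            # Spec(t), Spec(t-1)
        gr = [c - p for c, p in zip(cur, prev)]         # gr1, gr2, gr3
        avg_t, avg_prev = sum(cur) / 3, sum(prev) / 3   # avg(t), avg(t-1)
        state = state1_formula(gr, avg_t, avg_prev)     # S604
        if math.cos(math.pi * state) <= 0:              # S605: abnormal sample
            errset.append(cur)
    en = len(errset)                                    # S607
    return en / T <= 1 / (en + 1)
```

With the stand-in formula, a single moderate jump is tolerated while repeated jumps push en/T past the 1/(en+1) threshold.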
The intelligent joint mobility detection system can run on computing devices such as a desktop computer, a notebook computer, a palmtop computer or a cloud data center; the running system may include, but is not limited to, a processor, a memory and a server cluster.
As shown in fig. 2, the intelligent joint mobility detection system according to an embodiment of the present invention includes a processor, a memory, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor implements the steps of the above embodiment of the intelligent joint mobility detection method; the processor executes the computer program to run in the following system units:
the image sequence acquisition unit, used for shooting the user omnidirectionally with a plurality of cameras at a plurality of consecutive different moments to obtain an image sequence, and for marking the key points in each image of the image sequence using a key point detection algorithm;
the joint movement time domain value calculating unit, used for calculating a joint movement time domain value from the image sequence;
and the joint movement time domain value judging unit, used for judging whether the joint movement time domain value is in the normal range.
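The three units above can be wired into a minimal pipeline sketch. All function names are illustrative, and both the per-frame aggregate (standing in for Cap(), whose formula the patent shows only as an image) and the judging rule are hypothetical stand-ins:

```python
import random

def acquire_image_sequence(num_frames=4, size=8, seed=0):
    """Image sequence acquisition unit (illustrative stand-in): each frame
    is a binary matrix with 1 at keypoint positions and 0 elsewhere,
    standing in for multi-camera capture plus keypoint marking."""
    random.seed(seed)
    return [[[1 if random.random() < 0.1 else 0 for _ in range(size)]
             for _ in range(size)] for _ in range(num_frames)]

def compute_time_domain_values(frames):
    """Joint movement time domain value calculating unit. The keypoint
    count per frame is a hypothetical aggregate in place of Cap()."""
    return [sum(sum(row) for row in frame) for frame in frames]

def judge_time_domain_values(values, tolerance=0.5):
    """Joint movement time domain value judging unit (illustrative rule):
    normal when no frame-to-frame jump exceeds `tolerance` x mean."""
    mean = sum(values) / len(values)
    return all(abs(b - a) <= tolerance * mean
               for a, b in zip(values, values[1:]))

frames = acquire_image_sequence()
values = compute_time_domain_values(frames)
print(judge_time_domain_values(values))
```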
The intelligent joint mobility detection system can run on computing devices such as desktop computers, notebook computers, palmtop computers and cloud data centers. The system includes, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that this example merely illustrates the intelligent joint mobility detection method and system and does not limit them; the system may include more or fewer components than described, combine certain components, or use different components; for example, it may further include input-output devices, network access devices, a bus, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the intelligent joint mobility detection system and connects all parts of the whole system through various interfaces and lines.
The memory may be used to store the computer program and/or modules; the processor implements the various functions of the intelligent joint mobility detection method and system by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The invention provides an intelligent joint mobility detection method and system. A plurality of cameras shoot the user omnidirectionally at a plurality of consecutive different moments to obtain an image sequence; a key point detection algorithm marks the key points in each image of the sequence; a joint movement time domain value is calculated from the image sequence; and whether the user's activity is in the normal range is judged from that value. This dynamically compresses the data volume of the image matrices at multiple different moments, greatly saving computation cost and improving time efficiency.
Although the present invention has been described in considerable detail and with reference to certain illustrated embodiments, it is not intended to be limited to any such details or embodiments or any particular embodiment, so as to effectively encompass the intended scope of the invention. Furthermore, the foregoing describes the invention in terms of embodiments foreseen by the inventor for which an enabling description was available, notwithstanding that insubstantial modifications of the invention, not presently foreseen, may nonetheless represent equivalent modifications thereto.

Claims (4)

1. An intelligent detection method for joint mobility, which is characterized by comprising the following steps:
s100, shooting a user in all directions by using a plurality of cameras at a plurality of continuous different moments to obtain an image sequence;
s200, for each image in the image sequence, respectively marking key points in each image by using a key point detection algorithm;
s300, calculating to obtain a joint movement time domain value according to the image sequence;
s400, judging whether the joint movement time domain value is in a normal range or not according to the joint movement time domain value; if not, the alarm information is pushed to the mobile equipment or a database;
in S200, for each image in the image sequence, using a keypoint detection algorithm, a method for respectively marking keypoints in each image is as follows: respectively marking key points in each image in the image sequence by using a key point detection algorithm, setting the pixel value of the position of the key point in the image matrix to be 1, and setting the rest positions to be 0, so as to convert the pixel value of each image in the image sequence into a value of 0 or 1;
in S300, the method for calculating the joint motion time domain value according to the image sequence includes:
recording the sequence number of each moment among the consecutive different moments as t, recording the number of moments as T, with t ∈ [1, T];
noting the image sequence as the sequence Rouseq, the element with sequence number t in Rouseq as Rouseq(t), the front view in Rouseq(t) as fv(t), the side view as sv(t), and the top view as tv(t), so that Rouseq(t) = [fv(t), sv(t), tv(t)];
the element with row number i and column number j in fv(t) is fv(t)i,j, the element with row number i and column number j in sv(t) is sv(t)i,j, and the element with row number i and column number j in tv(t) is tv(t)i,j;
defining a function Cap() whose input is an image matrix and which operates as follows: obtain the set of points with pixel value 1 in the input image matrix as the set oneset, and the number of elements in oneset as size; for the element with sequence number q in oneset, q ∈ [1, size], record the binary array formed by the value of its row number i1 and the value of its column number j1 as q[i1, j1], where q[i1] denotes the row number in the image matrix of the element with sequence number q in oneset and q[j1] denotes its column number; the output of Cap() is result, and the calculation formula of result is:
[formula omitted: rendered only as image DEST_PATH_IMAGE002 in the original; it defines result from the coordinate pairs q[i1, j1] of the elements of oneset]
outputting the obtained result;
applying the function Cap() to the fv(t), sv(t) and tv(t) contained in each element Rouseq(t) of the sequence Rouseq: inputting fv(t) into Cap() yields Cap(fv(t)), inputting sv(t) yields Cap(sv(t)), and inputting tv(t) yields Cap(tv(t));
defining the joint movement time domain value of Rouseq(t) as Spec(t), Spec(t) = [Cap(fv(t)), Cap(sv(t)), Cap(tv(t))]; further, the 1st element in Spec(t) is denoted Spec(t)1, the 2nd element Spec(t)2, and the 3rd element Spec(t)3;
the joint movement time domain value sequence, denoted Spec, is the sequence consisting of the joint movement time domain values Spec(t) corresponding to the elements Rouseq(t); the sequence numbers of the elements in Spec coincide with those in Rouseq;
in S400, the method of judging whether the joint movement time domain value is in the normal range according to the joint movement time domain value comprises:
setting a set Errset for collecting samples whose joint movement time domain values are abnormal; Errset is a set of mutually distinct elements, and its initial value is the empty set;
setting a variable state, wherein the state is a variable used for calculating whether the activity of the user is in a normal range, and setting the initial value of the state to be 0;
the step of judging whether the activity of the user is in a normal range specifically comprises the following steps:
S601, traverse the sequence Spec in order of sequence number t from 1 to T, with the initial value of t set to 1;
S602, judge whether the value of t is less than or equal to 1; if so, go to S606; otherwise, obtain the element Spec(t) with sequence number t in Spec and go to S603;
S603, obtain the sequence number preceding t, namely t-1, and obtain the element Spec(t-1) with sequence number t-1 in Spec; obtain Spec(t)1, Spec(t)2 and Spec(t)3 from Spec(t); obtain the 1st element Spec(t-1)1, the 2nd element Spec(t-1)2 and the 3rd element Spec(t-1)3 from Spec(t-1); go to S604;
S604, assign a value to state as follows: record the difference Spec(t)1 minus Spec(t-1)1 as gr1, the difference Spec(t)2 minus Spec(t-1)2 as gr2, and the difference Spec(t)3 minus Spec(t-1)3 as gr3; record the arithmetic mean of Spec(t)1, Spec(t)2 and Spec(t)3 as avg(t), and the arithmetic mean of Spec(t-1)1, Spec(t-1)2 and Spec(t-1)3 as avg(t-1); state1 is the variable used to assign state, and the current value of state1 is calculated by the formula:
[formula omitted: rendered only as image DEST_PATH_IMAGE004 in the original; it defines state1 in terms of gr1, gr2, gr3, avg(t) and avg(t-1)]
assigning the value of state1 to state; go to S605;
S605, calculate cos(π × state) from the value of state, judge whether the value of cos(π × state) is less than or equal to 0, if so, add the current Spec(t) to Errset, and go to S606;
S606, judge whether the current value of t is greater than or equal to T; if so, go to S607; if not, increase the current value of t by 1 and go to S602;
S607, obtain the number of elements in the current set Errset as en and calculate en/T; judge whether en/T is less than or equal to 1/(en+1): if so, the joint movement time domain value is in the normal range, otherwise it is not; store and output the judgment result of whether the joint movement time domain value is in the normal range, and then push the alarm information to the mobile device or the database.
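The per-image aggregation of claim 1 — collect the keypoint coordinates into oneset, then reduce them to one number — can be sketched as follows. The aggregation formula itself appears only as an image in the source, so `cap` below substitutes a hypothetical mean pairwise distance; `spec_of` then assembles Spec(t) exactly as the claim defines it:

```python
import math

def cap(matrix):
    """Sketch of the claim's Cap(): collect coordinates of pixels with
    value 1 (the set oneset), then aggregate them into a single number.
    The actual formula is reproduced only as an image in the patent; the
    mean pairwise Euclidean distance between keypoints is used here as a
    hypothetical stand-in."""
    oneset = [(i, j) for i, row in enumerate(matrix)
              for j, v in enumerate(row) if v == 1]
    size = len(oneset)
    if size < 2:
        return 0.0
    total = sum(math.dist(p, q) for k, p in enumerate(oneset)
                for q in oneset[k + 1:])
    return total / (size * (size - 1) / 2)   # average over all pairs

def spec_of(rouseq_t):
    """Spec(t) = [Cap(fv(t)), Cap(sv(t)), Cap(tv(t))] for one element
    Rouseq(t) = [fv(t), sv(t), tv(t)]."""
    fv, sv, tv = rouseq_t
    return [cap(fv), cap(sv), cap(tv)]
```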
2. The intelligent joint mobility detection method according to claim 1, wherein in S100 the method of shooting the user omnidirectionally with a plurality of cameras at a plurality of consecutive different moments to obtain an image sequence comprises: selecting a plurality of consecutive different moments and using three cameras to respectively shoot a front view, a top view and a side view of the user, the obtained image matrices having the same size; at each moment, shooting the front view, top view and side view of the user with the three cameras, and combining the front, top and side views obtained at each moment into a sequence as the image sequence; the number of elements in the image sequence equals the number of moments, the sequence numbers of the elements equal the sequence numbers of the moments, and each element of the image sequence consists of the image matrices of the three views — front, top and side — at the moment with the same sequence number.
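Claim 2's assembly of per-moment front, side and top views into the sequence Rouseq can be sketched as below; the function name and the size checks are illustrative, not from the patent:

```python
def build_image_sequence(front_views, side_views, top_views):
    """Combine the front, side and top views captured at each moment into
    the image sequence Rouseq, one element Rouseq(t) = [fv(t), sv(t), tv(t)]
    per moment. All image matrices are assumed to share the same size."""
    assert len(front_views) == len(side_views) == len(top_views), \
        "one view of each kind per moment"
    shape = (len(front_views[0]), len(front_views[0][0]))
    for views in (front_views, side_views, top_views):
        for m in views:
            assert (len(m), len(m[0])) == shape, "matrices must share one size"
    # element t of the result corresponds to moment t, as the claim requires
    return [[fv, sv, tv]
            for fv, sv, tv in zip(front_views, side_views, top_views)]
```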
3. An intelligent joint mobility detection system, comprising: a processor, a memory, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the intelligent joint mobility detection method according to any one of claims 1 to 2, and the system runs on a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud data center.
4. An intelligent joint activity detection system according to claim 3, wherein the processor executes the computer program to run in the following system units:
the image sequence acquisition unit, used for shooting the user omnidirectionally with a plurality of cameras at a plurality of consecutive different moments to obtain an image sequence, and for marking the key points in each image of the image sequence using a key point detection algorithm;
the joint movement time domain value calculating unit, used for calculating a joint movement time domain value from the image sequence;
and the joint movement time domain value judging unit, used for judging whether the joint movement time domain value is in the normal range.
CN202210762975.2A 2022-07-01 2022-07-01 Joint mobility intelligent detection method and system Active CN114795192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210762975.2A CN114795192B (en) 2022-07-01 2022-07-01 Joint mobility intelligent detection method and system

Publications (2)

Publication Number Publication Date
CN114795192A CN114795192A (en) 2022-07-29
CN114795192B true CN114795192B (en) 2022-09-16

Family

ID=82522390

Country Status (1)

Country Link
CN (1) CN114795192B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115316984B (en) * 2022-10-13 2022-12-27 佛山科学技术学院 Method and system for positioning axis position for measuring hand joint mobility

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102824176A (en) * 2012-09-24 2012-12-19 南通大学 Upper limb joint movement degree measuring method based on Kinect sensor
CN104035557A (en) * 2014-05-22 2014-09-10 华南理工大学 Kinect action identification method based on joint activeness
CN104887238A (en) * 2015-06-10 2015-09-09 上海大学 Hand rehabilitation training evaluation system and method based on motion capture
CN107320108A (en) * 2017-08-14 2017-11-07 佛山科学技术学院 A kind of measurement of range of motion method
CN108154912A (en) * 2017-12-15 2018-06-12 江苏承康医用设备有限公司 One kind removes compensatory safe range of motion evaluation training system applied to rehabilitation medical
CN111481208A (en) * 2020-04-01 2020-08-04 中南大学湘雅医院 Auxiliary system, method and storage medium applied to joint rehabilitation
CN111568428A (en) * 2020-04-13 2020-08-25 汕头大学医学院 Human joint mobility detection system and detection method
CN111938658A (en) * 2020-08-10 2020-11-17 陈雪丽 Joint mobility monitoring system and method for hand, wrist and forearm
CN112370048A (en) * 2020-11-10 2021-02-19 南京紫金体育产业股份有限公司 Movement posture injury prevention method and system based on joint key points and storage medium
WO2021060040A1 (en) * 2019-09-27 2021-04-01 国立研究開発法人理化学研究所 Assessment device, assessment method, program, and information recording medium
CN113368487A (en) * 2021-06-10 2021-09-10 福州大学 OpenPose-based 3D private fitness system and working method thereof
CN113647939A (en) * 2021-08-26 2021-11-16 复旦大学 Artificial intelligence rehabilitation evaluation and training system for spinal degenerative diseases
CN113662533A (en) * 2021-07-15 2021-11-19 华中科技大学 Joint rehabilitation movement monitoring and management system and use method
CN113780253A (en) * 2021-11-12 2021-12-10 佛山科学技术学院 Human body joint motion key point identification method and system
CN215128651U (en) * 2021-03-22 2021-12-14 嘉兴市第二医院 Multifunctional alarm for measuring joint mobility
CN114387678A (en) * 2022-01-11 2022-04-22 凌云美嘉(西安)智能科技有限公司 Method and apparatus for evaluating language readability using non-verbal body symbols
CN114663463A (en) * 2022-04-07 2022-06-24 上海电气集团股份有限公司 Method, system, device, electronic device and storage medium for measuring joint mobility

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A keypoint-based infrared-image human fall detection method; Xu Shiwen et al.; Infrared Technology; 2021-10-31; Vol. 43, No. 10; pp. 1003-1007 *

Also Published As

Publication number Publication date
CN114795192A (en) 2022-07-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant