CN116309699B - Method, device and equipment for determining associated reaction degree of target object - Google Patents


Info

Publication number
CN116309699B
Authority
CN
China
Prior art keywords: joint, detected, target, determining, target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310050631.3A
Other languages
Chinese (zh)
Other versions
CN116309699A
Inventor
王晨
彭亮
陈婧瑶
侯增广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202310050631.3A
Publication of CN116309699A
Application granted
Publication of CN116309699B
Legal status: Active
Anticipated expiration


Classifications

    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/162: Testing reaction times
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/30196: Human being; Person


Abstract

The application provides a method, a device and equipment for determining the associated reaction degree of a target object, relating to the technical field of image processing and applied to a terminal device. The method includes: receiving a target video sent by a client, wherein the target video is obtained by the client shooting the target object while the target object performs a target action; acquiring a plurality of corresponding target images from the target video, the plurality of target images being arranged in chronological order within the target video; performing feature analysis processing on the plurality of target images to obtain motion features of the target object; and determining the associated reaction degree of the target object according to the motion features. With this method, the motion of the target object does not need to be observed manually: the client shoots the target video, the terminal device performs a series of processing on the video, and the associated reaction degree of the target object is finally determined. The efficiency is high and the dependence on observers is greatly reduced.

Description

Method, device and equipment for determining associated reaction degree of target object
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for determining the associated reaction degree of a target object.
Background
The associated reaction degree of a user is a manifestation of the user's motor ability. Determining the associated reaction degree of a user is of great significance for assessing the user's current state and providing a reference for the user's subsequent behaviour.
Currently, the associated reaction degree of a user is mainly determined by having the user perform a series of preset actions and having an observer watch the user carry out those actions. This approach relies too heavily on the experience of the observer, and the resulting determination of the user's associated reaction degree is not accurate.
Disclosure of Invention
The application provides a method, a device and equipment for determining the associated reaction degree of a target object, which are used for solving the problem in the prior art that determining the associated reaction degree of a target object by manually observing the target object performing preset actions is inaccurate.
In a first aspect, the present application provides a method for determining the associated reaction degree of a target object, applied to a terminal device, the method comprising:
receiving a target video sent by a client, wherein the target video is obtained by the client shooting the target object while the target object performs a target action;
Acquiring a plurality of corresponding target images according to the target video, wherein the plurality of target images are arranged according to the time sequence in the target video;
performing feature analysis processing on the multiple target images to obtain motion features of the target object;
and determining the associated reaction degree of the target object according to the motion characteristics.
In one possible implementation manner, the performing feature analysis processing on the multiple target images to obtain motion features of the target object includes:
determining a plurality of joint points to be detected and a plurality of areas to be detected of the target object;
performing joint point identification processing on the multiple target images to obtain joint position features of the target object, wherein the joint position features comprise positions of the multiple joint points to be detected on the multiple target images;
and acquiring joint angle characteristics of the target object according to the joint position characteristics and the region to be detected, wherein the motion characteristics comprise the joint position characteristics and the joint angle characteristics.
In a possible implementation manner, the acquiring the joint angle characteristic of the target object according to the joint position characteristic and the to-be-detected area includes:
Determining the to-be-detected joint points included in each to-be-detected area;
determining a joint included angle corresponding to each to-be-detected area according to-be-detected joint points included in each to-be-detected area;
and determining the joint angle characteristics according to the positions of the plurality of joint points to be detected on the plurality of target images and the joint included angles corresponding to the areas to be detected.
In a possible embodiment, the determining the associated reaction degree of the target object according to the motion feature includes:
acquiring a first feature matrix corresponding to the target object according to the joint position features and the joint angle features;
acquiring motion smoothness, joint reaction score and coupling joint information of the target object according to the first feature matrix;
and determining the joint reaction degree according to the motion smoothness, the joint reaction score and the coupling joint information.
In one possible implementation manner, the obtaining, according to the first feature matrix, the motion smoothness, the joint reaction score and the coupling joint information of the target object includes:
according to the first feature matrix, smoothness of each joint point to be detected, smoothness of each region to be detected, feature distance of each joint point to be detected and feature distance of each region to be detected are obtained;
Determining the motion smoothness according to the smoothness of each joint point to be detected and the smoothness of each region to be detected;
determining the joint reaction score according to the characteristic distance of each joint point to be detected and the characteristic distance of each region to be detected;
coupling the first feature matrix to obtain a coupled second feature matrix;
and determining the coupling association information according to the second feature matrix.
In one possible implementation manner, the determining the joint reaction score according to the characteristic distance of each joint point to be detected and the characteristic distance of each region to be detected includes:
determining the motion energy value of each to-be-detected joint according to the characteristic distance of each to-be-detected joint and the characteristic distance of each to-be-detected area;
determining, according to the motion energy values, an energy value of the desired motion joints and an energy value of the associated motion joints when the target object performs the target action;
and determining the joint reaction score according to the energy value of the desired motion joints and the energy value of the associated motion joints.
In a possible implementation manner, the determining the coupling association information according to the second feature matrix includes:
Acquiring time sequence feature vectors of all the nodes to be detected according to the second feature matrix;
acquiring a weight matrix of each node to be detected according to a plurality of sample videos;
and determining the coupling association information according to the time sequence feature vector of each node to be detected and the weight matrix.
In a second aspect, the present application provides a device for determining a degree of associative reaction of a target object, applied to a terminal device, where the device includes:
the receiving module is used for receiving a target video sent by a client, wherein the target video is obtained by the client shooting the target object while the target object performs a target action;
the acquisition module is used for acquiring a plurality of corresponding target images according to the target video, and the plurality of target images are arranged according to the time sequence in the target video;
the processing module is used for carrying out feature analysis processing on the multiple target images to obtain the motion features of the target object;
and the determining module is used for determining the associated reaction degree of the target object according to the motion characteristics.
In a possible implementation manner, the processing module is specifically configured to:
Determining a plurality of joint points to be detected and a plurality of areas to be detected of the target object;
performing joint point identification processing on the multiple target images to obtain joint position features of the target object, wherein the joint position features comprise positions of the multiple joint points to be detected on the multiple target images;
and acquiring joint angle characteristics of the target object according to the joint position characteristics and the region to be detected, wherein the motion characteristics comprise the joint position characteristics and the joint angle characteristics.
In a possible implementation manner, the processing module is specifically configured to:
determining the to-be-detected joint points included in each to-be-detected area;
determining a joint included angle corresponding to each to-be-detected area according to-be-detected joint points included in each to-be-detected area;
and determining the joint angle characteristics according to the positions of the plurality of joint points to be detected on the plurality of target images and the joint included angles corresponding to the areas to be detected.
In one possible implementation manner, the determining module is specifically configured to:
acquiring a first feature matrix corresponding to the target object according to the joint position features and the joint angle features;
Acquiring motion smoothness, joint reaction score and coupling joint information of the target object according to the first feature matrix;
and determining the joint reaction degree according to the motion smoothness, the joint reaction score and the coupling joint information.
In one possible implementation manner, the determining module is specifically configured to:
according to the first feature matrix, smoothness of each joint point to be detected, smoothness of each region to be detected, feature distance of each joint point to be detected and feature distance of each region to be detected are obtained;
determining the motion smoothness according to the smoothness of each joint point to be detected and the smoothness of each region to be detected;
determining the joint reaction score according to the characteristic distance of each joint point to be detected and the characteristic distance of each region to be detected;
coupling the first feature matrix to obtain a coupled second feature matrix;
and determining the coupling association information according to the second feature matrix.
In one possible implementation manner, the determining module is specifically configured to:
determining the motion energy value of each to-be-detected joint according to the characteristic distance of each to-be-detected joint and the characteristic distance of each to-be-detected area;
determining, according to the motion energy values, an energy value of the desired motion joints and an energy value of the associated motion joints when the target object performs the target action;
determining the joint reaction score according to the energy value of the desired motion joints and the energy value of the associated motion joints.
In one possible implementation manner, the determining module is specifically configured to:
acquiring time sequence feature vectors of all the nodes to be detected according to the second feature matrix;
acquiring a weight matrix of each node to be detected according to a plurality of sample videos;
and determining the coupling association information according to the time sequence feature vector of each node to be detected and the weight matrix.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method for determining the extent of associative reaction of a target object according to any one of the first aspects when executing the program.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of determining a degree of associative reaction of a target object according to any one of the first aspects.
The method, device and equipment for determining the associated reaction degree of a target object provided by the application are applied to a terminal device. The terminal device first receives a target video sent by a client, the target video being obtained by the client shooting the target object while the target object performs a target action; the terminal device then acquires a plurality of corresponding target images from the target video, the plurality of target images being arranged in chronological order within the target video; after acquiring the plurality of target images, the terminal device performs feature analysis processing on them to obtain the motion features of the target object, and finally determines the associated reaction degree of the target object according to the motion features. With the scheme provided by the embodiments of the application, the motion of the target object does not need to be observed manually: the client shoots the target video, the terminal device performs a series of processing on the video, and the associated reaction degree of the target object is finally determined. The efficiency is higher and the dependence on observers is greatly reduced.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a method for determining a degree of associative reaction of a target object according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of obtaining motion characteristics of a target object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an articulation point to be detected and an area to be detected according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of determining the associated reaction degree of a target object according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of determining associated reaction degree index according to an embodiment of the present application;
FIG. 7 is a schematic illustration of a desired motion joint and associated motion joint provided in accordance with an embodiment of the present application;
FIG. 8 is a schematic diagram of a self-attention coupling algorithm provided by an embodiment of the present application;
FIG. 9 is a weight chart of the relationship between nodes to be detected according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a device for determining the degree of associated reactions of a target object according to an embodiment of the present application;
fig. 11 is a schematic entity structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The associated reaction degree of a user is a manifestation of the user's motor ability. Determining the associated reaction degree of a user is of great significance for assessing the user's current state and providing a reference for the user's subsequent behaviour.
Currently, the associated reaction degree of a user is mainly determined by having the user perform a series of preset actions and having an observer watch the user carry out those actions. This approach relies too heavily on the experience of the observer, and the resulting determination of the user's associated reaction degree is not accurate.
Based on the above, the application provides a method for determining the joint reaction degree of the target object, which can determine the joint reaction degree of the target object without manual observation.
The application scenario of the present application will be described first with reference to fig. 1.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application, as shown in fig. 1, including a client 11 and a terminal device 12, where the client 11 and the terminal device 12 are connected by a wired or wireless connection.
The client 11 may be configured to photograph a motion process of a user to obtain a corresponding video, and then transmit the photographed video to the terminal device 12. After acquiring the video, the terminal device 12 performs analysis based on the video, thereby determining the extent of the associative reaction of the target object.
Video data are collected by the client 11 and uploaded to the terminal device 12 through the cloud. In an embodiment of the application, the evaluation program interface and framework may be built with a web developer toolkit using wxml and wxss files, and the client 11 guides the target object, through video and voice prompts, to complete personal-information initialization and joint point registration.
A method according to an exemplary embodiment of the present application is described below with reference to fig. 2 in conjunction with the application scenario of fig. 1. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited in any way. Rather, embodiments of the application may be applied to any scenario where applicable.
Fig. 2 is a flow chart of a method for determining a joint reaction degree of a target object according to an embodiment of the present application, where the method is applied to a terminal device, as shown in fig. 2, and the method may include:
s21, receiving a target video sent by a client, wherein the target video is obtained by shooting a process of executing a target action on a target object by the client.
The client and the terminal equipment in the embodiment of the application are connected through wires or wirelessly, and the client is a client corresponding to the target object. In the process of executing the target action, the client can shoot the target object to obtain a target video, and then the target video is sent to the terminal equipment.
The target video is used to subsequently determine the associated reaction degree of the target object, so that the state of the target object can be evaluated and a reference provided for the target object's subsequent behaviour. There are many possibilities for the target object. For example, the target object may be a patient recovering from a related disease; by determining the patient's associated reaction degree, it can be judged whether the recovery is going well, which provides a reference for the patient's subsequent exercise and training. Taking a stroke patient as an example, the movement pattern may change during rehabilitation, for instance with pronounced associated reactions of the limbs. In the early stage of rehabilitation, associated reactions of the limbs are a positive sign of recovery; as rehabilitation progresses, they become a manifestation of movement disorder. Therefore, timely and accurate evaluation of associated reactions allows clinicians to give patients more precise rehabilitation guidance and is of great significance for formulating rehabilitation training programmes.
For example, the target object can be an athlete, and whether the target object is in a peak movement state or not is judged by determining the associated reaction degree of the target object, so that a certain reference value can be provided for the time and frequency of the target object in subsequent competition. For example, the target object may be a normal user, and the current state of the target object can be determined by determining the associated reaction degree of the target object, so that the reference value of whether exercise needs to be enhanced is provided for the normal user.
The target action is related to the extent of associative reaction of the target object to be determined. For example, if it is desired to determine the degree of associative reaction of the upper limb of the target subject, the target motion is typically an upper limb-related motion; if it is desired to determine the extent of the associative reaction of the lower limb of the target subject, the target motion is typically a lower limb related motion, and so on.
Table 1 below illustrates common target actions:
TABLE 1
As shown in table 1, the target actions may include upper limb actions, which may include actions 1 through 5 illustrated in table 1, and/or lower limb actions, which may include actions 6 and 7 illustrated in table 1.
The actions illustrated in table 1 are only one example of the target actions, and do not limit the target actions. According to the difference of the target objects and the difference of the associated reaction degree to be determined, corresponding target actions can be set.
S22, acquiring a plurality of corresponding target images according to the target video, wherein the plurality of target images are arranged according to the time sequence in the target video.
After receiving the target video sent by the client, the terminal equipment can decompose the target video to obtain a plurality of target images. It will be appreciated that the video is made up of a plurality of images, so that for a target video, a plurality of target images corresponding to the target video can be obtained, and the plurality of target images make up the target video.
The plurality of target images are provided with respective corresponding time information, the time information of the target images is determined by the positions of the target images in the target video, and the plurality of target images are arranged according to the time sequence in the target video.
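The decomposition of the target video into time-ordered target images can be sketched as follows. This is a minimal illustration only; the patent does not name any particular video library, so the use of OpenCV and the function name below are assumptions.

```python
# Minimal sketch (assumption: OpenCV is used; the patent does not specify a library).
import cv2

def extract_frames(video_path):
    """Decompose a target video into (timestamp_ms, frame) pairs in chronological order."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Position of the current frame in the video, in milliseconds.
        timestamp_ms = capture.get(cv2.CAP_PROP_POS_MSEC)
        frames.append((timestamp_ms, frame))
    capture.release()
    return frames  # frames are read sequentially, so they are already time-ordered
```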
S23, performing feature analysis processing on the multiple target images to obtain the motion features of the target object.
For any target image, joint point identification processing can be performed on the target image, so that the joint point of the target object in the target image can be identified. For a plurality of target images, the positions of the joints of the target object in each target image can be obtained according to the processing mode.
Because the plurality of target images are the images that make up the target video, each target image has its own timing. By combining the time information of each target image with the positions of the joint points of the target object identified in each target image, the motion trajectory of each joint point can be obtained, and the motion trajectories of all the joint points together constitute the motion features of the target object.
S24, determining the associated reaction degree of the target object according to the motion characteristics.
After the motion characteristics of the target object are obtained, the associated reaction degree of the target object can be determined according to the motion characteristics of the target object. For example, for the target action, a direction range and an amplitude range of movement of each joint point, etc. may be set in an ideal case, and then the direction range and the amplitude range of movement of each joint point when the target object performs the target action are determined according to the movement characteristics of the target object, thereby determining the degree of conjoint reaction of the target object.
The method for determining the associated reaction degree of a target object is applied to a terminal device. The terminal device first receives a target video sent by a client, the target video being obtained by the client shooting the target object while the target object performs a target action; the terminal device then acquires a plurality of corresponding target images from the target video, the plurality of target images being arranged in chronological order within the target video; after acquiring the plurality of target images, the terminal device performs feature analysis processing on them to obtain the motion features of the target object, and finally determines the associated reaction degree of the target object according to the motion features. With the scheme provided by the embodiment of the application, the motion of the target object does not need to be observed manually: the client shoots the target video, the terminal device performs a series of processing on the video, and the associated reaction degree of the target object is finally determined. The efficiency is higher and the dependence on observers is greatly reduced.
On the basis of any one of the above embodiments, the following describes the scheme of the application in detail with reference to the accompanying drawings.
Fig. 3 is a schematic flow chart of obtaining motion characteristics of a target object according to an embodiment of the present application, where, as shown in fig. 3, the flow chart includes:
s31, determining a plurality of joint points to be detected and a plurality of areas to be detected of the target object.
The target object comprises a plurality of nodes, and the plurality of nodes to be detected are subsets of the plurality of nodes included in the target object. Aiming at different target actions, a plurality of corresponding nodes to be detected are correspondingly different.
The region to be detected is a region on the target object, and the region to be detected comprises one or more nodes to be detected. The area to be detected is correspondingly different for different target actions.
Fig. 4 is a schematic diagram of an articulation point to be detected and an area to be detected according to an embodiment of the present application, and as shown in fig. 4, an articulation point to be detected and an area to be detected included on a target object 40 are illustrated. The to-be-detected joint point is illustrated in the left example of fig. 4, and the to-be-detected area is illustrated in the right example of fig. 4, and includes ten to-be-detected areas of left shoulder, right shoulder, left elbow, right elbow, left hand, right hand, left hip knee, right hip knee, left foot and right foot, where each to-be-detected area includes a different number of to-be-detected joint points.
S32, performing joint point identification processing on the plurality of target images to obtain joint position characteristics of the target object, wherein the joint position characteristics comprise positions of a plurality of joint points to be detected on the plurality of target images.
Specifically, for any one target image, joint point identification processing can be performed on the target image, so as to obtain positions of a plurality of joints to be detected of the target object on the target image.
The joint point identification processing is performed on all the target images in the same way, so that the positions of the plurality of joint points to be detected of the target object on each target image can be obtained, giving the joint position features of the target object. The joint point identification may be implemented, for example, by skeleton recognition or by a trained model; any method capable of identifying the joint points of an object may be used, and the details are not repeated here.
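As one concrete possibility, an off-the-shelf human pose estimator can supply the joint positions per target image. The sketch below uses MediaPipe Pose purely as an example; the patent does not prescribe this (or any) model, so the library choice and the landmark indexing are assumptions.

```python
# Illustrative only: MediaPipe Pose is one possible joint point identifier (assumption).
import cv2
import mediapipe as mp

def identify_joint_points(frames, joint_indices):
    """Return, for each frame, the (x, y, z) positions of the joint points to be detected.

    frames: list of BGR images (the target images).
    joint_indices: indices of the landmarks chosen as joint points to be detected.
    """
    positions = []
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        for frame in frames:
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks is None:
                positions.append(None)  # no person detected in this frame
                continue
            landmarks = result.pose_landmarks.landmark
            positions.append([(landmarks[i].x, landmarks[i].y, landmarks[i].z)
                              for i in joint_indices])
    return positions
```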
S33, acquiring joint angle characteristics of the target object according to the joint position characteristics and the region to be detected, wherein the motion characteristics comprise the joint position characteristics and the joint angle characteristics.
After the joint position feature is determined, the joint angle feature of the target object can be obtained based on the joint position feature and the region to be detected, wherein the joint angle feature is mainly used for reflecting the joint angle in the region to be detected.
Since the to-be-detected area includes one or more to-be-detected joints, and the joint angle is formed by the to-be-detected joints, the terminal device first determines the to-be-detected joints included in each to-be-detected area.
After the to-be-detected joint points included in each to-be-detected area are determined, the positions of the to-be-detected joint points included in each to-be-detected area on each target image can be obtained based on the joint position features. Therefore, the terminal device can determine the joint angle corresponding to each to-be-detected area according to the to-be-detected joint points included in each to-be-detected area, and determine the joint angle characteristic according to the positions of the to-be-detected joint points on the target images and the joint angles corresponding to each to-be-detected area.
Specifically, after determining the joint points to be detected included in each area to be detected, the terminal device may determine, for any one area to be detected, a joint angle formed by the joint points to be detected included in the area to be detected. Taking fig. 4 as an example, for the region to be detected in the left shoulder, the included joint point to be detected is the joint point a in fig. 4, and the joint point a is respectively connected with the joint point B and the joint point C, and the angle BAC is the joint angle formed by the joint points to be detected included in the region to be detected in the left shoulder.
After the joint included angle corresponding to the to-be-detected area is determined, the size of the joint included angle corresponding to the to-be-detected area on each target image can be determined according to the positions of the to-be-detected joint points included in the to-be-detected area. For any one to-be-detected area, the method is adopted to process, so that the size of the joint included angle corresponding to each to-be-detected area on each target image is obtained, and the sizes of the joint included angles corresponding to all to-be-detected areas on each target image form joint angle characteristics.
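For a region to be detected such as the left shoulder, the joint included angle (e.g. angle BAC above) can be computed from the identified positions of the three joint points. A minimal sketch, with the vertex convention assumed:

```python
import numpy as np

def joint_included_angle(a, b, c):
    """Angle BAC in degrees: the angle at vertex A formed by the segments A-B and A-C.

    a, b, c: (x, y) or (x, y, z) coordinates of joint points A, B and C.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    ab, ac = b - a, c - a
    cos_angle = np.dot(ab, ac) / (np.linalg.norm(ab) * np.linalg.norm(ac) + 1e-12)
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Example: a right angle at vertex A.
# joint_included_angle((0, 0), (1, 0), (0, 1))  -> 90.0
```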
After the joint position feature and the joint angle feature are obtained, the degree of joint reaction of the target object can be determined based on the joint position feature and the joint angle feature, and the process is described below with reference to fig. 5.
Fig. 5 is a schematic flow chart of determining a joint reaction degree of a target object according to an embodiment of the present application, where, as shown in fig. 5, the flow chart includes:
s51, acquiring a first feature matrix corresponding to the target object according to the joint position features and the joint angle features.
For any one target image, the joint position features corresponding to that target image comprise the positions of all the joint points to be detected on the image, and the joint angle features corresponding to that target image comprise the sizes of the joint included angles of the regions to be detected on the image. Because the target image has multiple frames, the positions of the joint points to be detected and the joint included angles of the regions to be detected on the multi-frame target images are put into a matrix, giving the first feature matrix corresponding to the target object. The first feature matrix $A_0$ can be expressed in the following form:

$A_0 = (a_{ij})_{m \times n}$

wherein m is the number of frames, i.e. the number of target images, and n is the number of features, i.e. the sum of the number of joint position features and the number of joint angle features; $a_{ij}$ denotes the j-th feature on the i-th target image, with $i \in [1, m]$ and $j \in [1, n]$.
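A sketch of assembling the first feature matrix. The exact per-frame feature layout (how the joint positions are placed next to the joint included angles) is not fixed by the patent, so the layout below is an assumption.

```python
import numpy as np

def build_first_feature_matrix(position_features, angle_features):
    """Stack per-frame features into A0 with shape (m, n), n = n1 + n2.

    position_features: (m, n1) array of joint position features per target image.
    angle_features:    (m, n2) array of joint included angles per region per image.
    """
    a0 = np.hstack([np.asarray(position_features, dtype=float),
                    np.asarray(angle_features, dtype=float)])
    return a0  # a0[i, j] is the j-th feature a_ij on the i-th target image
```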
S52, according to the first feature matrix, obtaining the motion smoothness, the joint reaction score and the coupling joint information of the target object.
The motion smoothness, the joint reaction score and the coupling association information are indicators that reflect the joint reaction degree of the target object in three different dimensions; the manner of determining these three indicators is described below with reference to fig. 6.
Fig. 6 is a schematic flow chart of determining associated reaction degree indexes according to an embodiment of the present application, as shown in fig. 6, including:
s61, according to the first feature matrix, the smoothness of each joint point to be detected, the smoothness of each region to be detected, the feature distance of each joint point to be detected and the feature distance of each region to be detected are obtained.
The first feature matrix $A_0$ is described in the above embodiment. Define $\mathrm{Diff}(A_0) = (b_{ij})_{(m-1)\times n}$, where $b_{ij} = a_{(i+1)j} - a_{ij}$.

Let $A_1 = \mathrm{Diff}(A_0)$ and $A_2 = \mathrm{Diff}(A_1) = (m_{ij})_{(m-2)\times n}$, where:

$m_{ij} = (a_{(i+2)j} - a_{(i+1)j}) - (a_{(i+1)j} - a_{ij})$ (2)

Based on the above formula (2), the smoothness $S_j$ of each joint point to be detected and of each region to be detected can be obtained (formula (3)).

Here the number of joint points to be detected is $n_1$ and the number of regions to be detected is $n_2$, with $n_1 + n_2 = n$. When $j \in [1, n_1]$, $S_j$ is the smoothness of the j-th joint point to be detected; when $j \in [n_1+1, n]$, $S_j$ is the smoothness of the $(j-n_1)$-th region to be detected.
The feature distance of each joint point to be detected refers to the distance of that joint point between target images of different frames, and the feature distance of each region to be detected refers to the distance of that region between target images of different frames. Both can be characterized by a feature distance function $D(p, q, j)$, see formula (4):

wherein $(m\_x_{p,j}, m\_y_{p,j}, m\_z_{p,j})$ are the coordinates of the j-th feature in the p-th frame target image, $(m\_x_{q,j}, m\_y_{q,j}, m\_z_{q,j})$ are the coordinates of the j-th feature in the q-th frame target image, and $D(p, q, j)$ is the feature distance of the j-th feature between the two frames. As before, the number of joint points to be detected is $n_1$, the number of regions to be detected is $n_2$, and $n_1 + n_2 = n$. When $j \in [1, n_1]$, $D(p, q, j)$ is the feature distance of the j-th joint point to be detected; when $j \in [n_1+1, n]$, $D(p, q, j)$ is the feature distance of the $(j-n_1)$-th region to be detected.
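The difference operator and the feature distance can be sketched as below. Since formula (4) itself is not reproduced above, the Euclidean form of $D(p, q, j)$ is an assumption made here for illustration.

```python
import numpy as np

def diff(a):
    """Diff(A): first-order difference over frames, b_ij = a_(i+1)j - a_ij."""
    return np.diff(np.asarray(a, dtype=float), axis=0)

def second_difference(a0):
    """A2 = Diff(Diff(A0)); its entries m_ij correspond to formula (2)."""
    return diff(diff(a0))

def feature_distance(coords, p, q, j):
    """Feature distance D(p, q, j) of feature j between frames p and q.

    coords: (m, n, 3) array of (x, y, z) coordinates of each feature per frame.
    Assumed to be the Euclidean distance; the patent's exact formula (4) may differ.
    """
    return float(np.linalg.norm(coords[p, j] - coords[q, j]))
```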
S62, determining motion smoothness according to the smoothness of each joint point to be detected and the smoothness of each region to be detected.
The motion smoothness is calculated by formula (5), which aggregates the smoothness values $S_j$ obtained from formula (3):

wherein $S_j$ is given by formula (3) above; based on formulas (3) and (5), the motion smoothness of the target object can be obtained. By comparing the motion smoothness of the target object with a normal value and examining the difference, the stage of the motion pattern of the target object can be further evaluated.
S63, determining the joint reaction score according to the characteristic distance of each joint point to be detected and the characteristic distance of each region to be detected.
Specifically, the motion energy value of each joint point to be detected is first determined according to the feature distance of each joint point to be detected and the feature distance of each region to be detected. The motion energy value of each joint point to be detected is calculated by formula (6):

wherein $\mathrm{Energy}(j)$ is the motion energy value of the j-th joint point to be detected, and the distance $D(p, 0, j)$ can be obtained from formula (4).
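A sketch of the motion energy of formula (6). The exact aggregation over frames is not shown above; summing the distance of each frame to the first frame (p = 0) is assumed here.

```python
import numpy as np

def motion_energy(coords, j):
    """Energy(j): motion energy value of the j-th joint point to be detected.

    coords: (m, n, 3) array of feature coordinates per frame.
    Assumed to sum D(p, 0, j) over all frames p; formula (6) may aggregate differently.
    """
    reference = coords[0, j]
    return float(sum(np.linalg.norm(coords[p, j] - reference)
                     for p in range(coords.shape[0])))
```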
Then, the energy value of the desired motion joint when the target object performs the target action, and the energy value of the joint in conjunction with the motion, are determined according to the motion energy value.
The energy value of the desired motion joints is calculated by formula (7):

wherein $\mathrm{Motion\_energy}$ is the energy value of the desired motion joints, $u$ is the feature index of a desired motion joint (its values differ with the target action), and $N\_motion$ is the number of desired motion joints required by the target action.

The energy value of the associated motion joints is calculated by formula (8):

wherein $\mathrm{Cojoin\_energy}$ is the energy value of the associated motion joints, $v$ is the feature index of an associated motion joint (its values also differ with the target action), and $N\_Cojoin$ is the number of associated motion joint points. Fig. 7 is a schematic diagram of desired motion joints and associated motion joints according to an embodiment of the present application; as shown in Fig. 7, for a certain target action the desired motion joints (joint points A, B, E, F, G and H in Fig. 7) and the associated motion joints (joint points C and D in Fig. 7) are illustrated respectively.

Finally, the joint reaction score is determined according to the energy value of the desired motion joints and the energy value of the associated motion joints, as given by formula (9):

where $\mathrm{Cojoin\_score}$ is the joint reaction score, $\mathrm{Motion\_energy}$ is the energy value of the desired motion joints, and $\mathrm{Cojoin\_energy}$ is the energy value of the associated motion joints.
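A sketch of formulas (7) to (9). Since the formula images are not reproduced above, the per-joint averaging and the ratio used for Cojoin_score below are assumptions; they only illustrate the relationship between the quantities named in the text.

```python
def desired_motion_energy(energy, desired_indices):
    """Motion_energy (cf. formula (7)): assumed mean energy of the desired motion joints."""
    return sum(energy[u] for u in desired_indices) / len(desired_indices)

def associated_motion_energy(energy, associated_indices):
    """Cojoin_energy (cf. formula (8)): assumed mean energy of the associated motion joints."""
    return sum(energy[v] for v in associated_indices) / len(associated_indices)

def joint_reaction_score(motion_energy_value, cojoin_energy_value):
    """Cojoin_score (cf. formula (9)): assumed ratio of associated to desired motion energy,
    so a larger score indicates a stronger associated reaction."""
    return cojoin_energy_value / (motion_energy_value + 1e-12)
```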
S64, performing coupling processing on the first feature matrix to obtain a coupled second feature matrix.
Based on the division of the motion areas of the target object, the first feature matrix is coupled accordingly to obtain the second feature matrix. Taking Fig. 4 as an example, the motion function areas of the target object may be divided into ten cooperative action areas: left shoulder, right shoulder, left elbow, right elbow, left hand, right hand, left hip-knee, right hip-knee, left foot and right foot, each area being a region to be detected. Then, according to the joint position features and joint angle features of the joint points to be detected included in each region to be detected, the first feature matrix is coupled to obtain the second feature matrix $X_{k\times k}$, where k is the number of regions to be detected; in the case where the regions to be detected are the ten areas listed above, k = 10.
S65, determining the coupling association information according to the second feature matrix.
Firstly, the time-series feature vector of each joint point to be detected is acquired according to the second feature matrix. Let the second feature matrix be $X_{k\times k}$; then the p-th column of $X_{k\times k}$ is the time-series feature vector of the p-th joint point to be detected.
Secondly, the weight matrix of each joint point to be detected is acquired according to a plurality of sample videos.
The embodiment of the application provides a self-attention coupling degree algorithm, which uses a self-attention deep learning algorithm to extract feature weights, analyses the influence of each factor on the control capability of the joint end, and thus gives a better understanding of changes in the motion pattern of the target object.
Fig. 8 is a schematic diagram of a self-attention coupling algorithm according to an embodiment of the present application, as shown in fig. 8, first input sample data, where the sample data is a second feature matrix of an object under normal associated reaction degree. After the sample data is input into the self-attention coupling model, the result is finally output through self-attention processing, softmax function processing, feature extraction and the like. Comparing the output result with the labeling result of the sample data, adjusting parameters of the self-attention coupling model according to the difference between the output result and the labeling result, finally obtaining a trained self-attention coupling model, and obtaining a weight matrix of each joint point to be detected based on the trained self-attention coupling model.
Finally, the coupling association information is determined according to the time-series feature vector and the weight matrix of each joint point to be detected. The coupling association information is calculated by formula (10):

$C_{k\times k} = (c_{pq})_{k\times k}$ (10)

wherein $C_{k\times k}$ is the coupling association information, also called the coupling feature matrix, and

$c_{pq} = \tanh(X_p W_p + X_q W_q)\cdot W_c$ (11)

where $X_p$ is the time-series feature vector of the p-th joint point to be detected, $X_q$ is the time-series feature vector of the q-th joint point to be detected, and $W_p$, $W_q$, $W_c$ are the weight matrices obtained by the method illustrated in the embodiment of Fig. 8, i.e. the parameters of the trained self-attention coupling model.
After the coupling association information is obtained, a relationship weight graph between the p-th and q-th joint points to be detected can be drawn based on it. The association relationship between any p-th joint point to be detected and q-th joint point to be detected can be characterized by formula (12):

$c'_{pq} = \mathrm{softmax}(c_{pq})$ (12)

wherein $c_{pq}$ is given by formula (11) above, softmax denotes the softmax function, and $c'_{pq}$ represents the association between the p-th joint point to be detected and the q-th joint point to be detected.
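A sketch of formulas (10) to (12) with the learned weights taken as given. The shapes of $W_p$, $W_q$ and $W_c$ and the axis over which the softmax is taken are not stated above and are assumptions here.

```python
import numpy as np

def coupling_feature_matrix(x, w_p, w_q, w_c):
    """C = (c_pq), with c_pq = tanh(X_p W_p + X_q W_q) . W_c  (formulas (10)-(11)).

    x:   (k, k) second feature matrix; column p is the time-series feature vector X_p.
    w_p, w_q: (k, k) learned weight matrices; w_c: (k,) learned projection vector.
    """
    k = x.shape[1]
    c = np.zeros((k, k))
    for p in range(k):
        for q in range(k):
            c[p, q] = np.tanh(x[:, p] @ w_p + x[:, q] @ w_q) @ w_c
    return c

def normalized_coupling(c):
    """c'_pq = softmax(c_pq) (formula (12)); the softmax is assumed to be taken row-wise."""
    shifted = c - c.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)
```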
Fig. 9 is a relationship weight graph between the joint points to be detected according to an embodiment of the present application. As shown in Fig. 9, after $c'_{pq}$ is calculated based on formula (12), the relationships between the joint points to be detected can be drawn from the obtained values. The larger the value of $c'_{pq}$, the thicker the connecting line between the p-th and q-th joint points to be detected, meaning that the association between them is stronger. For a specific target action, if the association between the p-th and q-th joint points to be detected under that action is comparable to the relatively strong association seen under normal conditions, the associated reaction degree of the target object is relatively good; if it is relatively weak compared with normal conditions, the associated reaction degree of the target object is relatively poor. By drawing the association relationship weight graph, the associated reaction degree of the target object can be seen and understood more intuitively.
S53, determining the joint reaction degree according to the motion smoothness, the joint reaction score and the coupling joint information.
The motion smoothness, the joint reaction score and the coupling joint information are indexes of 3 different dimensions for reflecting the joint reaction degree of the target object, and after the motion smoothness, the joint reaction score and the coupling joint information of the target object are obtained, the joint reaction degree of the target object can be determined according to the motion smoothness, the joint reaction score and the coupling joint information.
For example, a certain weight value may be set for each of the motion smoothness, the joint response score, and the coupling joint information, and the joint response degree of the target object may be comprehensively determined according to the weight value and the motion smoothness, the joint response score, and the coupling joint information of the target object.
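As a final illustration of the weighted combination described above, a minimal sketch. The weight values and the linear form are assumptions, since the patent leaves the specific weighting open, and a scalar summary of the coupling association information (e.g. an average of the normalized couplings) is assumed as the third input.

```python
def associated_reaction_degree(motion_smoothness_value, joint_reaction_score_value,
                               coupling_score, weights=(0.4, 0.4, 0.2)):
    """Combine the three indicators with illustrative weights (assumed, not from the patent)."""
    w1, w2, w3 = weights
    return (w1 * motion_smoothness_value
            + w2 * joint_reaction_score_value
            + w3 * coupling_score)
```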
In summary, according to the scheme of the embodiment of the application, the joint reaction degree of the target object is determined without manually observing the motion process of the target object, the target video is shot through the client, and then a series of processing is carried out on the target video by the terminal equipment, so that the joint reaction degree of the target object is finally determined, the efficiency is higher, and the dependence on observers is greatly reduced.
The following describes the apparatus for determining the associative reaction degree of the target object according to the present application, and the apparatus for determining the associative reaction degree of the target object described below and the method for determining the associative reaction degree of the target object described above may be referred to correspondingly to each other.
Fig. 10 is a schematic structural diagram of a device for determining a degree of a target object associated with a reaction, which is provided in an embodiment of the present application, and is applied to a terminal device, as shown in fig. 10, where the device includes:
the receiving module 101 is configured to receive a target video sent by a client, wherein the target video is obtained by the client shooting the target object while the target object performs a target action;
The acquiring module 102 is configured to acquire a plurality of corresponding target images according to the target video, where the plurality of target images are arranged according to a time sequence in the target video;
the processing module 103 is configured to perform feature analysis processing on the multiple target images to obtain motion features of the target object;
a determining module 104, configured to determine a degree of associative reaction of the target object according to the motion feature.
In one possible implementation, the processing module 103 is specifically configured to:
determining a plurality of joint points to be detected and a plurality of areas to be detected of the target object;
performing joint point identification processing on the multiple target images to obtain joint position features of the target object, wherein the joint position features comprise positions of the multiple joint points to be detected on the multiple target images;
and acquiring joint angle characteristics of the target object according to the joint position characteristics and the region to be detected, wherein the motion characteristics comprise the joint position characteristics and the joint angle characteristics.
In one possible implementation, the processing module 103 is specifically configured to:
Determining the to-be-detected joint points included in each to-be-detected area;
determining a joint included angle corresponding to each to-be-detected area according to-be-detected joint points included in each to-be-detected area;
and determining the joint angle characteristics according to the positions of the plurality of joint points to be detected on the plurality of target images and the joint included angles corresponding to the areas to be detected.
In one possible implementation, the determining module 104 is specifically configured to:
acquiring a first feature matrix corresponding to the target object according to the joint position features and the joint angle features;
acquiring motion smoothness, joint reaction score and coupling joint information of the target object according to the first feature matrix;
and determining the joint reaction degree according to the motion smoothness, the joint reaction score and the coupling joint information.
In one possible implementation, the determining module 104 is specifically configured to:
according to the first feature matrix, smoothness of each joint point to be detected, smoothness of each region to be detected, feature distance of each joint point to be detected and feature distance of each region to be detected are obtained;
Determining the motion smoothness according to the smoothness of each joint point to be detected and the smoothness of each region to be detected;
determining the joint reaction score according to the characteristic distance of each joint point to be detected and the characteristic distance of each region to be detected;
coupling the first feature matrix to obtain a coupled second feature matrix;
and determining the coupling association information according to the second feature matrix.
In one possible implementation, the determining module 104 is specifically configured to:
determining the motion energy value of each to-be-detected joint according to the characteristic distance of each to-be-detected joint and the characteristic distance of each to-be-detected area;
determining an energy value of a desired motion joint and an energy value of a joint in association with the target object when the target object performs the target action according to the motion energy value;
determining the joint response score based on the energy value of the desired motion joint and the energy value of the joint motion.
In one possible implementation, the determining module 104 is specifically configured to:
acquiring time sequence feature vectors of all the nodes to be detected according to the second feature matrix;
Acquiring a weight matrix of each node to be detected according to a plurality of sample videos;
and determining the coupling association information according to the time sequence feature vector of each node to be detected and the weight matrix.
Fig. 11 illustrates the physical structure of an electronic device. As shown in Fig. 11, the electronic device may include: a processor 1110, a communication interface (Communications Interface) 1120, a memory 1130 and a communication bus 1140, wherein the processor 1110, the communication interface 1120 and the memory 1130 communicate with each other via the communication bus 1140. The processor 1110 may invoke logic instructions in the memory 1130 to perform the method for determining the associated reaction degree of a target object, applied to a terminal device, the method comprising: receiving a target video sent by a client, wherein the target video is obtained by the client capturing a process in which the target object performs a target action; acquiring a plurality of corresponding target images according to the target video, wherein the plurality of target images are arranged in the chronological order of the target video; performing feature analysis processing on the plurality of target images to obtain motion features of the target object; and determining the associated reaction degree of the target object according to the motion features.
Further, the logic instructions in the memory 1130 described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk. The electronic device in the embodiments of the present application may be a terminal device, a client, a mobile phone, a portable mobile phone, or the like.
In another aspect, the present application also provides a computer program product, the computer program product comprising a computer program, which may be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer is capable of executing the method for determining the associated reaction degree of a target object provided by the foregoing embodiments, applied to a terminal device, the method comprising: receiving a target video sent by a client, wherein the target video is obtained by the client capturing a process in which the target object performs a target action; acquiring a plurality of corresponding target images according to the target video, wherein the plurality of target images are arranged in the chronological order of the target video; performing feature analysis processing on the plurality of target images to obtain motion features of the target object; and determining the associated reaction degree of the target object according to the motion features.
In still another aspect, the present application further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for determining the associated reaction degree of a target object provided by the foregoing embodiments, applied to a terminal device, the method comprising: receiving a target video sent by a client, wherein the target video is obtained by the client capturing a process in which the target object performs a target action; acquiring a plurality of corresponding target images according to the target video, wherein the plurality of target images are arranged in the chronological order of the target video; performing feature analysis processing on the plurality of target images to obtain motion features of the target object; and determining the associated reaction degree of the target object according to the motion features.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by means of hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disk and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application and are not limiting. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (7)

1. A method for determining an associated reaction degree of a target object, applied to a terminal device, the method comprising:
receiving a target video sent by a client, wherein the target video is obtained by the client capturing a process in which a target object performs a target action;
acquiring a plurality of corresponding target images according to the target video, wherein the plurality of target images are arranged in the chronological order of the target video;
performing feature analysis processing on the plurality of target images to obtain motion features of the target object;
and determining the associated reaction degree of the target object according to the motion features;
wherein the performing feature analysis processing on the plurality of target images to obtain the motion features of the target object comprises:
determining a plurality of joint points to be detected and a plurality of regions to be detected of the target object;
performing joint point identification on the plurality of target images to obtain joint position features of the target object, wherein the joint position features comprise positions of the plurality of joint points to be detected on the plurality of target images;
and acquiring joint angle features of the target object according to the joint position features and the regions to be detected, wherein the motion features comprise the joint position features and the joint angle features;
wherein the determining the associated reaction degree of the target object according to the motion features comprises:
acquiring a first feature matrix corresponding to the target object according to the joint position features and the joint angle features;
acquiring motion smoothness, an associated reaction score and coupling association information of the target object according to the first feature matrix;
and determining the associated reaction degree according to the motion smoothness, the associated reaction score and the coupling association information;
wherein the acquiring, according to the first feature matrix, the motion smoothness, the associated reaction score and the coupling association information of the target object comprises:
acquiring, according to the first feature matrix, a smoothness of each joint point to be detected, a smoothness of each region to be detected, a feature distance of each joint point to be detected and a feature distance of each region to be detected;
determining the motion smoothness according to the smoothness of each joint point to be detected and the smoothness of each region to be detected;
determining the associated reaction score according to the feature distance of each joint point to be detected and the feature distance of each region to be detected;
coupling the first feature matrix to obtain a coupled second feature matrix;
and determining the coupling association information according to the second feature matrix.
2. The method according to claim 1, wherein the acquiring the joint angle features of the target object according to the joint position features and the regions to be detected comprises:
determining the joint points to be detected included in each region to be detected;
determining, according to the joint points to be detected included in each region to be detected, a joint included angle corresponding to that region;
and determining the joint angle features according to the positions of the plurality of joint points to be detected on the plurality of target images and the joint included angles corresponding to the regions to be detected.
3. The method according to claim 1, wherein the determining the associated reaction score according to the feature distance of each joint point to be detected and the feature distance of each region to be detected comprises:
determining a motion energy value of each joint point to be detected according to the feature distance of each joint point to be detected and the feature distance of each region to be detected;
determining, according to the motion energy values, an energy value of the desired motion joints and an energy value of the associated motion joints when the target object performs the target action;
and determining the associated reaction score according to the energy value of the desired motion joints and the energy value of the associated motion joints.
4. The method according to claim 1, wherein the determining the coupling association information according to the second feature matrix comprises:
acquiring a time-series feature vector of each joint point to be detected according to the second feature matrix;
acquiring a weight matrix of each joint point to be detected according to a plurality of sample videos;
and determining the coupling association information according to the time-series feature vectors of the joint points to be detected and the weight matrix.
5. A device for determining an associated reaction degree of a target object, applied to a terminal device, the device comprising:
a receiving module, configured to receive a target video sent by a client, wherein the target video is obtained by the client capturing a process in which a target object performs a target action;
an acquisition module, configured to acquire a plurality of corresponding target images according to the target video, the plurality of target images being arranged in the chronological order of the target video;
a processing module, configured to perform feature analysis processing on the plurality of target images to obtain motion features of the target object;
and a determining module, configured to determine the associated reaction degree of the target object according to the motion features;
wherein the processing module is specifically configured to:
determine a plurality of joint points to be detected and a plurality of regions to be detected of the target object;
perform joint point identification on the plurality of target images to obtain joint position features of the target object, wherein the joint position features comprise positions of the plurality of joint points to be detected on the plurality of target images;
and acquire joint angle features of the target object according to the joint position features and the regions to be detected, wherein the motion features comprise the joint position features and the joint angle features;
wherein the determining module is specifically configured to:
acquire a first feature matrix corresponding to the target object according to the joint position features and the joint angle features;
acquire motion smoothness, an associated reaction score and coupling association information of the target object according to the first feature matrix;
and determine the associated reaction degree according to the motion smoothness, the associated reaction score and the coupling association information;
the determining module being further specifically configured to:
acquire, according to the first feature matrix, a smoothness of each joint point to be detected, a smoothness of each region to be detected, a feature distance of each joint point to be detected and a feature distance of each region to be detected;
determine the motion smoothness according to the smoothness of each joint point to be detected and the smoothness of each region to be detected;
determine the associated reaction score according to the feature distance of each joint point to be detected and the feature distance of each region to be detected;
couple the first feature matrix to obtain a coupled second feature matrix;
and determine the coupling association information according to the second feature matrix.
6. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for determining the associated reaction degree of a target object according to any one of claims 1 to 4.
7. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for determining the associated reaction degree of a target object according to any one of claims 1 to 4.
CN202310050631.3A 2023-02-01 2023-02-01 Method, device and equipment for determining associated reaction degree of target object Active CN116309699B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310050631.3A CN116309699B (en) 2023-02-01 2023-02-01 Method, device and equipment for determining associated reaction degree of target object


Publications (2)

Publication Number Publication Date
CN116309699A CN116309699A (en) 2023-06-23
CN116309699B (en) 2023-11-17

Family

ID=86836796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310050631.3A Active CN116309699B (en) 2023-02-01 2023-02-01 Method, device and equipment for determining associated reaction degree of target object

Country Status (1)

Country Link
CN (1) CN116309699B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657581A (en) * 2018-12-07 2019-04-19 南京高美吉交通科技有限公司 Urban track traffic gate passing control method based on binocular camera behavioral value
CN110738192A (en) * 2019-10-29 2020-01-31 腾讯科技(深圳)有限公司 Human motion function auxiliary evaluation method, device, equipment, system and medium
CN114782497A (en) * 2022-06-20 2022-07-22 中国科学院自动化研究所 Motion function analysis method and electronic device
CN115331776A (en) * 2022-08-09 2022-11-11 康键信息技术(深圳)有限公司 Static motion data statistical method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013192308A1 (en) * 2012-06-22 2013-12-27 Brookhaven Science Associates, Llc Atmospheric radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Synchronous active interaction control and implementation of rehabilitation robots; Peng Liang et al.; Acta Automatica Sinica; Vol. 41, No. 11; pp. 1837-1846 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant