CN114648810A - Interactive gait recognition method and device and electronic equipment - Google Patents

Interactive gait recognition method and device and electronic equipment

Info

Publication number
CN114648810A
Authority
CN
China
Prior art keywords
gait
data
motion
image
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210241812.XA
Other languages
Chinese (zh)
Other versions
CN114648810B (en)
Inventor
佟良远
朱文成
刘佳玉
冯振
刘岸风
张昊
彭斌
宋浩宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Ruiyi Information Technology Co ltd
Original Assignee
Beijing Zhongke Ruiyi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Ruiyi Information Technology Co ltd filed Critical Beijing Zhongke Ruiyi Information Technology Co ltd
Priority to CN202210241812.XA priority Critical patent/CN114648810B/en
Publication of CN114648810A publication Critical patent/CN114648810A/en
Application granted granted Critical
Publication of CN114648810B publication Critical patent/CN114648810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an interactive gait recognition method, an interactive gait recognition device, and an electronic device. The interactive gait recognition method comprises the following steps: acquiring gait data acquired by a depth sensor; processing the gait data through an initial gait recognition model to obtain gait motion marking data; matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image; and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data. The gait motion can thus be displayed interactively and the gait motion marking data conveniently corrected without purely manual marking, which mitigates the inaccurate recognition of the initial gait recognition model caused by interference factors such as the environment and improves the accuracy of gait motion recognition.

Description

Interactive gait recognition method and device and electronic equipment
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an interactive gait recognition method, an interactive gait recognition device, and an electronic device.
Background
Cerebrovascular disease, Parkinson's disease, Alzheimer's disease, and other nervous system diseases are degenerative and cannot be cured, so early clinical diagnosis is particularly important; gait analysis can usually provide clinical guidance information for such early diagnosis.
At present, gait is generally recognized by means of machine learning. During the capture of human gait motion, however, interference in the scene affects the accuracy of human gait data acquisition, which leads to inaccurate machine learning parameter calculation, so that the finally obtained gait recognition result is inaccurate.
Disclosure of Invention
In view of the above problems, the present invention provides an interactive gait recognition method, an interactive gait recognition device and an electronic device, which improve the accuracy of gait recognition.
In order to achieve the purpose, the invention provides the following technical scheme:
an interactive gait recognition method, the method comprising:
acquiring gait data acquired through a depth sensor;
processing the gait data through an initial gait recognition model to obtain gait motion marking data;
matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image;
and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
Optionally, the method further comprises:
processing the gait data acquired by the depth sensor to obtain a gait image and skeleton data;
and respectively storing the gait image and the bone data.
Optionally, the method further comprises:
acquiring a bone data training sample marked with a gait motion label;
and carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the method further comprises:
and adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, wherein the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image includes:
carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence, and displaying the data on a time axis of an interactive interface;
and adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data includes:
responding to the operation of a target object on gait motion marking data in the visual gait image, and recording operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
An interactive gait recognition device, the device comprising:
the acquiring unit is used for acquiring gait data acquired by the depth sensor;
the processing unit is used for processing the gait data through an initial gait recognition model to obtain gait motion marking data;
the generating unit is used for matching the gait image corresponding to the gait data with the gait action marking data to generate a visual gait image;
and the correcting unit is used for correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
Optionally, the apparatus further comprises:
the data processing unit is used for processing the gait data acquired by the depth sensor to acquire a gait image and bone data;
and the data storage unit is used for respectively storing the gait image and the bone data.
Optionally, the apparatus further comprises:
the system comprises a sample acquisition unit, a data acquisition unit and a data acquisition unit, wherein the sample acquisition unit is used for acquiring a bone data training sample marked with a gait motion label;
and the training unit is used for carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the apparatus further comprises:
and the model adjusting unit is used for adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the generating unit includes:
the processing subunit is used for carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence and displaying the data on a time axis of an interactive interface;
and the information adding subunit is used for adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correction unit comprises:
the operation recording subunit is used for responding to the operation of the target object on the gait motion marking data in the visual gait image and recording the operation content corresponding to the operation;
and the correcting subunit is used for correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
A storage medium storing executable instructions which, when executed by a processor, implement an interactive gait recognition method as claimed in any one of the preceding claims.
An electronic device, comprising:
a memory for storing a program;
a processor configured to execute the program, the program being specifically configured to implement the interactive gait recognition method according to any of the above.
Compared with the prior art, the invention provides an interactive gait recognition method, an interactive gait recognition device, and an electronic device. The interactive gait recognition method comprises the following steps: acquiring gait data acquired through a depth sensor; processing the gait data through an initial gait recognition model to obtain gait motion marking data; matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image; and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data. The gait motion can thus be displayed interactively and the gait motion marking data conveniently corrected without purely manual marking, which mitigates the inaccurate recognition of the initial gait recognition model caused by interference factors such as the environment and improves the accuracy of gait motion recognition.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic flow chart of an interactive gait recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a time axis for labeling gait movements according to an embodiment of the invention;
fig. 3 is a schematic diagram of an operation option for interactive operation by visualizing a gait image according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an interactive gait recognition apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first" and "second" and the like in the description and claims of the present invention and in the above drawings are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion: for example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include steps or elements not listed.
Referring to fig. 1, a flow chart of an interactive gait recognition method according to an embodiment of the present invention is shown, and the method may include the following steps:
and S101, acquiring gait data acquired through a depth sensor.
The depth sensor can be used to measure the distance between an object in the environment and the sensor, and its output is mainly represented in two forms: a depth map and point cloud data. In the application scenario of the embodiment of the present invention, in which the gait motion of a target object is identified, the gait data acquired by the depth sensor includes a gait image corresponding to the depth map and bone data of the target object corresponding to the point cloud data. In order to obtain gait data, depth sensors can be worn at the skeletal joints of the target object to be measured, such as the waist joint point, the left and right thigh joint points, the left and right knee joint points, the left and right toe joint points, and the left and right ankle joint points. The selection of specific joint points depends on the actual application scenario, which the present invention does not limit.
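For illustration only, the per-frame gait data described above (a depth image plus skeleton joint positions derived from the point cloud) might be represented as follows. This is a minimal Python sketch; the structure and all names are assumptions made for clarity, not part of the disclosed method:

```python
from dataclasses import dataclass

# Joint points named in the description: waist, left/right thigh,
# knee, toe, and ankle joints (the selection depends on the scenario).
JOINTS = [
    "waist",
    "left_thigh", "right_thigh",
    "left_knee", "right_knee",
    "left_toe", "right_toe",
    "left_ankle", "right_ankle",
]

@dataclass
class GaitFrame:
    """One frame of gait data: a depth image plus skeleton joint positions."""
    frame_index: int
    depth_image: list   # 2D depth map (placeholder: nested lists)
    joints: dict        # joint name -> (x, y, z) from the point cloud

def make_frame(frame_index, depth_image, joint_positions):
    """Validate that every expected joint is present, then build the frame."""
    missing = [j for j in JOINTS if j not in joint_positions]
    if missing:
        raise ValueError(f"missing joints: {missing}")
    return GaitFrame(frame_index, depth_image, dict(joint_positions))
```

A capture loop would append one such frame per sensor reading, keeping images and bone data in one-to-one correspondence as the description requires later.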
And S102, processing the gait data through an initial gait recognition model to obtain gait motion marking data.
The initial gait recognition model is a model obtained by training through a neural network based on training samples with gait motion. Namely, a skeleton data training sample marked with a gait motion label is obtained, and neural network model training is carried out on the training sample to obtain an initial gait recognition model.
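As a rough illustration of this training step, the sketch below fits a tiny single-layer perceptron on labelled skeleton feature vectors. It is a deliberately simplified stand-in for the neural-network training the patent describes; the feature encoding and the model choice are assumptions:

```python
def train_initial_model(samples, labels, epochs=50, lr=0.1):
    """Train a tiny perceptron on labelled skeleton feature vectors.

    `samples` is a list of equal-length feature vectors (e.g. flattened
    joint coordinates); `labels` is a list of 0/1 gait-motion labels.
    This stands in for the neural-network training described above.
    """
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:  # classic perceptron update on a mistake
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    def predict(x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    return predict
```

In practice the initial model would be a deeper network trained on sequences of skeleton frames, but the loop shape (iterate over labelled samples, update parameters from the error) is the same.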
However, the training samples used for the initial gait recognition model are labeled based on normal gait motions and do not take the influence of environmental factors into account, so the final recognition result may deviate. The environmental factors include, but are not limited to, the location of the target object, lighting, the clothing of the target object, and the like.
And S103, matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image.
And S104, correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
The visualized gait image is a gait image matched with the gait motion marking data, in which the gait motion marking data can be modified. The gait motion marking data can therefore be operated on through the visualized gait image and corrected accordingly, so as to obtain the target gait motion data.
For example, if a stroke patient lifts the left foot with a circling (circumduction) motion, the sensor may misrecognize the movement as the person turning around. After the video data and the skeleton point data are visualized, the mark of each action node computed from the skeleton point data (foot lift, foot fall, turn, and the like) corresponds to a specific time frame in the video, and the user can correct erroneous places by dragging, adding, or deleting marks, such as adding a "left foot up" mark at the time frame where the left foot rises, or deleting a wrong mark. In this way, an exact match between the skeleton point data and the gait motions is achieved.
The gait motion can thus be displayed and conveniently modified in an interactive mode, overcoming the defects of the traditional purely manual marking method and solving the problem of inaccurate machine learning parameter calculation caused by environmental interference with the motion capture sensor acquisition equipment.
After the gait motion marking data is corrected through the interactive visual image to obtain target gait motion data, the initial gait recognition model can be adjusted based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing gait motions in the gait data. Therefore, the influence of environmental factors can be avoided, and the accuracy of model processing is improved.
In an implementation manner of the embodiment of the present invention, the matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image includes:
carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence, and displaying the data on a time axis of an interactive interface;
and adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
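The two steps just described (arranging marks in time order on an interactive time axis, then attaching the image of the current action at each time frame) could be sketched as follows. The data shapes are assumptions for illustration only:

```python
def build_visual_timeline(frame_images, marks):
    """Pair each time frame with its image and any gait-motion marks there.

    `frame_images` maps frame index -> image (placeholder); `marks` is a
    list of (frame_index, label) pairs from the initial recognition model.
    Returns the timeline sorted by frame index, ready for display.
    """
    by_frame = {}
    for idx, label in sorted(marks):
        by_frame.setdefault(idx, []).append(label)
    return [
        {"frame": idx, "image": img, "marks": by_frame.get(idx, [])}
        for idx, img in sorted(frame_images.items())
    ]
```

Each timeline entry corresponds to one position on the interactive interface's time axis: the frame's image plus whatever marks the model placed there.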
Further, the correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data includes:
responding to the operation of a target object on the gait motion marking data in the visual gait image, and recording the operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Wherein the operation content comprises one or more of modification, deletion or addition of gait marking data.
Specifically, different motion videos acquired by the depth sensor (for example, at 30 frames per second) and the corresponding skeleton point information of each frame are stored, and a machine learning pre-trained model is used to segment gait cycles, yielding gait motion marks (turn start, left foot up, left foot down, right foot up, right foot down, turn end). The obtained gait motion marks are visualized graphically, arranged in time order, and displayed on the time axis of the interactive interface; the image information of the current motion can be seen at the position of each time frame, and the user can interactively modify, delete, and add gait motion mark data until each mark reaches the key frame that matches the tester's actual gait motion.
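The pre-trained model's gait-cycle segmentation is not specified in detail here. As a hedged illustration of the kind of rule such segmentation might implement, the sketch below derives foot-up/foot-down events from the height of one ankle joint per frame; it is a threshold heuristic used as a stand-in, not the patented algorithm:

```python
def detect_foot_events(ankle_heights, threshold):
    """Detect foot-up / foot-down events from one ankle's height per frame.

    An 'up' event is emitted where the height rises through the threshold,
    a 'down' event where it falls back through it. Returns a list of
    (frame_index, label) pairs, the same shape as the gait motion marks.
    """
    events = []
    for i in range(1, len(ankle_heights)):
        prev, cur = ankle_heights[i - 1], ankle_heights[i]
        if prev < threshold <= cur:
            events.append((i, "foot_up"))
        elif prev >= threshold > cur:
            events.append((i, "foot_down"))
    return events
```

The marks produced this way are exactly what the interactive interface then lets the user drag, delete, or supplement.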
Here a mark refers to an action mark within a gait cycle (the key actions in the cycle: turn start, left foot up, left foot down, right foot up, right foot down, turn end). Each mark corresponds to a frame in the video: for example, recording for 1 minute (60 seconds) at 30 frames per second yields 1800 frames, each corresponding to one picture. The gait marks generated by the algorithm are adjusted to specific positions, with the frame as the minimum unit. In this way, the key actions in a gait cycle are visualized on the interface. Most gait actions are marked correctly; the minority of gait action marks affected by the environment are displayed visually on the interface, so the user can directly see the erroneous marks and only needs to modify that small part, rather than calibrating all marks from beginning to end, which saves time.
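The frame arithmetic above (1 minute at 30 frames per second gives 1800 frames, with the frame as the minimum unit of the time axis) can be captured directly:

```python
FPS = 30  # frames per second, as in the example in the description

def frame_to_time(frame_index, fps=FPS):
    """Convert a frame index to (minutes, seconds, frame-within-second)."""
    seconds, frame_in_sec = divmod(frame_index, fps)
    minutes, seconds = divmod(seconds, 60)
    return minutes, seconds, frame_in_sec

def time_to_frame(minutes, seconds, frame_in_sec, fps=FPS):
    """Inverse mapping: a position on the time axis back to a frame index."""
    return (minutes * 60 + seconds) * fps + frame_in_sec
```

So frame 1799 is the last frame of a 1-minute recording (second 59, frame 29), and the recording as a whole spans frames 0 through 1799.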
The gait data acquired by the motion capture sensor can be displayed intuitively and graphically in advance so that manual repair can be carried out; the repaired data is then processed by the gait algorithm to obtain a correct gait analysis result. Compared with purely manual labeling this improves efficiency; compared with machine learning alone it is more interpretable; and because the manually calibrated data is more accurate than the raw acquired data, the calculated parameters are more accurate. The manually calibrated data can also be used to train the gait algorithm model, continuously improving the accuracy of the gait algorithm.
For example, during acquisition of the gait data of the target object by the depth sensor, an image of each frame of the gait data is saved (for example, at 30 frames per second, 30 pictures per second). The bone data collected in each frame is recorded at the same time, so that each frame's picture and each frame's bone data correspond one to one.
The machine learning initial recognition model performs automatic gait cycle recognition on the whole gait process, from which 6 key points can be obtained (turn start, left foot up, left foot down, right foot up, right foot down, turn end). Each key point and the frames recorded between key points are shown on a time axis in the annotation interface (see fig. 2), with the frame as the minimum unit (30 frames/second). Taking a 1-minute test as an example, at 30 image frames per second there are 1800 image frames in total; these 1800 pictures (frames) are converted into time units, so each frame position on the time axis represents a time and also locates the image of the corresponding frame.
Taking the left foot lift as an example, each of the 6 obtained key points is marked with an icon (such as a left-foot-up icon) at a specific frame position, as shown in fig. 2. When a key point is selected, the interface simultaneously displays the image of the current frame, and an inspector checks whether the left-foot-lift state in the image is consistent with the actual acquisition. If consistent, no adjustment is needed; if not, the inspector can select the current label and delete it. If a key point is close to the correct position but still slightly off, its position can be adjusted, or a new key point mark can be created at the correct image frame position. The saved mark files can then be used to train the gait analysis model, improving the accuracy of the gait analysis algorithm.
The operation options for interactive operation based on the visualized gait image can be as shown in fig. 3. "Clear all" removes all gait label information from the interface, after which the user can add labels manually from scratch. "AI label" invokes the pre-trained model to regenerate label information from scratch. "Undo" cancels the previous operation, for example when a mark has been deleted by mistake. "Redo" restores the last undone operation. "Delete" removes a selected label when it needs to be deleted.
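The interface operations listed above (clear all, add, delete, move, undo, redo) map naturally onto an editor with snapshot-based undo. The sketch below is an illustrative model of that behaviour, not the actual implementation; marks are (frame_index, label) pairs as elsewhere:

```python
class MarkEditor:
    """Interactive mark correction: add/delete/move, clear all, undo, redo."""

    def __init__(self, marks):
        self.marks = list(marks)
        self._undo, self._redo = [], []

    def _snapshot(self):
        # Save current state for undo; any new edit invalidates redo history.
        self._undo.append(list(self.marks))
        self._redo.clear()

    def add(self, frame, label):
        self._snapshot()
        self.marks.append((frame, label))

    def delete(self, frame, label):
        self._snapshot()
        self.marks.remove((frame, label))

    def move(self, frame, label, new_frame):
        # Corresponds to dragging a mark to the correct frame position.
        self._snapshot()
        self.marks.remove((frame, label))
        self.marks.append((new_frame, label))

    def clear_all(self):
        self._snapshot()
        self.marks = []

    def undo(self):
        if self._undo:
            self._redo.append(list(self.marks))
            self.marks = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(list(self.marks))
            self.marks = self._redo.pop()
```

Recording each operation this way also yields exactly the "operation content" the correction unit needs in order to apply the user's changes to the gait motion marking data.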
Compared with gait acquisition without a manual marking function, the present application solves the problem of inaccurate gait data acquisition caused by environmental influence: the gait data is fed back to a graphical interface in advance through visualization, so gait motion marks can be manually calibrated and repaired, ensuring the accuracy of the data. Whether the gait motion data is normal can be checked visually, without re-acquiring the gait data from scratch. All gait action marks are displayed on the interface, and the small number of marks affected by the environment can be modified through graphical interactions such as dragging, moving, and deleting, which greatly saves time and improves marking efficiency.
Referring to fig. 4, in an embodiment of the present invention, there is further provided an interactive gait recognition apparatus, including:
an acquiring unit 401, configured to acquire gait data acquired by a depth sensor;
the processing unit 402 is configured to process the gait data through an initial gait recognition model to obtain gait motion marking data;
a generating unit 403, configured to match a gait image corresponding to the gait data with the gait motion marking data, and generate a visual gait image;
a correcting unit 404, configured to correct the gait motion marking data based on the visualized gait image, so as to obtain target gait motion data.
Optionally, the apparatus further comprises:
the data processing unit is used for processing the gait data acquired by the depth sensor to acquire a gait image and bone data;
and the data storage unit is used for respectively storing the gait image and the bone data.
Optionally, the apparatus further comprises:
the system comprises a sample acquisition unit, a data acquisition unit and a data acquisition unit, wherein the sample acquisition unit is used for acquiring a bone data training sample marked with a gait motion label;
and the training unit is used for carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the apparatus further comprises:
and the model adjusting unit is used for adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the generating unit includes:
the processing subunit is used for carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence and displaying the data on a time axis of an interactive interface;
and the information adding subunit is used for adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correction unit comprises:
the operation recording subunit is used for responding to the operation of the target object on the gait motion marking data in the visual gait image and recording the operation content corresponding to the operation;
and the correcting subunit is used for correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
The invention provides an interactive gait recognition device, which comprises: the acquiring unit acquires gait data acquired by the depth sensor; the processing unit processes the gait data through an initial gait recognition model to obtain gait motion marking data; the generating unit matches the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image; and the correction unit corrects the gait motion marking data based on the visual gait image to obtain target gait motion data. The gait motion can thus be displayed interactively and the gait motion marking data conveniently corrected without purely manual marking, which mitigates the inaccurate recognition of the initial gait recognition model caused by interference factors such as the environment and improves the accuracy of gait motion recognition.
Based on the foregoing embodiments, in another embodiment of the present invention, a storage medium is further provided, where the storage medium stores executable instructions, and the instructions, when executed by a processor, implement the interactive gait recognition method according to any one of the above.
Correspondingly, in another embodiment of the present invention, an electronic device is further provided, including:
a memory for storing a program;
a processor configured to execute the program, the program being specifically configured to implement:
acquiring gait data acquired through a depth sensor;
processing the gait data through an initial gait recognition model to obtain gait motion marking data;
matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image;
and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
Optionally, the method further comprises:
processing the gait data acquired by the depth sensor to obtain a gait image and skeleton data;
and respectively storing the gait image and the bone data.
Optionally, the method further comprises:
acquiring a bone data training sample marked with a gait motion label;
and carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the method further comprises:
and adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, wherein the target gait recognition model is used for recognizing gait motions in the gait data.
Optionally, the matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image includes:
carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence, and displaying the data on a time axis of an interactive interface;
and adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data includes:
responding to the operation of a target object on the gait motion marking data in the visual gait image, and recording the operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
It should be noted that, for details of the present embodiment, reference may be made to the corresponding content in the foregoing; such details are not repeated here.
Each embodiment in this specification emphasizes its differences from the other embodiments; for the same or similar parts, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is kept brief, and relevant details can be found in the description of the method.
Those of skill will further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An interactive gait recognition method, characterized in that the method comprises:
acquiring gait data acquired through a depth sensor;
processing the gait data through an initial gait recognition model to obtain gait motion marking data;
matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image;
and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
2. The method of claim 1, further comprising:
processing the gait data acquired by the depth sensor to obtain a gait image and skeleton data;
and storing the gait image and the skeleton data separately.
3. The method of claim 1, further comprising:
acquiring a skeleton data training sample marked with a gait motion label;
and training a neural network model on the training samples to obtain an initial gait recognition model.
4. The method of claim 3, further comprising:
and adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, wherein the target gait recognition model is used for recognizing the gait motion in the gait data.
5. The method according to claim 1, wherein the matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image comprises:
performing graphic visualization processing on the gait motion marking data, arranging the visualized data in chronological order, and displaying them on a time axis of an interactive interface;
and, at each time frame position on the time axis of the interactive interface, adding the image information of the current action from the gait image corresponding to the gait data to obtain a visual gait image.
6. The method of claim 1, wherein the correcting the gait motion marking data based on the visual gait image to obtain target gait motion data comprises:
recording, in response to an operation performed by a target object on the gait motion marking data in the visual gait image, the operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain the target gait motion data.
7. The method of claim 6, wherein the operation content comprises one or more of modifying, deleting, or adding the gait motion marking data.
8. An interactive gait recognition apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring gait data acquired by the depth sensor;
the processing unit is used for processing the gait data through an initial gait recognition model to obtain gait motion marking data;
the generating unit is used for matching the gait image corresponding to the gait data with the gait action marking data to generate a visual gait image;
and the correcting unit is used for correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
9. A storage medium storing executable instructions which, when executed by a processor, implement the interactive gait recognition method of any of claims 1-7.
10. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program, the program being particularly for implementing the interactive gait recognition method according to any of claims 1 to 7.
CN202210241812.XA 2022-03-11 2022-03-11 Interactive gait recognition method and device and electronic equipment Active CN114648810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210241812.XA CN114648810B (en) 2022-03-11 2022-03-11 Interactive gait recognition method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN114648810A true CN114648810A (en) 2022-06-21
CN114648810B CN114648810B (en) 2022-10-14

Family

ID=81993741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210241812.XA Active CN114648810B (en) 2022-03-11 2022-03-11 Interactive gait recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114648810B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912947A (en) * 2023-08-25 2023-10-20 东莞市触美电子科技有限公司 Intelligent screen, screen control method, device, equipment and storage medium thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101320423A (en) * 2008-06-26 2008-12-10 复旦大学 Low resolution gait recognition method based on high-frequency super-resolution
US20140288680A1 * 2013-03-15 2014-09-25 Nike, Inc. Monitoring Fitness Using a Mobile Device
CN107016686A (en) * 2017-04-05 2017-08-04 江苏德长医疗科技有限公司 Three-dimensional gait and motion analysis system
US20190076060A1 (en) * 2016-03-31 2019-03-14 Nec Solution Innovators, Ltd. Gait analyzing device, gait analyzing method, and computer-readable recording medium
US20190347059A1 (en) * 2018-05-14 2019-11-14 Schneider Electric Industries Sas Computer-implemented method and system for generating a mobile application from a desktop application
CN111966724A (en) * 2020-06-29 2020-11-20 北京津发科技股份有限公司 Interactive behavior data acquisition and analysis method and device based on human-computer interaction interface area automatic identification technology
CN112016497A (en) * 2020-09-04 2020-12-01 王海 Single-view Taijiquan action analysis and assessment system based on artificial intelligence
CN113052138A (en) * 2021-04-25 2021-06-29 广海艺术科创(深圳)有限公司 Intelligent contrast correction method for dance and movement actions


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARIF REZA ANWARY et al.: "Gait quantification and visualization for digital healthcare", Elsevier *
LUO Jian et al.: "3D human body modeling and variable-view recognition of abnormal gait", Journal of Image and Graphics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116912947A (en) * 2023-08-25 2023-10-20 东莞市触美电子科技有限公司 Intelligent screen, screen control method, device, equipment and storage medium thereof
CN116912947B (en) * 2023-08-25 2024-03-12 东莞市触美电子科技有限公司 Intelligent screen, screen control method, device, equipment and storage medium thereof

Also Published As

Publication number Publication date
CN114648810B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
KR102014385B1 (en) Method and apparatus for learning surgical image and recognizing surgical action based on learning
CN109557099B (en) Inspection device and inspection system
CN109069097B (en) Dental three-dimensional data processing device and method thereof
US8798346B2 (en) Image registration
US11842511B2 (en) Work analyzing system and work analyzing method
JP2006350577A (en) Operation analyzing device
JP6985856B2 (en) Information processing equipment, control methods and programs for information processing equipment
JP2019046095A (en) Information processing device, and control method and program for information processing device
CN114648810B (en) Interactive gait recognition method and device and electronic equipment
JP2018503410A (en) Assessment of attention deficits
CN113435236A (en) Home old man posture detection method, system, storage medium, equipment and application
CN110991292A (en) Action identification comparison method and system, computer storage medium and electronic device
CN113707279A (en) Auxiliary analysis method and device for medical image picture, computer equipment and medium
JP2010075354A (en) Blood capillary blood flow measurement apparatus, blood capillary blood flow measurement method, and program
US11989928B2 (en) Image processing system
JP2002063579A (en) Device and method for analyzing image
JP2016004354A (en) Determination method of body pose
CN114167993B (en) Information processing method and device
JP2020181467A (en) Meter reading system, meter reading method, and program
US20220087645A1 (en) Guided lung coverage and automated detection using ultrasound devices
JP2003227706A (en) Image measuring device and program therefor
CN114519804A (en) Human body skeleton labeling method and device and electronic equipment
CN113256625A (en) Electronic equipment and recognition device
JP2022185838A5 (en)
JP2018180521A (en) Endoscope device and measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant