CN114648810B - Interactive gait recognition method and device and electronic equipment - Google Patents
- Publication number
- CN114648810B (granted publication of application CN202210241812.XA)
- Authority
- CN
- China
- Prior art keywords
- gait
- data
- motion
- image
- interactive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/08—Learning methods
Abstract
The invention discloses an interactive gait recognition method, an interactive gait recognition device, and electronic equipment. The method comprises the following steps: acquiring gait data collected by a depth sensor; processing the gait data with an initial gait recognition model to obtain gait motion marking data; matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image; and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data. Gait motions can be displayed interactively and the marking data corrected conveniently, without purely manual marking; this mitigates the inaccurate recognition of the initial gait recognition model caused by interference factors such as the environment, and improves the accuracy of gait motion recognition.
Description
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an interactive gait recognition method, an interactive gait recognition device, and an electronic device.
Background
Nervous system disorders such as cerebrovascular disease, Parkinson's disease, and Alzheimer's disease are degenerative and cannot be cured, so early clinical diagnosis is important. In the early stage of these diseases, gait analysis is commonly used to provide clinical guidance information.
At present, gait is generally recognized by means of machine learning. During the capture of human gait motion, however, interference from the scene affects the accuracy of gait data acquisition and leads to inaccurate machine-learning parameter calculation, so the final gait recognition result is inaccurate.
Disclosure of Invention
In view of the above problems, the present invention provides an interactive gait recognition method, apparatus, and electronic device that improve the accuracy of gait recognition.
To achieve this purpose, the invention provides the following technical scheme:
an interactive gait recognition method, the method comprising:
acquiring gait data acquired through a depth sensor;
processing the gait data through an initial gait recognition model to obtain gait motion marking data;
matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image;
and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
Optionally, the method further comprises:
processing the gait data acquired by the depth sensor to obtain a gait image and bone data;
and respectively storing the gait image and the bone data.
Optionally, the method further comprises:
acquiring a bone data training sample marked with a gait motion label;
and carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the method further comprises:
and adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, wherein the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image includes:
carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence, and displaying the data on a time axis of an interactive interface;
and adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data includes:
responding to the operation of a target object on the gait motion marking data in the visual gait image, and recording the operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
An interactive gait recognition device, the device comprising:
the acquiring unit is used for acquiring gait data acquired by the depth sensor;
the processing unit is used for processing the gait data through an initial gait recognition model to obtain gait motion marking data;
the generating unit is used for matching the gait image corresponding to the gait data with the gait action marking data to generate a visual gait image;
and the correcting unit is used for correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
Optionally, the apparatus further comprises:
the data processing unit is used for processing the gait data acquired by the depth sensor to acquire a gait image and bone data;
and the data storage unit is used for respectively storing the gait image and the bone data.
Optionally, the apparatus further comprises:
the system comprises a sample acquisition unit, a data acquisition unit and a data acquisition unit, wherein the sample acquisition unit is used for acquiring a bone data training sample marked with a gait motion label;
and the training unit is used for carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the apparatus further comprises:
and the model adjusting unit is used for adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the generating unit includes:
the processing subunit is used for carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence and displaying the data on a time axis of an interactive interface;
and the information adding subunit is used for adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correction unit comprises:
the operation recording subunit is used for responding to the operation of the target object on the gait motion marking data in the visual gait image and recording the operation content corresponding to the operation;
and the correcting subunit is used for correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
A storage medium storing executable instructions which, when executed by a processor, implement an interactive gait recognition method as claimed in any one of the preceding claims.
An electronic device, comprising:
a memory for storing a program;
a processor configured to execute the program, the program being specifically configured to implement the interactive gait recognition method according to any of the above.
Compared with the prior art, the present invention provides an interactive gait recognition method, apparatus, and electronic device. The method comprises: acquiring gait data collected by a depth sensor; processing the gait data with an initial gait recognition model to obtain gait motion marking data; matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image; and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data. Gait motions can be displayed interactively and the marking data corrected conveniently, without purely manual marking; this mitigates the inaccurate recognition of the initial gait recognition model caused by interference factors such as the environment, and improves the accuracy of gait motion recognition.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing them are briefly introduced below. Obviously, the following drawings show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an interactive gait recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a time axis for labeling gait movements according to an embodiment of the invention;
fig. 3 is a schematic diagram of an operation option for interactive operation of visualization of a gait image according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an interactive gait recognition apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art without creative effort, based on the embodiments given herein, fall within the protection scope of the present invention.
The terms "first" and "second" and the like in the description, claims, and drawings of the present invention are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprising" and "having," and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed, but may include other steps or elements not listed.
Referring to fig. 1, a flow chart of an interactive gait recognition method according to an embodiment of the present invention is shown, and the method may include the following steps:
and S101, acquiring gait data acquired through a depth sensor.
The depth sensor measures the distance between an object in the environment and the sensor; its output mainly takes two forms, a depth map and point cloud data. In the application scenario of this embodiment, recognizing the gait motion of a target object, the gait data acquired by the depth sensor includes a gait image corresponding to the depth map and skeleton data of the target object corresponding to the point cloud data. To obtain the gait data, depth sensors can be worn at the skeletal joint points of the target object to be measured, such as the lumbar joint point, the left and right thigh joint points, the left and right knee joint points, the left and right toe joint points, and the left and right ankle joint points. The selection of specific joint points matches the actual application scenario, which the present invention does not limit.
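To make the one-to-one correspondence between images and skeleton data concrete, here is a minimal sketch of how one frame of gait data might be stored, pairing the saved image with per-joint 3-D positions. The joint names follow the examples above, while the class and function names are purely illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Joint names follow the examples in the text; the exact set is
# application-dependent.
JOINTS = [
    "lumbar",
    "left_thigh", "right_thigh",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
    "left_toe", "right_toe",
]

@dataclass
class GaitFrame:
    """One sensor frame: the saved gait image plus 3-D skeleton joints."""
    index: int                                   # frame number in the recording
    image_path: str                              # image saved for this frame
    joints: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)

def make_frame(index, image_path, coords):
    """Build a GaitFrame, keeping image and skeleton data in one-to-one
    correspondence, as the method stores them."""
    return GaitFrame(index=index, image_path=image_path,
                     joints=dict(zip(JOINTS, coords)))
```

Storing frames this way keeps every picture and its skeleton data addressable by the same frame index, which the later annotation steps rely on.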
And S102, processing the gait data through an initial gait recognition model to obtain gait motion marking data.
The initial gait recognition model is obtained by neural network training on samples labeled with gait motions: a skeleton data training sample marked with gait motion labels is acquired, and neural network model training is performed on it to obtain the initial gait recognition model.
However, the training samples of the initial gait recognition model are labeled from normal gait motions and do not take environmental factors into account, so the final recognition result may deviate. Environmental factors include, but are not limited to, the location of the target object, lighting, the target object's clothing, and so on.
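As a hedged illustration of the train-on-labelled-skeleton-samples workflow, the toy classifier below stands in for the neural network: it learns a linear decision boundary over flattened skeleton features from labelled examples. The patent's actual model is a neural network; this perceptron only sketches the training loop, and all names are assumptions:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Toy stand-in for neural-network training on labelled skeleton
    features: learns weights from (+1 / -1)-labelled examples by the
    classic mistake-driven perceptron update."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                        # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(model, x):
    """Classify one flattened skeleton feature vector."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On a small linearly separable toy set, 20 epochs are enough for convergence; a real gait model would instead use a deep network over joint trajectories across frames.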
And S103, matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image.
And S104, correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
The visualized gait image is a gait image matched with the gait motion marking data, and it is an image in which the gait motion marking data can be modified. The marking data can therefore be operated on through the visualized gait image and corrected to obtain the target gait motion data.
For example, if a stroke patient lifts the left foot with a circumduction (looping) movement, the sensor may misrecognize the action as the person turning around. After the video data and skeleton point data are visualized, the marker of each action node computed from the skeleton points (foot up, foot down, turn, and so on) corresponds to a certain time frame in the video, and the user can correct the erroneous places by dragging markers, adding them, or deleting them: for example, adding a "left foot up" marker at the time frame where the left foot actually lifts, or deleting a wrong marker. In this way, an exact match between the skeleton point data and the gait motions is achieved.
Gait motions can thus be displayed and modified interactively, overcoming the shortcomings of the traditional purely manual marking method and solving the inaccurate machine-learning parameter calculation caused by environmental interference with the motion capture acquisition equipment.
After the gait motion marking data is corrected through the interactive visual image to obtain target gait motion data, the initial gait recognition model can be adjusted based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing gait motions in the gait data. Therefore, the influence of environmental factors can be avoided, and the accuracy of model processing is improved.
In an implementation manner of the embodiment of the present invention, the matching the gait image corresponding to the gait data with the gait motion signature data to generate a visual gait image includes:
carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence, and displaying the data on a time axis of an interactive interface;
and adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
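The two steps above can be sketched as follows: the markers produced by the model are arranged in time order and each is paired with the image of its frame, ready for display on the interactive time axis. Function and field names are illustrative assumptions, not from the patent:

```python
def build_timeline(markers, images, fps=30):
    """Arrange model-produced gait markers (frame_index, label) in time
    order and attach the gait image of each marker's frame, for display
    on the interactive time axis."""
    timeline = []
    for frame, label in sorted(markers):         # time (frame) order
        timeline.append({
            "frame": frame,
            "time_s": frame / fps,               # position on the time axis
            "label": label,                      # e.g. "left_foot_up"
            "image": images[frame],              # matched image information
        })
    return timeline
```

Each timeline entry carries everything the interface needs at one time-frame position: the marker label plus the image information of the current action.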
Further, the correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data includes:
responding to the operation of a target object on the gait motion marking data in the visual gait image, and recording the operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Wherein the operation content comprises one or more of modification, deletion or addition of gait marking data.
Specifically, gait motion markers (turn start, left foot up, left foot down, right foot up, right foot down, and turn end) can be obtained by storing the motion videos acquired by the depth sensor (for example, at 30 frames per second) together with the skeleton point information of each frame, and segmenting the gait cycles with a machine-learning pre-trained model. The obtained markers are visualized graphically, arranged in time order, and displayed on the time axis of the interactive interface; the image information of the current motion can be seen at the position of each time frame, and the user can interactively modify, delete, and add the gait motion marking data until the markers reach the key frames matching the tester's actual gait motion.
Here a marker is an action marker within a gait cycle (covering the key actions of the cycle: turn start, left foot up, left foot down, right foot up, right foot down, and turn end). Each marker corresponds to a frame of the video: recording for 1 minute (60 seconds) at 30 frames per second yields 1800 frames, each corresponding to one picture, and the markers generated by the algorithm are adjusted to their correct positions with the frame as the minimum unit. In this way, the key actions of a gait cycle are visualized on the interface. Most gait actions are marked correctly, and the markers affected by the environment are displayed visually, so the user can see the erroneous markers at a glance and only needs to modify that small part rather than calibrating every marker from beginning to end, which saves time.
The gait data acquired by the motion capture sensor can thus be displayed graphically in advance and repaired manually, and the repaired data is run through the gait algorithm to obtain a correct gait analysis result. This is more efficient than purely manual labeling, more interpretable than pure machine learning, and more accurate than the raw acquired data, since the calculated parameters are more precise after manual calibration. The manually calibrated data can also serve as training data for the gait algorithm's model, continuously improving its accuracy.
For example, while the depth sensor captures the gait data of the target object, the image of each frame is saved (for example, 30 frames per second, i.e., 30 pictures of data). The skeleton data of each frame is recorded at the same time, so each picture and each frame of skeleton data correspond one to one.
The machine-learning initial recognition model performs automatic gait cycle recognition over the whole gait process, yielding six key points (turn start, left foot up, left foot down, right foot up, right foot down, and turn end). The key points and the frames recorded between them are placed on a time axis in the annotation interface, see fig. 2, with the frame (30 frames/second) as the minimum unit. Taking a 1-minute test as an example, 30 image frames per second give 1800 image frames in total; converting the 1800 pictures (frames) into time units, each frame on the time axis both represents a time and locates the image of the corresponding frame.
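The frame-to-time arithmetic used above (30 frames per second, so a 60-second test gives 1800 frames) can be captured in two small helpers; the names are illustrative:

```python
FPS = 30  # the example rate used in the text: 30 frames per second

def frame_to_time(frame, fps=FPS):
    """Frame index -> seconds on the annotation time axis."""
    return frame / fps

def time_to_frame(seconds, fps=FPS):
    """Time-axis position -> nearest frame index; the frame is the
    minimum editing unit."""
    return round(seconds * fps)
```

For the 1-minute example, frame 1800 maps to 60.0 seconds, and any time-axis position snaps back to its nearest frame.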
Taking "left foot up" as an example, the icons of the six key points are marked at their specific frame positions, as shown in fig. 2. When a key point is selected, the interface simultaneously displays the image of the current frame, and an inspector checks whether the left-foot-up state in the image is consistent with the actual acquisition. If it is, no adjustment is needed; if not, the inspector can select the current marker and delete it. If a key point is close to its correct position but still slightly off, its position can be adjusted, or a new key point marker can be created at the correct image frame. The saved marker files can then be used to train the gait analysis model, improving the accuracy of the gait analysis algorithm.
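The inspector's corrections (delete a wrong marker, drag one a few frames over, create a missing one) amount to simple edits on a frame-to-label mapping. A minimal sketch, with hypothetical function names:

```python
def add_marker(markers, frame, label):
    """Create a key-point marker at a frame (e.g. a missed 'left_foot_up')."""
    markers[frame] = label

def move_marker(markers, old_frame, new_frame):
    """Drag a marker that is a few frames off onto the correct frame."""
    markers[new_frame] = markers.pop(old_frame)

def delete_marker(markers, frame):
    """Remove a marker the model placed in error."""
    markers.pop(frame, None)
```

Because the frame is the minimum unit, every edit resolves to an integer frame index, which keeps the saved marker file unambiguous for later model training.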
The operation options for interacting with the visualized gait image are shown in fig. 3. "Clear all" removes all gait marker information from the interface, after which the user can add markers manually from scratch. "AI label" invokes the pre-trained model to regenerate the marker information from the beginning. "Undo" cancels the previous operation, for example when a marker has been deleted by mistake. "Redo" restores the last undone operation. "Delete" removes a selected marker after it has been chosen on the interface.
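The undo/redo behaviour described for these operation options can be implemented with two snapshot stacks over the marker mapping; the sketch below is one plausible design under assumed names, not the patent's actual implementation:

```python
class MarkerEditor:
    """Undo/redo over the marker dict via state snapshots."""

    def __init__(self, markers):
        self.markers = dict(markers)
        self._undo, self._redo = [], []

    def apply(self, fn, *args):
        """Run an edit, saving the previous state for undo."""
        self._undo.append(dict(self.markers))
        self._redo.clear()                  # a new edit invalidates redo history
        fn(self.markers, *args)

    def undo(self):
        if self._undo:
            self._redo.append(dict(self.markers))
            self.markers = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(dict(self.markers))
            self.markers = self._redo.pop()

def clear_all(markers):
    """The 'clear all' option: remove every gait marker."""
    markers.clear()

def delete_at(markers, frame):
    """The 'delete' option: remove the selected marker."""
    markers.pop(frame, None)
```

Snapshotting the whole dict is simple and adequate for the small marker sets here; a command-pattern design would scale better for large recordings.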
Compared with gait acquisition without a manual marking function, the method can solve the inaccurate gait data acquisition caused by environmental influence: the gait data is fed back to a graphical interface in advance so the gait motion markers can be manually calibrated and repaired, ensuring data accuracy. The visualization also allows checking whether the gait motion data is normal without restarting gait acquisition. All gait action markers are displayed on the interface, and the few markers affected by the environment can be modified by dragging, moving, deleting, and other graphical interactions, which greatly saves time and improves marking efficiency.
Referring to fig. 4, in an embodiment of the present invention, there is further provided an interactive gait recognition apparatus, including:
an acquiring unit 401, configured to acquire gait data acquired by a depth sensor;
the processing unit 402 is configured to process the gait data through an initial gait recognition model to obtain gait motion marking data;
a generating unit 403, configured to match a gait image corresponding to the gait data with the gait motion marking data, and generate a visual gait image;
a correcting unit 404, configured to correct the gait motion marking data based on the visualized gait image, so as to obtain target gait motion data.
Optionally, the apparatus further comprises:
the data processing unit is used for processing the gait data acquired by the depth sensor to acquire a gait image and bone data;
and the data storage unit is used for respectively storing the gait image and the bone data.
Optionally, the apparatus further comprises:
the system comprises a sample acquisition unit, a training unit and a training unit, wherein the sample acquisition unit is used for acquiring a bone data training sample marked with a gait motion label;
and the training unit is used for carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the apparatus further comprises:
and the model adjusting unit is used for adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the generating unit includes:
the processing subunit is used for carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence and displaying the data on a time axis of an interactive interface;
and the information adding subunit is used for adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correction unit comprises:
the operation recording subunit is used for responding to the operation of the target object on the gait motion marking data in the visual gait image and recording the operation content corresponding to the operation;
and the correcting subunit is used for correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
The invention provides an interactive gait recognition device, comprising: an acquiring unit that acquires gait data collected by a depth sensor; a processing unit that processes the gait data through an initial gait recognition model to obtain gait motion marking data; a generating unit that matches the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image; and a correcting unit that corrects the gait motion marking data based on the visual gait image to obtain target gait motion data. Gait motions can be displayed interactively and the marking data corrected conveniently, without purely manual marking; this mitigates the inaccurate recognition of the initial gait recognition model caused by interference factors such as the environment, and improves the accuracy of gait motion recognition.
Based on the foregoing embodiments, in another embodiment of the present invention, a storage medium is further provided, where the storage medium stores executable instructions, and the instructions, when executed by a processor, implement the interactive gait recognition method according to any one of the above.
Correspondingly, in another embodiment of the present invention, an electronic device is further provided, including:
a memory for storing a program;
a processor configured to execute the program, the program being specifically configured to implement:
acquiring gait data acquired by a depth sensor;
processing the gait data through an initial gait recognition model to obtain gait motion marking data;
matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image;
and correcting the gait motion marking data based on the visual gait image to obtain target gait motion data.
Optionally, the method further comprises:
processing the gait data acquired by the depth sensor to obtain a gait image and skeleton data;
and respectively storing the gait image and the bone data.
Optionally, the method further comprises:
acquiring a bone data training sample marked with a gait motion label;
and carrying out neural network model training on the training samples to obtain an initial gait recognition model.
Optionally, the method further comprises:
and adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, wherein the target gait recognition model is used for recognizing the gait motion in the gait data.
Optionally, the matching the gait image corresponding to the gait data with the gait motion marking data to generate a visual gait image includes:
carrying out graphic visualization processing on the gait motion marking data, arranging the data after the graphic visualization processing according to an event sequence, and displaying the data on a time axis of an interactive interface;
and adding image information of the current action in the gait image corresponding to the gait data at the position of each time frame of the time axis of the interactive interface to obtain a visual gait image.
Optionally, the correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data includes:
responding to the operation of a target object on the gait motion marking data in the visual gait image, and recording the operation content corresponding to the operation;
and correcting the gait motion marking data based on the operation content to obtain target gait motion data.
Optionally, the operational content includes one or more of modification, deletion or addition of gait marker data.
It should be noted that, in the present embodiment, reference may be made to the corresponding contents in the foregoing, and details are not described here.
The emphasis of each embodiment in the present specification is on the difference from the other embodiments, and the same and similar parts among the various embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. An interactive gait recognition method, characterized in that the method comprises:
acquiring gait data acquired through a depth sensor;
processing the gait data through an initial gait recognition model to obtain gait motion marking data;
matching the gait image corresponding to the gait data with the gait motion marking data to generate a visualized gait image, which includes: performing graphic visualization processing on the gait motion marking data, arranging the processed data according to the event sequence, and displaying the data on a time axis of an interactive interface; and adding, at each time-frame position on the time axis of the interactive interface, the image information of the current action from the gait image corresponding to the gait data, to obtain the visualized gait image;
correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data, which includes: responding to an operation performed by a target object on the gait motion marking data in the visualized gait image, and recording the operation content corresponding to the operation; and correcting the gait motion marking data based on the operation content to obtain the target gait motion data;
and adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, wherein the target gait recognition model is used for recognizing gait motions in the gait data.
2. The method of claim 1, further comprising:
processing the gait data acquired by the depth sensor to obtain the gait image and bone data;
storing the gait image and the bone data respectively.
3. The method of claim 1, further comprising:
acquiring a bone data training sample marked with a gait motion label;
and performing neural network model training on the training sample to obtain the initial gait recognition model.
4. The method according to claim 1, wherein the operation content includes one or more of modification, deletion, or addition of the gait marker data.
5. An interactive gait recognition apparatus, characterized in that the apparatus comprises:
the acquiring unit is used for acquiring gait data acquired by the depth sensor;
the processing unit is used for processing the gait data through an initial gait recognition model to obtain gait motion marking data;
the generating unit is used for matching the gait image corresponding to the gait data with the gait motion marking data to generate a visualized gait image, which includes: performing graphic visualization processing on the gait motion marking data, arranging the processed data according to the event sequence, and displaying the data on a time axis of an interactive interface; and adding, at each time-frame position on the time axis of the interactive interface, the image information of the current action from the gait image corresponding to the gait data, to obtain the visualized gait image;
the correction unit is used for correcting the gait motion marking data based on the visualized gait image to obtain target gait motion data, which includes: responding to an operation performed by a target object on the gait motion marking data in the visualized gait image, and recording the operation content corresponding to the operation; and correcting the gait motion marking data based on the operation content to obtain the target gait motion data;
and the model adjusting unit is used for adjusting the initial gait recognition model based on the target gait motion data to obtain a target gait recognition model, and the target gait recognition model is used for recognizing gait motions in the gait data.
6. A storage medium storing executable instructions which, when executed by a processor, implement the interactive gait recognition method according to any one of claims 1 to 4.
7. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program, wherein the program is specifically configured to implement the interactive gait recognition method according to any one of claims 1 to 4.
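The overall method of claim 1 can be summarized as one interactive cycle: predict marks with the initial model, collect corrections through the visualized gait image, and adjust the model with the resulting target data. The following is a hedged sketch with placeholder callables (`predict`, `collect_corrections`, `adjust_model` are hypothetical names), not the patented implementation:

```python
# Hypothetical sketch of one pass of the claimed interactive loop.

def interactive_cycle(gait_data, predict, collect_corrections, adjust_model):
    """Run one recognize-correct-adjust pass of interactive gait recognition."""
    marks = predict(gait_data)                         # initial gait motion marking data
    corrected = collect_corrections(gait_data, marks)  # user corrections via the visual timeline
    adjusted = adjust_model(gait_data, corrected)      # target gait recognition model
    return corrected, adjusted
```

In practice `adjust_model` would fine-tune the initial neural network on the corrected target gait motion data, so that later predictions need fewer manual corrections.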
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210241812.XA CN114648810B (en) | 2022-03-11 | 2022-03-11 | Interactive gait recognition method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114648810A CN114648810A (en) | 2022-06-21 |
CN114648810B true CN114648810B (en) | 2022-10-14 |
Family
ID=81993741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210241812.XA Active CN114648810B (en) | 2022-03-11 | 2022-03-11 | Interactive gait recognition method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114648810B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116912947B (en) * | 2023-08-25 | 2024-03-12 | 东莞市触美电子科技有限公司 | Intelligent screen, screen control method, device, equipment and storage medium thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107016686A (en) * | 2017-04-05 | 2017-08-04 | 江苏德长医疗科技有限公司 | Three-dimensional gait and motion analysis system |
CN111966724A (en) * | 2020-06-29 | 2020-11-20 | 北京津发科技股份有限公司 | Interactive behavior data acquisition and analysis method and device based on human-computer interaction interface area automatic identification technology |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101320423A (en) * | 2008-06-26 | 2008-12-10 | 复旦大学 | Low resolution gait recognition method based on high-frequency super-resolution |
US9087234B2 (en) * | 2013-03-15 | 2015-07-21 | Nike, Inc. | Monitoring fitness using a mobile device |
JP6662532B2 (en) * | 2016-03-31 | 2020-03-11 | Necソリューションイノベータ株式会社 | Gait analyzer, gait analysis method, and program |
EP3570164B1 (en) * | 2018-05-14 | 2023-04-26 | Schneider Electric Industries SAS | Method and system for generating a mobile application from a desktop application |
CN112016497A (en) * | 2020-09-04 | 2020-12-01 | 王海 | Single-view Taijiquan action analysis and assessment system based on artificial intelligence |
CN113052138B (en) * | 2021-04-25 | 2024-03-15 | 广海艺术科创(深圳)有限公司 | Intelligent contrast correction method for dance and movement actions |
Non-Patent Citations (1)
Title |
---|
3D human body modeling and variable-view recognition of abnormal gait; Luo Jian et al.; Journal of Image and Graphics (中国图象图形学报); 2020-08-12 (Issue 08); pp. 31-42 *
Also Published As
Publication number | Publication date |
---|---|
CN114648810A (en) | 2022-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109557099B (en) | Inspection device and inspection system | |
KR102014385B1 (en) | Method and apparatus for learning surgical image and recognizing surgical action based on learning | |
CN109069097B (en) | Dental three-dimensional data processing device and method thereof | |
US11842511B2 (en) | Work analyzing system and work analyzing method | |
JP6262406B1 (en) | Assessment of attention deficits | |
JP6985856B2 (en) | Information processing equipment, control methods and programs for information processing equipment | |
JP2006350577A (en) | Operation analyzing device | |
CN110123257A (en) | A kind of vision testing method, device, sight tester and computer storage medium | |
CN114648810B (en) | Interactive gait recognition method and device and electronic equipment | |
JP2019046095A (en) | Information processing device, and control method and program for information processing device | |
JP2007052575A (en) | Metadata applying device and metadata applying method | |
CN113808125A (en) | Medical image processing method, focus type identification method and related product | |
CN113707279B (en) | Auxiliary analysis method and device for medical image picture, computer equipment and medium | |
JP2010075354A (en) | Blood capillary blood flow measurement apparatus, blood capillary blood flow measurement method, and program | |
US11850090B2 (en) | Guided lung coverage and automated detection using ultrasound devices | |
JP2016004354A (en) | Determination method of body pose | |
CN112347837A (en) | Image processing system | |
CN114167993B (en) | Information processing method and device | |
CN114520044A (en) | Retina fundus image semi-automatic labeling device based on deep learning | |
JP2003227706A (en) | Image measuring device and program therefor | |
JP2013246149A (en) | Work position detection device and work position detection method | |
JP2020181466A (en) | Meter reading system, meter reading method, and program | |
US20220399118A1 (en) | Generation device and generation method | |
US20240135642A1 (en) | Endoscopic examination support apparatus, endoscopic examination support method, and recording medium | |
US20240138651A1 (en) | Endoscopic examination support apparatus, endoscopic examination support method, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||