CN111144171A - Abnormal crowd information identification method, system and storage medium - Google Patents

Abnormal crowd information identification method, system and storage medium

Info

Publication number
CN111144171A
CN111144171A (application CN201811303318.1A)
Authority
CN
China
Prior art keywords
gait
data
identification
identity
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811303318.1A
Other languages
Chinese (zh)
Inventor
张曼 (Zhang Man)
黄永祯 (Huang Yongzhen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yinhe Shuidi Technology Ningbo Co ltd
Original Assignee
Watrix Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Watrix Technology Beijing Co Ltd filed Critical Watrix Technology Beijing Co Ltd
Priority to CN201811303318.1A
Publication of CN111144171A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention relates to an abnormal crowd information identification method, system and storage medium. The identification method comprises the following steps: acquiring gait preprocessing data for the walking of each person in a video image; obtaining corresponding identity characteristic data from the gait preprocessing data; for each group of identity characteristic data, counting as its abnormal number the number of other identity characteristic data whose similarity value with it is smaller than a preset threshold; and when any abnormal number is larger than a preset number, taking the identity characteristic data corresponding to that abnormal number as abnormal characteristic data. By acquiring gait preprocessing data for each person in the video image, deriving corresponding identity characteristic data from it, and judging whether the identity characteristic data of different people are similar, the embodiment of the invention finds the people whose identity characteristic data are inconsistent with those of the other people in the video image and takes their identity characteristic data as abnormal characteristic data, thereby assisting security personnel in monitoring the video image and improving the handling efficiency of emergencies.

Description

Abnormal crowd information identification method, system and storage medium
Technical Field
The invention relates to the technical field of biometric identification, and in particular to an abnormal crowd information identification method, system and storage medium.
Background
With the demand for higher security levels in public places around the world and the wide deployment of video surveillance technology, intelligent monitoring has become a very active field in computer vision. Within intelligent monitoring, identifying human identity at a distance in a surveillance scene is a challenging direction with good application prospects; it therefore has both scientific and commercial value, and its deep study has theoretical and practical significance.
Due to increasing safety requirements, video surveillance is used more and more widely in public places such as railway stations, airports, subways, buses, roads and markets. Real-time monitoring of emergencies or abnormal persons by security personnel can improve the safety of public places and the handling efficiency of emergencies; however, security personnel cannot stay attentive for long periods, which easily leads to false alarms, delayed alarms and missed alarms.
Disclosure of Invention
In order to solve the problems in the prior art, at least one embodiment of the present invention provides an abnormal crowd information identification method, system and storage medium.
In a first aspect, an embodiment of the present invention provides an abnormal crowd information identification method, where the identification method includes:
acquiring gait preprocessing data of walking of each person in the video image;
respectively extracting identity characteristics of each gait preprocessing data to respectively obtain corresponding identity characteristic data;
respectively calculating similarity values of each group of the identity characteristic data and other identity characteristic data;
for each group of identity characteristic data, counting as an abnormal number the number of other identity characteristic data whose similarity value with it is smaller than a preset threshold;
and when any abnormal quantity is larger than the preset quantity, taking the identity characteristic data corresponding to the abnormal quantity as abnormal characteristic data.
Based on the above technical solutions, the embodiments of the present invention may be further improved as follows.
With reference to the first aspect, in a first embodiment of the first aspect, the separately calculating the similarity value between each group of the identity feature data and other identity feature data specifically includes:
vectorizing all the identity characteristic data respectively to obtain corresponding identity characteristic data vectors;
and respectively calculating cosine values of each group of the identity characteristic data vectors and other identity characteristic data vectors as the similarity values.
With reference to the first aspect, in a second embodiment of the first aspect, the separately calculating the similarity value between each group of the identity feature data and other identity feature data specifically includes:
vectorizing all the identity characteristic data respectively to obtain corresponding identity characteristic data vectors;
and respectively calculating Euclidean distances between each group of the identity characteristic data vectors and other identity characteristic data vectors as the similarity value.
With reference to the first aspect, in a third embodiment of the first aspect, the performing identity feature extraction on each piece of gait preprocessing data to obtain corresponding identity feature data respectively specifically includes:
inputting the gait preprocessing data into a recognition model;
and acquiring the identity characteristic data output after the identification model identifies the gait preprocessing data.
With reference to the third embodiment of the first aspect, in a fourth embodiment of the first aspect, the training method for the recognition model includes:
s11, acquiring first gait preprocessing data of walking of a person in any video image;
s12, acquiring second gait preprocessing data of the walking of the person in the arbitrary video image;
and S13, inputting the second gait preprocessing data and the first gait preprocessing data into a recognition model for identity recognition and data discrimination respectively, and performing parameter training iterations on the recognition model according to the identity recognition result and the data discrimination result to obtain the trained recognition model.
In combination with the fourth embodiment of the first aspect, in the fifth embodiment of the first aspect,
the S13 specifically includes:
s21, generating a second step energy map according to the second gait preprocessing data, and generating a first step energy map according to the first gait preprocessing data;
s22, mixing the second step state energy diagram and the first step state energy diagram to obtain a plurality of recognition gait energy diagrams of different action postures, and inputting the recognition gait energy diagrams into the recognition model for identity recognition and data recognition;
s23, respectively obtaining the identification probability distribution of each identification gait energy image as the identification result;
s24, confirming whether the identification of each identification gait energy image of the identification model to the second step energy image or the first step energy image is accurate, and taking the identification as the data identification result;
s25, calculating identification loss according to the identification result of each identification gait energy graph;
s26, calculating data true and false judgment loss according to the data identification result of each identification gait energy diagram;
s27, judging whether the identity recognition loss and the data true and false judgment loss both meet preset conditions;
if yes, outputting the identification model; if not, updating the preset parameters of the recognition model by a loss back propagation method, and carrying out S21-S27.
With reference to the fifth embodiment of the first aspect, in the sixth embodiment of the first aspect,
the calculating of the identity recognition loss according to the identity recognition result of each recognition gait energy map specifically comprises:
calculating the identity recognition loss as follows:

$$l_I = \frac{1}{n}\sum_{i=1}^{n} l_I^{(i)}, \qquad l_I^{(i)} = -\log\frac{\exp(x_i[y_i])}{\sum_{j=1}^{c}\exp(x_i[j])}$$

where $l_I$ is the identity recognition loss; $n$ is the number of recognition gait energy maps; $l_I^{(i)}$ is the identity recognition loss of the $i$-th recognition gait energy map; $c$ is the number of compared identities; $\exp$ is the exponential function with the natural constant as its base; $x_i[j]$ is the identity recognition probability of the $j$-th compared identity for the $i$-th recognition gait energy map; $x_i[y_i]$ is the identity recognition probability of the $y_i$-th compared identity; and $y_i$ is the number of the real identity corresponding to the video image;
the calculating of the data authenticity discrimination loss according to the data discrimination result of each recognition gait energy map specifically comprises:
calculating the data authenticity discrimination loss as follows:

$$l_D = -\bigl(m_n \log k_n + (1 - m_n)\log(1 - k_n)\bigr)$$

where $l_D$ is the data authenticity discrimination loss; $m_n$ is 1 when the $n$-th recognition gait energy map is a second gait energy map, and 0 when it is a first gait energy map; and $k_n$ is the confidence that the $n$-th recognition gait energy map is a second gait energy map: when $k_n$ is larger than a preset threshold, the recognition gait energy map is judged to be a second gait energy map, and when $k_n$ is smaller than or equal to the preset threshold, it is judged to be a first gait energy map.
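As a non-limiting sketch, the two training losses above can be written in NumPy; the function names and array shapes are illustrative choices, not part of the patent:

```python
import numpy as np

def identity_loss(logits, labels):
    """Softmax cross-entropy over c compared identities, averaged over
    the n recognition gait energy maps (the l_I of the description)."""
    logits = np.asarray(logits, dtype=float)              # shape (n, c)
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(len(labels)), labels].mean()

def discrimination_loss(m, k, eps=1e-12):
    """Binary cross-entropy l_D = -(m*log k + (1-m)*log(1-k)), where m is 1
    for a second (real) gait energy map and k is the model's confidence."""
    m, k = np.asarray(m, dtype=float), np.asarray(k, dtype=float)
    return float(np.mean(-(m * np.log(k + eps) + (1 - m) * np.log(1 - k + eps))))
```

With uniform logits over two identities the identity loss is log 2, and a confident correct discrimination drives the discrimination loss toward zero, as expected for cross-entropy losses.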
With reference to the first aspect or the first, second, third, fourth, fifth, or sixth embodiment of the first aspect, in a seventh embodiment of the first aspect, the acquiring gait preprocessing data of walking of each person in the video image specifically includes:
acquiring continuous frame images in a video image;
detecting an image area containing human figures from the continuous frame images by using a detection algorithm;
extracting the human figure outline representing the same person in each human figure image area to generate a human figure outline segmentation graph;
aligning the human shape center and the image center of each frame of the human shape contour segmentation image to generate a primary gait silhouette image;
scaling the human-shaped contour in each frame of primary gait silhouette image to a uniform resolution ratio to generate a gait silhouette image sequence;
respectively acquiring silhouette gait features of each frame of gait silhouette image in the gait silhouette image sequence to form silhouette gait preprocessing data;
calculating the correlation similarity of each frame of silhouette gait features in the silhouette gait preprocessing data;
performing weighted fusion on the silhouette gait features according to the correlation similarity to obtain the gait preprocessing data;
therefore, gait preprocessing data corresponding to each person in the video image are obtained respectively.
In a second aspect, an embodiment of the present invention provides an abnormal crowd information identification system, where the abnormal crowd information identification system includes a processor and a memory; the processor is configured to execute the abnormal crowd information identification program stored in the memory to implement the abnormal crowd information identification method according to any one of the first aspect.
In a third aspect, an embodiment of the present invention provides a storage medium, where the storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the method for identifying abnormal crowd information according to any one of the first aspect.
Compared with the prior art, the technical solution of the invention has the following advantages: by acquiring gait preprocessing data for each person in the video image, deriving corresponding identity characteristic data from it, and judging whether the identity characteristic data of different people are similar, the embodiment of the invention finds the people whose identity characteristic data are inconsistent with those of the other people in the video image and takes their identity characteristic data as abnormal characteristic data, thereby assisting security personnel in monitoring the video image and improving the handling efficiency of emergencies.
Drawings
Fig. 1 is a schematic flow chart of an abnormal crowd information identification method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an abnormal crowd information identification method according to another embodiment of the present invention;
fig. 3 is a first flowchart illustrating an abnormal crowd information identification method according to another embodiment of the present invention;
fig. 4 is a flowchart illustrating a second method for identifying abnormal crowd information according to another embodiment of the present invention;
fig. 5 is a third schematic flowchart of an abnormal crowd information identification method according to another embodiment of the present invention;
fig. 6 is a fourth schematic flowchart of an abnormal crowd information identification method according to yet another embodiment of the present invention;
fig. 7 is a fifth flowchart illustrating an abnormal crowd information identification method according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of an abnormal crowd information identification system according to yet another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, an abnormal crowd information identification method provided in an embodiment of the present invention includes:
and S11, acquiring gait preprocessing data of walking of each person in the video image.
In this embodiment, the gait preprocessing data may be obtained, for each person, from gait silhouette images that are derived from the human body contours in each frame of the video image.
As shown in fig. 2, acquiring gait preprocessing data of walking of a person in a video image comprises:
and S21, acquiring continuous frame images in the video images.
A frame is a single still picture, the smallest unit of a motion picture, equivalent to a single exposure on motion picture film; consecutive frames form a moving picture, such as a television image. In this step, consecutive frame images are acquired from the video image.
S22, an image region including a human figure is detected from the continuous frame image by a detection algorithm.
For example, a moving object in an image may be detected by a background subtraction method, and a contour of the moving object may be detected to confirm a human-shaped image region, or another image detection method may be used to detect a continuous frame image to confirm a human-shaped image region in the continuous frame image.
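A minimal, hypothetical sketch of such a background-subtraction detector in NumPy (a median background with a fixed difference threshold; a real system would use a proper detection algorithm and grayscale video frames):

```python
import numpy as np

def detect_humanoid_regions(frames, thresh=30):
    """Median-background subtraction: estimate the static scene as the
    per-pixel median over frames, then mark pixels that differ strongly
    as foreground and bound them with a box per frame."""
    stack = np.stack(frames).astype(np.int16)   # (t, h, w) grayscale frames
    background = np.median(stack, axis=0)       # static scene estimate
    masks = np.abs(stack - background) > thresh # per-frame foreground mask
    boxes = []
    for mask in masks:
        ys, xs = np.nonzero(mask)
        boxes.append(None if len(ys) == 0
                     else (xs.min(), ys.min(), xs.max(), ys.max()))
    return masks, boxes
```

The returned boxes correspond to the human-shaped image regions referred to above; contour extraction would then operate inside them.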
And S23, extracting the human-shaped outlines representing the same person in each human-shaped image area, and generating a human-shaped outline segmentation graph.
In this embodiment, the human-shaped contour in each human-shaped image region of each frame is extracted to obtain a continuous human-shaped contour segmentation map for the video image. The image is first binarized; the binarized image is then processed with a dilation filter operator and an erosion operator to filter out noise and fill holes; finally, boundary tracking is performed on the processed image to obtain the corresponding human-shaped contour.
Specifically, the method of S23 specifically includes: the background frame of the frame image is obtained based on the intermediate value method. And detecting a moving target in the frame image by using a background difference method according to the background frame, and performing binary segmentation on the frame image. Processing the frame image after binary segmentation by using a corrosion operator and an expansion filtering operator, and filling holes in the frame image according to connectivity analysis; and carrying out boundary tracking on the frame image to obtain a human-shaped contour, and generating a human-shaped contour segmentation graph.
And S24, aligning the human figure center and the image center of each human figure outline segmentation image to generate a primary gait silhouette image.
The human-shape centre of each human-shaped contour segmentation map is aligned with the image centre, so that every frame of the walking video is adjusted consistently, yielding a primary gait silhouette image.
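The alignment of S24 can be sketched as shifting the silhouette so that its centroid coincides with the image centre (a simplified illustration; `np.roll` wraps pixels around the border, which a production implementation would avoid):

```python
import numpy as np

def center_silhouette(mask):
    """Shift a binary silhouette so its centroid sits at the image centre."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()               # silhouette centroid
    h, w = mask.shape
    dy, dx = int(round(h / 2 - cy)), int(round(w / 2 - cx))
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
```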
And S25, scaling the human-shaped contour in each frame of primary gait silhouette image to a uniform resolution ratio to generate a gait silhouette image sequence.
And adjusting the resolution of the primary gait silhouette images to enable the resolution of each primary gait silhouette image to be consistent, so as to obtain a gait silhouette image sequence.
And S26, respectively acquiring silhouette gait features of each frame of gait silhouette image in the gait silhouette image sequence to form silhouette gait preprocessing data.
Silhouette gait features are acquired for each frame of gait silhouette image. Gait features here refer to the characteristics of the magnitude, direction and point of application of force during walking as reflected in the footprints, that is, the reflection of a person's walking habits in the stages of foot-fall, foot-rise and supported swing. They generally include trace types such as impact marks, tread marks, push-off marks, drag marks, press marks, indentation marks, twist marks, lift marks, kick marks, dig marks, scratch marks, pick marks, slip marks and sweep marks.
And S27, calculating the relevance similarity of the silhouette gait features of each frame in the silhouette gait preprocessing data.
The correlation similarity between each frame's silhouette gait feature and the silhouette gait features of the other frames in the same gait sequence is calculated separately. In this embodiment, the correlation similarity may be a distance measure between two gait features, such as the Euclidean distance between frame features: the smaller the Euclidean distance, the larger the similarity between the frames' silhouette gait features.
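One possible reading of this step in NumPy, mapping the mean Euclidean distance from each frame's feature to the other frames' features into a similarity in (0, 1] (the exact distance-to-similarity mapping is an assumption):

```python
import numpy as np

def correlation_similarity(features):
    """Per-frame similarity: mean Euclidean distance to the other frames,
    mapped so that smaller distance gives larger similarity."""
    f = np.asarray(features, dtype=float)       # (n_frames, dim)
    diff = f[:, None, :] - f[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))         # pairwise distance matrix
    n = len(f)
    mean_other = dist.sum(1) / (n - 1)          # exclude self (distance 0)
    return 1.0 / (1.0 + mean_other)
```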
And S28, performing weighted fusion on the silhouette gait features according to the correlation similarity to obtain gait preprocessing data.
As shown in fig. 3, the method for obtaining the corresponding gait preprocessing data by fusing the gait features of each silhouette according to the correlation similarity includes:
and S31, calculating the equal error rate of the corresponding silhouette gait characteristics used for the characteristic identification process according to each correlation similarity.
The credibility of different gait features during fusion is not uniform; the accuracy of a feature for identification is commonly expressed by its Equal Error Rate (EER). The higher the EER, the worse the feature's performance and the lower its credibility during fusion. The larger the correlation similarity, the closer the silhouette gait feature is to the real gait feature, so a gait feature with larger correlation similarity has a lower equal error rate. In this embodiment, the correlation similarity is computed from the Euclidean distance between the silhouette gait feature and the real gait feature: the smaller the Euclidean distance, the higher the correlation similarity, and the lower the corresponding equal error rate.
And S32, respectively calculating the weight values of the corresponding silhouette gait characteristics according to each equal error rate.
The weight values of the corresponding silhouette gait features are calculated from the equal error rates of the different silhouette gait features as follows:

$$w_n = \frac{1/e_n}{\sum_{m=1}^{M} 1/e_m}$$

where $w_n$ is the weight value of the $n$-th silhouette gait feature, $e_n$ is the equal error rate of the $n$-th silhouette gait feature, and $M$ is the total number of silhouette gait features.
And S33, performing weighted fusion on all silhouette gait features according to corresponding weight values to obtain gait preprocessing data.
And performing weighted fusion on all silhouette gait features according to corresponding weight values, so that the weighting proportion of the silhouette gait features with smaller weight values in the fusion process is smaller, the error of the final gait preprocessing data is reduced, and the accuracy of the gait preprocessing data is improved.
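Under the assumption that the weights are inverse to the equal error rates and normalised to sum to one (consistent with the text's rule that a higher EER gives a lower weight), S32 and S33 can be sketched as:

```python
import numpy as np

def eer_weights(eers):
    """Inverse-EER weighting: higher equal error rate -> lower weight,
    normalised so the weights sum to 1."""
    inv = 1.0 / np.asarray(eers, dtype=float)
    return inv / inv.sum()

def fuse_features(features, eers):
    """Weighted fusion of per-frame silhouette gait features into one
    gait-preprocessing feature vector."""
    w = eer_weights(eers)
    return (w[:, None] * np.asarray(features, dtype=float)).sum(axis=0)
```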
And S29, respectively obtaining gait preprocessing data corresponding to each person in the video image.
Through the steps, each person in the video image is processed respectively, and gait preprocessing data corresponding to each person is obtained.
And S12, respectively extracting identity characteristics of each gait preprocessing data to respectively obtain corresponding identity characteristic data.
In this embodiment, S12 specifically includes:
and inputting the generated gait preprocessing data into the recognition model, and carrying out identity recognition on the gait preprocessing data through the recognition model to obtain corresponding identity characteristic data. And carrying out identity recognition on the gait preprocessing data through the recognition model to obtain identity characteristic data corresponding to gait characteristics.
And S13, respectively calculating similarity values of each group of the identity characteristic data and other identity characteristic data.
In this embodiment, similarity values are calculated between each recognized identity characteristic data and the identity characteristic data of the other people, in order to judge whether they match and whether the identity characteristic data is abnormal, so that the people in the video image are monitored and a timely alert can be given when abnormal identity characteristic data appears.
As shown in fig. 4, calculating the similarity value between each group of identity characteristic data and the other identity characteristic data comprises:
and S41, vectorizing all the identity characteristic data respectively to obtain identity characteristic data vectors.
In this embodiment, the identity characteristic data are digitized to form the corresponding identity characteristic data vector: for example, the magnitude, direction and point of application of force during walking are each digitized, the point of application being representable by pixel coordinates in the corresponding picture. Each identity characteristic data includes the person's walking features in different postures, and other data may also serve as components of the vector.
And S42, respectively calculating cosine values of each group of the identity characteristic data vectors and other identity characteristic data vectors as the similarity values.
In this embodiment, the cosine distance, also called cosine similarity, measures the difference between two individuals by the cosine of the angle between their vectors in a vector space, and is one method of computing a similarity value. Individual index data are first mapped into the vector space; the similarity between two individual vectors is then measured by the cosine of the angle between them in the inner product space. The closer the angle between two vectors is to 0 degrees, i.e. the larger the cosine value, the higher the similarity between the two individuals; conversely, the closer the angle is to 180 degrees, the smaller the cosine value and the lower the similarity. A distance measure such as the Hamming distance could in principle also be used as the similarity value, but since the gait preprocessing data of different people are never exactly identical, a similarity computed from the Hamming distance has a large error.
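A small NumPy sketch of the cosine-similarity computation of S42 (vectors are assumed non-zero):

```python
import numpy as np

def cosine_similarity_matrix(vectors):
    """Pairwise cosine similarity between identity feature vectors:
    cos(theta) = (a . b) / (|a| |b|)."""
    v = np.asarray(vectors, dtype=float)
    unit = v / np.linalg.norm(v, axis=1, keepdims=True)  # L2-normalise rows
    return unit @ unit.T
```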
As shown in fig. 5, calculating the similarity value between each group of identity characteristic data and the other identity characteristic data may alternatively comprise:
and S51, vectorizing all the identity characteristic data respectively to obtain corresponding identity characteristic data vectors.
In this embodiment, the identity characteristic data are digitized to form the corresponding identity characteristic data vector: for example, the magnitude, direction and point of application of force during walking are each digitized, the point of application being representable by pixel coordinates in the corresponding picture. Each identity characteristic data includes the person's walking features in different postures, and other data may also serve as components of the vector.
And S52, respectively calculating Euclidean distances between each group of the identity characteristic data vectors and other identity characteristic data vectors to serve as the similarity value.
The Euclidean distance, a commonly used definition of distance, is the true distance between two points in an m-dimensional space, or equivalently the natural length of a vector; in two- or three-dimensional space it is the actual distance between two points. In this embodiment, the Euclidean distance between each group of identity characteristic data vectors is calculated; the identity characteristic data vectors need to be normalized before the calculation, to reduce gait feature errors caused by physical differences between people.
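A sketch of S52 with the per-vector normalisation mentioned above (L2 normalisation is an assumption; the patent does not specify the scheme):

```python
import numpy as np

def normalized_euclidean_matrix(vectors):
    """Pairwise Euclidean distance after per-vector L2 normalisation,
    reducing scale differences between people's feature vectors."""
    v = np.asarray(vectors, dtype=float)
    unit = v / np.linalg.norm(v, axis=1, keepdims=True)
    diff = unit[:, None, :] - unit[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))
```

After normalisation, two vectors that differ only in scale have distance zero, which is the point of the normalisation step.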
And S14, counting the number of the identity characteristic data of each group, the similarity value of which with other identity characteristic data is smaller than a preset threshold value, as an abnormal number.
In this embodiment, when the similarity value between two identity characteristic data is smaller than the preset threshold, the two are considered dissimilar; for each identity characteristic data, the number of other identity characteristic data dissimilar to it is counted, and this count is its abnormal number.
And S15, when any abnormal quantity is larger than the preset quantity, taking the identity characteristic data corresponding to the abnormal quantity as abnormal characteristic data.
In this embodiment, when the number of identity characteristic data dissimilar to a given identity characteristic data is larger than the preset number, that identity characteristic data is inconsistent with the identity characteristic data of the other people in the same scene and can be judged to be abnormal characteristic data. For example, in a station environment, a person flagged in this way may be a thief: a thief's gait features differ markedly from those of normal passengers, so the derived identity features are also inconsistent, and normal passengers greatly outnumber thieves. The abnormal characteristic data may also belong to a disabled person: disabled people need social care and are more likely to meet with accidents, and their gait features differ greatly from those of able-bodied people, so taking their identity characteristic data as abnormal data helps security personnel pay attention to them and reduce accidents. Likewise, the gait preprocessing data of people with physical problems, such as heart patients or pregnant women, differ noticeably from those of ordinary people.
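Putting S13 to S15 together, a hypothetical end-to-end sketch using cosine similarity (the similarity and count thresholds are illustrative, not values from the patent):

```python
import numpy as np

def flag_abnormal(vectors, sim_threshold=0.9, count_threshold=2):
    """Compute pairwise cosine similarity between identity feature
    vectors, count for each person how many others they are dissimilar
    to (the 'abnormal number'), and flag those above count_threshold."""
    v = np.asarray(vectors, dtype=float)
    unit = v / np.linalg.norm(v, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 1.0)                       # never dissimilar to self
    abnormal_count = (sim < sim_threshold).sum(axis=1)
    return abnormal_count, np.nonzero(abnormal_count > count_threshold)[0]
```

With four near-identical walkers and one outlier, only the outlier exceeds the count threshold and is returned as abnormal.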
As shown in fig. 6, an embodiment of the present invention further provides an abnormal population information identification method, and compared with the establishment method shown in fig. 1, the difference is that the training method of the identification model includes:
and S61, acquiring first gait preprocessing data of the walking of the person in any video image.
In this embodiment, the first gait preprocessing data may be obtained from user input, obtained by processing the video image with another trained generative model, or obtained by scanning the walking characteristics of the person in the video image; it may also be acquired from the video image through an existing gait feature extraction technique. The first gait preprocessing data describes the posture and behavior characteristics of the human body when walking: the body moves along a certain direction through a series of continuous activities of the hip, knee, ankle and toes. Gait involves factors such as behavioral habits, occupation, education, age and sex, and is also affected by various diseases. Control of walking is complex, including central commands, body balance and coordinated control; it involves coordinated movements of the joints and muscles of the lower extremities and is also associated with the posture of the upper extremities and the trunk. A fault in any link may affect gait, and abnormalities may be compensated for or masked. Normal gait has stability, periodicity and rhythmicity, directionality, coordination, and individual variability; these characteristics change significantly when a person is ill. Gait analysis is an examination method for studying walking patterns. Its aim is to reveal the key links and influencing factors of gait abnormalities through biomechanical and kinematic means, so as to guide rehabilitation assessment and treatment and to assist clinical diagnosis, efficacy assessment and mechanism research. In gait analysis, whether gait is normal is often described by special parameters, which generally include the following categories: gait cycle, kinematic parameters, kinetic parameters, electromyographic activity parameters, energy metabolism parameters and the like.
And S62, acquiring second gait preprocessing data of the walking of the person in any video image.
In this embodiment, second gait preprocessing data of the video image is obtained, such as a gait silhouette image derived from the human body contour: verification gait data is obtained from the gait silhouette image of each frame of the video image, and a corresponding second gait energy map is generated from the second gait preprocessing data.
In this step, the method for acquiring the second gait preprocessing data includes:
successive frame images in the video image are acquired.
A single frame image is a still picture; a frame is the smallest unit of a motion picture, equivalent to one exposure on motion picture film. Consecutive frames form a moving picture, such as a television image. In this step, the consecutive frame images in the video image are acquired.
An image region including a human figure is detected from the continuous frame images by a detection algorithm.
For example, a moving object in an image may be detected by a background subtraction method, and a contour of the moving object may be detected to confirm a human-shaped image region, or another image detection method may be used to detect a continuous frame image to confirm a human-shaped image region in the continuous frame image.
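A minimal numpy sketch of background-subtraction detection, illustrating the idea before the specific median-method procedure described later; the difference threshold of 25 is an assumed illustrative parameter, not a value from the patent.

```python
import numpy as np

def detect_moving_regions(frames, bg_threshold=25):
    """Background subtraction over a short clip of grayscale frames.

    frames: (t, h, w) uint8 grayscale frames.
    Returns the median-method background frame and, for every frame,
    a boolean foreground mask marking pixels that deviate from it.
    """
    frames = np.asarray(frames)
    # background frame obtained by the median over time (median method)
    background = np.median(frames, axis=0).astype(np.uint8)
    # absolute difference against the background, per frame
    diff = np.abs(frames.astype(int) - background.astype(int))
    masks = diff > bg_threshold   # foreground where the frame deviates
    return background, masks
```

Connected foreground pixels in `masks` would then be grouped into candidate human-shaped image regions.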
And extracting the human-shaped outline in the human-shaped image area to generate a human-shaped outline segmentation graph.
In this embodiment, the human-shaped contour in each human-shaped image region of each frame is extracted to obtain a continuous human-shaped contour segmentation map for the video image. The image is binarized, the binarized image is processed with dilation filtering and erosion operators to filter out noise and fill holes, and boundary tracking is performed on the processed image to obtain the corresponding human-shaped contour.
Specifically, extracting the human-shaped contour in the human-shaped image region and generating the human-shaped contour segmentation map comprises: obtaining the background frame of the frame images by the median method; detecting the moving target in each frame by background subtraction against the background frame and binarizing the frame; processing the binarized frame with erosion and dilation filtering operators and filling holes in the frame according to connectivity analysis; and performing boundary tracking on the frame to obtain the human-shaped contour and generate the human-shaped contour segmentation map.
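The erosion/dilation cleanup can be sketched with plain numpy, without assuming any particular image library. This is a hedged illustration: it uses a simple 4-neighbourhood structuring element and applies a closing (fill small holes) followed by an opening (remove speckle noise), which is one common way to realize the "filter noise, fill holes" step; the patent does not fix the exact operators.

```python
import numpy as np

def _dilate(mask):
    # 4-neighbourhood binary dilation via array shifts
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def _erode(mask):
    # erosion as the complement of a dilated complement
    # (out-of-bounds pixels effectively count as foreground)
    return ~_dilate(~mask)

def clean_silhouette(mask):
    """Closing (dilate then erode) fills one-pixel holes in the
    binarized silhouette; opening (erode then dilate) removes
    isolated speckle noise left by background subtraction."""
    closed = _erode(_dilate(mask))
    opened = _dilate(_erode(closed))
    return opened
```

Boundary tracking would then follow the outline of the cleaned mask to yield the human-shaped contour.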
In this embodiment, the human-shaped contour in a human-shaped image region may also be extracted through a semantic segmentation algorithm to generate the human-shaped contour segmentation map: each pixel in the picture is classified into one of two classes by a neural network, and the result is then adjusted by post-processing methods such as edge smoothing and threshold filtering to produce the final segmentation image.
And aligning the human figure center and the image center of each frame of the human figure contour segmentation image to generate a primary gait silhouette image.
The figure center of each human-shaped contour segmentation map is aligned with the image center, so that every segmentation map extracted from the walking video is adjusted consistently, yielding a primary gait silhouette image.
And scaling the human-shaped contour in each frame of primary gait silhouette image to a uniform resolution to generate a gait silhouette image sequence.
And adjusting the resolution of the primary gait silhouette images to enable the resolution of each primary gait silhouette image to be consistent, so as to obtain a gait silhouette image sequence.
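The centering and rescaling steps can be sketched as follows; the nearest-neighbour resampling and the output size used in the usage note are assumptions for illustration, as the patent does not specify a resolution.

```python
import numpy as np

def align_and_scale(mask, out_size=(64, 44)):
    """Center the silhouette in the frame, then nearest-neighbour
    resize to a fixed, uniform resolution.

    mask: (h, w) boolean silhouette; out_size: (rows, cols)."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    # shift so the silhouette centroid sits at the image center
    dy = int(round(h / 2 - ys.mean()))
    dx = int(round(w / 2 - xs.mean()))
    centred = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    # nearest-neighbour scaling to the uniform resolution
    oh, ow = out_size
    rows = np.arange(oh) * h // oh
    cols = np.arange(ow) * w // ow
    return centred[np.ix_(rows, cols)]
```

Applying this to every frame gives silhouettes of identical resolution, i.e. the gait silhouette image sequence.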
And respectively acquiring silhouette gait features of each frame of gait silhouette image in the gait silhouette image sequence to form silhouette gait preprocessing data.
Silhouette gait features are acquired from each frame of gait silhouette image. Gait features here refer to the magnitude, direction and point of application of the forces exerted during walking, as reflected in the footprints; that is, the reflection of a person's walking habits in the phases of foot-fall, foot-rise and support swing. They generally include marks such as bump marks, tread marks, push marks, traveling marks, sitting marks, pressing marks, indentation marks, twisting marks, lifting marks, kicking marks, digging marks, scratching marks, picking marks, slit marks, scratch marks and sweeping marks.
And calculating the relevance similarity value of each frame of silhouette gait features in the silhouette gait preprocessing data.
The correlation similarity value between each frame's silhouette gait feature and the silhouette gait features of the other frames in the same gait sequence is calculated. In this embodiment, the correlation similarity value may be a distance measure between two gait features, such as the Euclidean distance between the silhouette gait features of each pair of frames; the smaller the Euclidean distance, the greater the similarity between the features.
And performing weighted fusion on the silhouette gait features according to the correlation similarity values to obtain the second gait preprocessing data.
Through the above steps, the resulting second gait preprocessing data conforms more closely to the first gait preprocessing data, which improves the accuracy of gait recognition through the second gait energy map.
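A hedged sketch of the weighted fusion: each frame's feature is weighted by its average similarity to the other frames and the weighted average is returned. The mapping from Euclidean distance to similarity, 1/(1+d), is an assumption chosen to match the text's "smaller distance means larger similarity"; the patent does not fix the mapping.

```python
import numpy as np

def fuse_gait_features(features):
    """Weighted fusion of per-frame silhouette gait features.

    features: (t, d) array, one feature vector per frame.
    Frames that agree with the rest of the sequence get larger
    weights, so outlier frames contribute less to the fused result."""
    feats = np.asarray(features, dtype=float)
    # pairwise Euclidean distances between frame features
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    sim = 1.0 / (1.0 + d)          # assumed distance-to-similarity mapping
    np.fill_diagonal(sim, 0.0)     # ignore self-similarity
    weights = sim.sum(axis=1)
    weights /= weights.sum()
    return weights @ feats         # weighted fusion
```

An outlier frame is down-weighted, pulling the fused feature toward the consistent majority of frames.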
And S63, inputting the second gait preprocessing data and the first gait preprocessing data into the recognition model for identity recognition and data recognition respectively, and performing parameter training iterations on the recognition model according to the identity recognition result and the data recognition result to obtain the trained recognition model.
As shown in fig. 7, in this embodiment, S63 specifically includes:
S71, generating a second gait energy map according to the second gait preprocessing data, and generating a first gait energy map according to the first gait preprocessing data;
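A gait energy image (GEI) is conventionally the per-pixel average of an aligned binary silhouette sequence over a gait cycle; the sketch below assumes this standard construction, which the patent's specific embodiment (S1 to S3) also produces from a silhouette sequence.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a gait energy map from an aligned silhouette sequence.

    silhouettes: (t, h, w) array of binary silhouettes (values 0/1).
    Returns an (h, w) map whose pixel values give the fraction of
    frames in which that pixel belonged to the walking figure."""
    seq = np.asarray(silhouettes, dtype=float)
    return seq.mean(axis=0)
```

The same routine yields either a first or second gait energy map, depending on which preprocessing data it is fed.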
And S72, mixing the second gait energy maps and the first gait energy maps to obtain a plurality of recognition gait energy maps of different action postures, and inputting the recognition gait energy maps into the recognition model for identity recognition and data recognition.
In this step, the second gait energy maps and the first gait energy maps are each split to obtain several recognition gait energy maps of different action postures from each gait energy map, and all recognition gait energy maps are input into the recognition training model for identity recognition and data recognition respectively.
For example, the second gait energy maps and the first gait energy maps are mixed in a 1:1 ratio to obtain a plurality of recognition gait energy maps of different action postures.
And S73, respectively obtaining the identification probability distribution of each identification gait energy graph as an identification result.
The recognition probability of each recognition gait energy map against each candidate person is obtained; the person with the highest recognition probability is the identity recognition result for that map. If the recognition gait energy maps whose probability exceeds the preset threshold ultimately identify the identity accurately, the parameters of the recognition training model are considered well trained and need not be adjusted; otherwise, the parameters of the recognition training model must be adjusted.
And S74, confirming whether the recognition model accurately identifies whether each recognition gait energy map belongs to the second gait energy maps or the first gait energy maps, and taking this as the data recognition result.
The attribution of each recognition gait energy map is identified to determine whether the recognition training model can accurately tell the second gait energy maps from the first gait energy maps. If it can distinguish them accurately, the parameters of the generation model are adjusted so that the second gait preprocessing data is regenerated; otherwise, the recognition model is not adjusted.
And S75, calculating the identification loss according to the identification result of each identification gait energy graph.
Based on the above embodiment, identity recognition is performed on the recognition gait energy maps, and the identity recognition loss over all recognition gait energy maps is calculated from the recognition results. For example, the identity recognition accuracy can serve as the basis of the identity recognition loss: the greater the loss, the more the recognition training model's parameters must be adjusted before recognizing again. For instance, when the identity recognition accuracy is at least 75 percent, identity recognition through the recognition training model using the second gait energy map can accurately identify the person in the video; when the accuracy is below 75 percent, the parameters of the recognition and generation training models need to be adjusted, and the generation of the second gait energy map, the data recognition and the identity recognition are performed again. The lower the identity recognition accuracy, the larger the parameter adjustment applied to the recognition and generation training models.
For example, the identity recognition loss is calculated in the following manner:

l_I = (1/n) × Σ_{i=1}^{n} l_I^(i);

l_I^(i) = -log( exp(x_i[y_i]) / Σ_{j=1}^{c} exp(x_i[j]) );

where l_I is the identity recognition loss, n represents the number of recognition gait energy maps, l_I^(i) is the identity recognition loss of the ith recognition gait energy map, c represents the number of comparison identities, exp is the exponential function with the natural constant as its base, x_i[j] is the recognition probability that the identity of the ith recognition gait energy map is the jth comparison identity, x_i[y_i] is the recognition probability that its identity is the y_i-th comparison identity, and y_i is the number of the real identity corresponding to the video image.
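The per-map loss above has the form of a standard softmax cross-entropy. The sketch below assumes the x_i[j] are raw identity scores (logits) over the c comparison identities; this is an illustrative reading of the formula, not an implementation from the patent.

```python
import numpy as np

def identity_loss(x, y):
    """l_I = (1/n) * sum_i -log( exp(x_i[y_i]) / sum_j exp(x_i[j]) ).

    x: (n, c) identity scores per recognition gait energy map;
    y: (n,) indices of the true identity for each map."""
    x = np.asarray(x, dtype=float)
    # numerically stable log-softmax over the c comparison identities
    shifted = x - x.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # negative log-probability of the true identity, averaged over n maps
    return -log_probs[np.arange(len(x)), y].mean()
```

With uniform scores over c identities the loss equals log(c), the chance-level value.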
And S76, calculating the data true and false judgment loss according to the data identification result of each identification gait energy graph.
Whether each recognition gait energy map is a first gait energy map or a second gait energy map is judged, the true/false discrimination loss is calculated according to the recognition result, and this loss indicates whether the recognition training model can accurately distinguish second gait energy maps from first gait energy maps.
For example, the data true/false discrimination loss is calculated by the following formula:

l_D = -(m_n × log k_n + (1 - m_n) × log(1 - k_n));

where l_D is the data true/false discrimination loss; m_n is 1 when the nth recognition gait energy map is a second gait energy map and 0 when it is a first gait energy map; and k_n is the confidence that the nth recognition gait energy map is a second gait energy map: when k_n is greater than the preset threshold, the recognition gait energy map is recognized as a second gait energy map, and when k_n is less than or equal to the preset threshold, it is recognized as a first gait energy map.
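The formula for l_D is a per-map binary cross-entropy. A minimal sketch, averaging over a batch of maps (the averaging and the clipping constant are assumptions added for numerical safety):

```python
import numpy as np

def discrimination_loss(m, k, eps=1e-12):
    """l_D = -(m_n * log k_n + (1 - m_n) * log(1 - k_n)), averaged.

    m: 1 where a map is a second gait energy map, 0 where it is a
       first gait energy map;
    k: the model's confidence that each map is a second gait map."""
    m = np.asarray(m, dtype=float)
    k = np.clip(np.asarray(k, dtype=float), eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(m * np.log(k) + (1 - m) * np.log(1 - k))))
```

A confidence of 0.5 on every map gives the maximum-uncertainty value log 2, and confident correct answers drive the loss toward zero.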
And S77, judging whether the identity recognition loss and the data true and false judgment loss both meet preset conditions.
For example, comparing the identification loss with a first preset threshold range, comparing the data true and false judgment loss with a second preset threshold range, and judging whether the identification loss meets the first preset threshold range; and judging whether the data true and false judging loss meets a second preset threshold range.
If yes, outputting the identification model; if not, updating the preset parameters of the recognition model by a loss back propagation method, and carrying out S71-S77.
If the accuracy of the identity recognition result is greater than its preset threshold while the accuracy of the data recognition result is less than its preset threshold, it can be determined that the recognition training model can no longer accurately distinguish the second gait energy map from the first gait energy map, while the identity of the person in the video can still be accurately recognized through the second gait energy map. At this point the parameters of the recognition training model are fully trained, and the corresponding parameters can be output to obtain the recognition model. Otherwise, the preset parameters of the recognition training model need to be updated, for example through a stochastic gradient descent algorithm or a loss back-propagation method, and S71 is performed again.
In this embodiment, whether the second gait energy map is qualified may also be determined by calculating the mean square error loss between the second gait energy map and the first gait energy map.
In this step, the mean square error loss between the second gait energy map and the first gait energy map is calculated. The mean square error is the average of the squared distances of each data point from the true value, so this loss measures how far the second gait energy map deviates from the first gait energy map. If the deviation is too large, the second gait energy map produced by the generation training model can be judged unqualified; therefore, when the mean square error loss is too large, the parameters of the generation training model must be adjusted and a new second gait energy map generated.
For example, the mean square error loss is calculated by:

l_g = (1/t) × Σ_{i=1}^{t} (px_i - py_i)²;

where l_g is the mean square error loss, t is the number of pixels in the second gait energy map (equal to the number of pixels in the first gait energy map), px_i is the 0-mean-normalized gray value of the ith pixel in the second gait energy map, and py_i is the 0-mean-normalized gray value of the ith pixel in the first gait energy map.
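The l_g formula can be sketched directly; the 0-mean normalization makes the loss insensitive to a constant brightness offset between the two maps.

```python
import numpy as np

def gei_mse_loss(gei_second, gei_first):
    """l_g = (1/t) * sum_i (px_i - py_i)^2 over 0-mean-normalized
    gray values of the two gait energy maps (same resolution)."""
    px = np.asarray(gei_second, dtype=float).ravel()
    py = np.asarray(gei_first, dtype=float).ravel()
    px = px - px.mean()          # 0-mean normalization
    py = py - py.mean()
    return float(np.mean((px - py) ** 2))
```

Two maps that differ only by a constant offset yield zero loss, while structural differences raise it.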
In a specific embodiment, the embodiment of the present invention provides an abnormal crowd information identification method which differs from the establishment method shown in fig. 1 in the following optimized gait information identification procedure:
s1: and sequentially inputting the gait silhouette image sequences (multi-frame images) p1, p2, … and pn into a basic feature extraction part for generating the network to respectively obtain feature sequences f1, f2, … and fn.
S2: inputting the features f1, f2, …, fn of the multi-frame input images into the encoder part of the generation network and performing feature-weighted fusion to obtain the feature F of the whole gait silhouette image sequence.
S3: inputting the feature F of the whole gait silhouette image sequence into the decoding network part of the generation network to generate one frame of gait energy image GEI_f with the same resolution as the input silhouette images.
S4: GEI_f and the true GEI_r from the training data set are mixed in a 1:1 ratio and then input to the recognition network. The true GEI_r are selected from the training data set as those GEIs that belong to the same target as the generated GEI_f, have a walking angle of 54 degrees, and show the target in close-fitting clothing.
S5: the recognition network performs identity recognition on the input GEI_f and GEI_r, and discriminates which of the input data are generated data and which are real data.
S6: calculating loss of identity recognition based on the result of identity recognition and the result of discrimination of true and false data_ILoss of sum true and false discriminationD. Based on total loss LD=α×loss_I+β×loss_DAnd updating the parameters of the judgment network by using a random gradient descent algorithm.
S7: for each group of objects with the same identity, calculate the mean square error loss loss_g between GEI_f and GEI_r. Based on the total loss L_G = λ × loss_g + μ × loss_D, update the parameters of the generation network using a stochastic gradient descent algorithm.
S8: repeat S1-S7 until the overall model convergence condition is reached, that is, the discrimination network can hardly distinguish GEI_f from GEI_r any more while still correctly recognizing the identities of the targets corresponding to GEI_f and GEI_r, thereby obtaining the trained generative adversarial model M.
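The S1-S8 loop can be outlined as a training skeleton. This is a heavily hedged illustration, not the patent's implementation: the generator is replaced by a plain silhouette average, the identity and discrimination losses are fixed stubs, the loss weights α = β = λ = μ = 1 are assumed, and no actual network parameters are updated; only the control flow and the two total-loss formulas are shown.

```python
import numpy as np

def train_gait_gan(silhouette_batches, real_geis, labels, epochs=3):
    """Skeleton of the S1-S8 loop with stub networks.

    silhouette_batches: list of (t, h, w) silhouette sequences;
    real_geis: list of matching (h, w) true GEI_r maps;
    labels: list of identity indices (would drive the identity loss)."""
    alpha = beta = lam = mu = 1.0        # assumed loss weights
    history = []
    for _epoch in range(epochs):
        for sils, gei_r, y in zip(silhouette_batches, real_geis, labels):
            gei_f = sils.mean(axis=0)    # S1-S3: stub generator (plain GEI)
            batch = np.stack([gei_f, gei_r])  # S4: 1:1 mix, fed to recognizer
            loss_i = 0.5                 # S5/S6: stub identity loss for label y
            loss_d = np.log(2.0)         #        stub true/false loss
            loss_g = float(np.mean((gei_f - gei_r) ** 2))  # S7: per-identity MSE
            L_D = alpha * loss_i + beta * loss_d   # discriminator objective
            L_G = lam * loss_g + mu * loss_d       # generator objective
            history.append((L_D, L_G))   # S8: iterate until convergence
    return history
```

In the real method the two objectives would drive stochastic gradient descent updates of the discrimination and generation networks respectively, alternating until neither can improve.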
As shown in fig. 8, an embodiment of the present invention provides an abnormal crowd information identification system, which includes a processor, a memory; the processor is configured to execute the abnormal crowd information identification program stored in the memory, so as to implement the abnormal crowd information identification method provided in any of the above embodiments.
The storage medium for recording the program code of the software program that can realize the functions of the above-described embodiments is provided to the system or apparatus in the above-described embodiments, and the program code stored in the storage medium is read and executed by the computer (or CPU or MPU) of the system or apparatus.
In this case, the program code itself read out from the storage medium performs the functions of the above-described embodiments, and the storage medium storing the program code constitutes an embodiment of the present invention.
As a storage medium for supplying the program code, for example, a flexible disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, and the like can be used.
The functions of the above-described embodiments may be realized not only by executing the readout program code by the computer, but also by some or all of actual processing operations executed by an OS (operating system) running on the computer according to instructions of the program code.
Further, the embodiments of the present invention also include a case where after the program code read out from the storage medium is written into a function expansion card inserted into the computer or into a memory provided in a function expansion unit connected to the computer, a CPU or the like included in the function expansion card or the function expansion unit performs a part of or the whole of the processing in accordance with the command of the program code, thereby realizing the functions of the above-described embodiments.
The embodiment of the invention provides a storage medium, wherein the storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to realize the abnormal crowd information identification method provided by any one of the above embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An abnormal crowd information identification method, characterized in that the identification method comprises:
acquiring gait preprocessing data of walking of each person in the video image;
respectively extracting identity characteristics of each gait preprocessing data to respectively obtain corresponding identity characteristic data;
respectively calculating similarity values of each group of the identity characteristic data and other identity characteristic data;
counting the number of the identity characteristic data of each group, the similarity value of which is smaller than a preset threshold value, of the identity characteristic data and other identity characteristic data as an abnormal number;
and when any abnormal quantity is larger than the preset quantity, taking the identity characteristic data corresponding to the abnormal quantity as abnormal characteristic data.
2. The method for identifying abnormal population information according to claim 1, wherein the calculating the similarity value between each group of the identity feature data and other identity feature data comprises:
vectorizing all the identity characteristic data respectively to obtain corresponding identity characteristic data vectors;
and respectively calculating cosine values of each group of the identity characteristic data vectors and other identity characteristic data vectors as the similarity values.
3. The method for identifying abnormal population information according to claim 1, wherein the calculating the similarity value between each group of the identity feature data and other identity feature data comprises:
vectorizing all the identity characteristic data respectively to obtain corresponding identity characteristic data vectors;
and respectively calculating Euclidean distances between each group of the identity characteristic data vectors and other identity characteristic data vectors as the similarity value.
4. The method for identifying abnormal crowd information according to claim 1, wherein the step of performing identity feature extraction on each gait preprocessing data to obtain corresponding identity feature data respectively comprises:
inputting the gait preprocessing data into a recognition model;
and acquiring the identity characteristic data output after the identification model identifies the gait preprocessing data.
5. The abnormal crowd information identification method according to claim 4, wherein the training method of the identification model comprises:
s11, acquiring first gait preprocessing data of walking of a person in any video image;
s12, acquiring second gait preprocessing data of the walking of the person in the arbitrary video image;
and S13, inputting the second gait preprocessing data and the first gait preprocessing data into a recognition model for identity recognition and data recognition respectively, and performing parameter training iteration on the recognition model according to an identity recognition result and a data recognition result to obtain the recognition model.
6. The abnormal crowd information identifying method according to claim 5,
the S13 specifically includes:
s21, generating a second step energy map according to the second gait preprocessing data, and generating a first step energy map according to the first gait preprocessing data;
s22, mixing the second step state energy diagram and the first step state energy diagram to obtain a plurality of recognition gait energy diagrams of different action postures, and inputting the recognition gait energy diagrams into the recognition model for identity recognition and data recognition;
s23, respectively obtaining the identification probability distribution of each identification gait energy image as the identification result;
s24, confirming whether the identification of each identification gait energy image of the identification model to the second step energy image or the first step energy image is accurate, and taking the identification as the data identification result;
s25, calculating identification loss according to the identification result of each identification gait energy graph;
s26, calculating data true and false judgment loss according to the data identification result of each identification gait energy diagram;
s27, judging whether the identity recognition loss and the data true and false judgment loss both meet preset conditions;
if yes, outputting the identification model; if not, updating the preset parameters of the recognition model by a loss back propagation method, and carrying out S21-S27.
7. The method of identifying abnormal crowd information according to claim 6,
the calculating the identification loss according to the identification result of each identification gait energy diagram specifically comprises:
calculating the identification loss by the following calculation mode:
l_I = (1/n) × Σ_{i=1}^{n} l_I^(i);

l_I^(i) = -log( exp(x_i[y_i]) / Σ_{j=1}^{c} exp(x_i[j]) );

wherein l_I is the identity recognition loss, n represents the number of the recognition gait energy maps, l_I^(i) is the identity recognition loss of the ith recognition gait energy map, c represents the number of comparison identities, exp is an exponential function with a natural constant as the base, x_i[j] is the recognition probability that the identity of the recognition gait energy map is the jth comparison identity, x_i[y_i] is the recognition probability that the identity is the y_i-th comparison identity, and y_i is the number of the real identity corresponding to the video image;
the calculating of the data true and false discrimination loss according to the data identification result of each gait energy identification graph specifically comprises the following steps:
and calculating the data true and false discrimination loss by the following calculation formula:
l_D = -(m_n × log k_n + (1 - m_n) × log(1 - k_n));

wherein l_D is the data true/false discrimination loss; m_n is 1 when the nth recognition gait energy map is the second gait energy map, and m_n is 0 when the nth recognition gait energy map is the first gait energy map;

k_n is the confidence that the nth recognition gait energy map is the second gait energy map; when k_n is greater than a preset threshold, the recognition gait energy map is a second gait energy map, and when k_n is less than or equal to the preset threshold, the recognition gait energy map is a first gait energy map.
8. The method for identifying the abnormal crowd information according to any one of claims 1 to 7, wherein the acquiring of the gait preprocessing data of each person walking in the video image specifically comprises:
acquiring continuous frame images in a video image;
detecting an image area containing human figures from the continuous frame images by using a detection algorithm;
extracting the human figure outline representing the same person in each human figure image area to generate a human figure outline segmentation graph;
aligning the human shape center and the image center of each frame of the human shape contour segmentation image to generate a primary gait silhouette image;
scaling the human-shaped contour in each frame of primary gait silhouette image to a uniform resolution ratio to generate a gait silhouette image sequence;
respectively acquiring silhouette gait features of each frame of gait silhouette image in the gait silhouette image sequence to form silhouette gait preprocessing data;
calculating the correlation similarity of each frame of silhouette gait features in the silhouette gait preprocessing data;
performing weighted fusion on the silhouette gait features according to the correlation similarity to obtain the gait preprocessing data;
therefore, gait preprocessing data corresponding to each person in the video image are obtained respectively.
9. An abnormal crowd information identification system is characterized by comprising a processor and a memory; the processor is used for executing the abnormal crowd information identification program stored in the memory so as to realize the abnormal crowd information identification method of any one of claims 1 to 8.
10. A storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the method for identifying abnormal crowd information according to any one of claims 1 to 8.
CN201811303318.1A 2018-11-02 2018-11-02 Abnormal crowd information identification method, system and storage medium Pending CN111144171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811303318.1A CN111144171A (en) 2018-11-02 2018-11-02 Abnormal crowd information identification method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811303318.1A CN111144171A (en) 2018-11-02 2018-11-02 Abnormal crowd information identification method, system and storage medium

Publications (1)

Publication Number Publication Date
CN111144171A true CN111144171A (en) 2020-05-12

Family

ID=70515465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811303318.1A Pending CN111144171A (en) 2018-11-02 2018-11-02 Abnormal crowd information identification method, system and storage medium

Country Status (1)

Country Link
CN (1) CN111144171A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112022166A (en) * 2020-08-08 2020-12-04 司法鉴定科学研究院 Human body identity recognition method and system based on medical movement disorder feature recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719216A (en) * 2009-12-21 2010-06-02 西安电子科技大学 Movement human abnormal behavior identification method based on template matching
CN104915655A (en) * 2015-06-15 2015-09-16 西安电子科技大学 Multi-path monitor video management method and device
CN106778688A (en) * 2017-01-13 2017-05-31 辽宁工程技术大学 The detection method of crowd's throat floater event in a kind of crowd scene monitor video
CN107085716A (en) * 2017-05-24 2017-08-22 复旦大学 Cross-view gait recognition method based on multi-task generative adversarial networks
CN108520216A (en) * 2018-03-28 2018-09-11 电子科技大学 A kind of personal identification method based on gait image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨路明 et al.: "Research on Gait Recognition Algorithms Based on Feature Fusion", Application Research of Computers (《计算机应用研究》) *

Similar Documents

Publication Publication Date Title
Singh et al. Vision-based gait recognition: A survey
CN106384093B A human motion recognition method based on a denoising autoencoder and particle filtering
CN111144165B (en) Gait information identification method, system and storage medium
KR101815975B1 (en) Apparatus and Method for Detecting Object Pose
CN111382679B (en) Method, system and equipment for evaluating severity of gait dyskinesia of Parkinson's disease
CN114724241A (en) Motion recognition method, device, equipment and storage medium based on skeleton point distance
Chaaraoui et al. Abnormal gait detection with RGB-D devices using joint motion history features
US20060269145A1 (en) Method and system for determining object pose from images
Ghazal et al. Human posture classification using skeleton information
CN114067358A (en) Human body posture recognition method and system based on key point detection technology
JP7185805B2 (en) Fall risk assessment system
CN110046675A A lower-limb motor ability assessment method based on an improved convolutional neural network
CN111144167A (en) Gait information identification optimization method, system and storage medium
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
Arif et al. Human pose estimation and object interaction for sports behaviour
Krzeszowski et al. Gait recognition based on marker-less 3D motion capture
CN112989889A (en) Gait recognition method based on posture guidance
JP2005351814A (en) Detector and detecting method
CN111144171A (en) Abnormal crowd information identification method, system and storage medium
Arigbabu et al. Estimating body related soft biometric traits in video frames
CN111144166A (en) Method, system and storage medium for establishing abnormal crowd information base
CN108416325B (en) Gait recognition method combining visual angle conversion model and hidden Markov model
CN111144170A (en) Gait information registration method, system and storage medium
Jinnovart et al. Abnormal gait recognition in real-time using recurrent neural networks
JP2005074075A (en) Walking cycle measuring device, periodic image obtaining device, compression processor for moving object outline, and moving object discrimination system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210127

Address after: 315016 9-3, building 91, 16 Buzheng lane, Haishu District, Ningbo City, Zhejiang Province

Applicant after: Yinhe shuidi Technology (Ningbo) Co.,Ltd.

Address before: Room 501-1753, Development Zone office building, 8 Xingsheng South Road, Miyun District Economic Development Zone, Beijing

Applicant before: Watrix Technology (Beijing) Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200512
