CN112633222B - Gait recognition method, device, equipment and medium based on countermeasure network - Google Patents

Gait recognition method, device, equipment and medium based on countermeasure network

Info

Publication number
CN112633222B
CN112633222B (application number CN202011615027.3A)
Authority
CN
China
Prior art keywords
gait
gait energy
atlas
target
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011615027.3A
Other languages
Chinese (zh)
Other versions
CN112633222A (en)
Inventor
张平
曹铁
邵黎明
甄军平
李海博
周科杰
张立波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation Electronic Technology Co ltd
Second Research Institute of CAAC
Original Assignee
Civil Aviation Electronic Technology Co ltd
Second Research Institute of CAAC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation Electronic Technology Co ltd, Second Research Institute of CAAC filed Critical Civil Aviation Electronic Technology Co ltd
Priority to CN202011615027.3A priority Critical patent/CN112633222B/en
Publication of CN112633222A publication Critical patent/CN112633222A/en
Application granted granted Critical
Publication of CN112633222B publication Critical patent/CN112633222B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a gait recognition method, device, equipment and medium based on a countermeasure network. The method comprises the following steps: acquiring video data of a target to be detected; performing segmentation and extraction processing on the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy maps of arbitrary angles; inputting the gait energy atlas of an arbitrary view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas of a target view angle; inputting the gait energy atlas of the target view angle into a pre-trained twin neural network model to calculate the similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle; and performing gait recognition processing on the target to be detected based on the similarity. The method can accurately perform gait recognition on the target to be detected, improves the efficiency of identity recognition, and can provide rapid and accurate data support for applications such as pedestrian information query and identity tracking.

Description

Gait recognition method, device, equipment and medium based on countermeasure network
Technical Field
The invention relates to the technical field of computer vision and machine learning, in particular to a gait recognition method, device, equipment and medium based on a countermeasure network.
Background
With the rapid development of artificial intelligence technology, intelligent security platforms can be built by combining detection and sensing technology, biometric recognition technology and the like, realizing functions such as intelligent alarming, intelligent control, intelligent crime investigation and intelligent anti-terrorism in public places, thereby ensuring people's safety. In this context, image and video recognition, and in particular gait feature recognition, is an important step in describing pedestrian trajectories and retrieving specific persons. Gait features describe the body contour, posture and limb changes of a person during walking; identifying identity from a person's walking posture and body shape therefore has high application value.
At present, in the prior art, a target object is identified and tracked by collecting information such as the face, fingerprint, iris, body shape, clothing and carried objects. However, when the target object wears a cap or a mask, or is otherwise disguised, the accuracy of identifying the target object is low.
Disclosure of Invention
In view of the foregoing, the present invention provides a gait recognition method, device, apparatus and medium based on a countermeasure network, which at least partially solves the problems in the prior art.
In a first aspect, the present application provides a gait recognition method based on a countermeasure network, the method comprising:
acquiring video data of an object to be detected;
dividing, extracting and processing the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy maps of any angle;
inputting the gait energy atlas of any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas of a target view angle;
inputting the gait energy atlas of the target visual angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas of the target visual angle and the gait energy atlas of the standard visual angle;
and performing gait recognition processing on the target to be detected based on the similarity.
In one embodiment, the segmenting and extracting the video data by using a preset algorithm, and determining the gait energy atlas of the object to be detected includes:
Performing segmentation extraction processing on the video data to obtain a gait binary image;
determining the centroid of the gait binary image and boundary points corresponding to the upper, lower, left and right directions;
taking the centroid as the center, and performing cropping by taking the distance between the centroid and the boundary point in each direction as the cropping distance, to obtain a gait contour binary image sequence;
and averaging the gait contour binary image sequence over each gait cycle to determine a gait energy image set.
In one embodiment, the gait perspective conversion network is constructed by:
acquiring a historical gait energy image set of any visual angle and a historical gait energy image of a target visual angle;
mapping the historical gait energy atlas of any view angle and the historical gait energy atlas of the target view angle through an initial generator to obtain a mapped gait energy atlas of any view angle and a mapped gait energy atlas of the target view angle;
constructing a loss function through an initial discriminator based on the historical gait energy atlas of any view angle, the historical gait energy atlas of the target view angle, the mapped gait energy atlas of any view angle and the mapped gait energy atlas of the target view angle;
And training the initial discriminator and the initial generator according to the minimization of the loss function to obtain a gait visual angle conversion network.
In one embodiment, the twin neural network includes two feature extraction networks and a discrimination network, and inputting the gait energy atlas of the target view angle into a pre-trained twin neural network model to calculate the similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle includes:
respectively inputting the gait energy atlas of the target visual angle and the gait energy atlas of the standard angle into corresponding feature extraction networks to perform feature extraction to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector is an image feature vector corresponding to the gait energy atlas of the target visual angle, and the second image feature vector is an image feature vector corresponding to the gait energy atlas of the standard angle;
and judging the similarity of the gait energy atlas of the target visual angle and the gait energy atlas of the standard angle through a judging network for the first image feature vector and the second image feature vector.
In one embodiment, inputting the gait energy diagram of the target view angle and the gait energy diagram set of the standard angle into a corresponding feature extraction network to perform feature extraction, so as to obtain a first image feature vector and a second image feature vector, which includes:
and carrying out feature extraction on the gait energy atlas of the target visual angle and the gait energy atlas of the standard angle through convolution, pooling and normalization in sequence to obtain a first image feature vector and a second image feature vector.
In one embodiment, based on the similarity, performing gait recognition processing on the target to be detected includes:
when the similarity is larger than a preset threshold, determining the gait energy atlas of the target visual angle as the gait energy atlas of the standard gait visual angle;
and extracting the characteristic vector of the gait energy atlas of the standard visual angle, and carrying out gait recognition processing.
In one embodiment, extracting feature vectors of the gait energy graph for the standard viewing angle includes:
vectorizing the gait energy diagram of the standard visual angle to obtain a processed gait energy diagram;
and performing feature dimension reduction processing on the processed gait energy diagram, and determining a feature vector of the gait energy diagram of the standard visual angle.
In a second aspect, the present application provides a gait recognition device based on a countermeasure network, the device comprising:
the acquisition module is used for acquiring video data of the target to be detected;
the determining module is used for carrying out segmentation extraction processing on the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy charts of any angle;
the first processing module is used for inputting the gait energy atlas of any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas of a target view angle;
the second processing module is used for inputting the gait energy atlas of the target visual angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas of the target visual angle and the gait energy atlas of the standard visual angle;
and the gait recognition module is used for carrying out gait recognition processing on the target to be detected based on the similarity.
In a third aspect, embodiments of the present application provide an apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the countermeasure network-based gait recognition method as described in the first aspect above when executing the program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program for implementing the method of countermeasure network-based gait recognition as described in the first aspect above.
According to the gait recognition method, device, equipment and medium based on the countermeasure network, video data of a target to be detected are obtained, a preset algorithm is adopted to conduct segmentation extraction processing on the video data, a gait energy image set of the target to be detected is determined, the gait energy image set comprises gait energy images of any angle, then the gait energy image set of any view angle is input into a pre-trained gait view angle conversion network to be processed, the gait energy image set of the target view angle is obtained, the gait energy image set of the target view angle is input into a pre-trained twin neural network model to be processed, the similarity between the gait energy image set of the target view angle and the gait energy image set of the standard view angle is calculated, and recognition processing is conducted on the target to be detected based on the similarity. Compared with the related art, the gait energy diagram of any view angle can be converted into the gait energy diagram set of the target view angle through the gait view angle conversion network, and the similarity is calculated through the twin neural network model, so that the gait recognition of the target to be detected can be accurately performed, the efficiency of the identity recognition of the target to be detected is further improved, and rapid and accurate data support can be brought to applications such as pedestrian information inquiry, identity tracking and inquiry.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram of an implementation environment architecture of a gait recognition method based on a countermeasure network according to an embodiment of the present application;
fig. 2 is a flow chart of a gait recognition method based on a countermeasure network according to an embodiment of the present application;
fig. 3 is a flow chart of a gait recognition method based on a countermeasure network according to an embodiment of the present application;
fig. 4 is a schematic diagram of a gait contour binary image sequence provided in an embodiment of the present application;
FIG. 5 is a schematic illustration of a gait energy diagram provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a gait perspective conversion network according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a gait angle conversion process according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a twin neural network model according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a gait recognition device based on an countermeasure network according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It should be noted that, without conflict, the following embodiments and features in the embodiments may be combined with each other; and, based on the embodiments in this disclosure, all other embodiments that may be made by one of ordinary skill in the art without inventive effort are within the scope of the present disclosure.
As mentioned in the background art, image and video recognition is an important step in describing pedestrian trajectories and searching for specific people. At present, in the related art, a target object is recognized and tracked by collecting information such as the face, fingerprint, iris, body shape, clothing and carried objects; however, when the target object wears a cap or a mask, or is otherwise disguised, the accuracy of recognizing the target object is low. For gait recognition itself, model-based methods extract gait features by modeling features such as the limbs, joint angles and angular velocities of a person, but the modeling process often requires manual annotation of gait features and relies on high video resolution and clear feature key points, so these methods have high computational complexity and have difficulty meeting the requirements of general gait recognition. Algorithms based on appearance matching instead construct an identification model from raw features of human gait such as the appearance contour and walking posture, which reduces the requirement on video definition. Moreover, cameras are usually fixed, so when pedestrians enter the capture area from different directions a multi-view problem arises: under different view angles the human posture differs and the recognized gait features also differ. How to accurately recognize gait features under different view angles is therefore a problem that needs to be solved.
Based on the defects, compared with the related art, the gait recognition method, device, equipment and medium based on the countermeasure network can convert the gait energy diagram of any view angle into the gait energy diagram set of the target view angle through the gait view angle conversion network, and calculate the similarity through the twin neural network model, so that the gait recognition of the target to be detected can be accurately performed, the efficiency of the identification of the target to be detected is further improved, and rapid and accurate data support can be brought to applications such as pedestrian information inquiry, identity tracking and inquiry.
It can be appreciated that the gait recognition method based on the countermeasure network can be applied to places such as airports, customs, stations, public inspection institutions, large-scale activity sites and the like. When the target to be detected is a passenger, the identity of the passenger can be authenticated and tracked by performing gait recognition on the target to be detected.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative.
Fig. 1 is a schematic diagram of an implementation environment of a gait recognition method based on an countermeasure network according to an embodiment of the present application. As shown in fig. 1, the implementation environment architecture includes: a terminal 100 and a server 200.
The terminal 100 may be a terminal device in various AI application scenarios. For example, the terminal 100 may be an intelligent home device such as an intelligent television, an intelligent television set-top box, or the terminal 100 may be a mobile portable terminal such as a smart phone, a tablet computer, and an electronic book reader, or the terminal 100 may be an intelligent wearable device such as an intelligent glasses, an intelligent watch, and the embodiment is not limited in this way.
The server 200 may be a server, or may be a server cluster formed by a plurality of servers, or the server 200 may include one or more virtualization platforms, or the server 200 may be a cloud computing service center.
A communication connection is established between the terminal 100 and the server 200 through a wired or wireless network. Alternatively, the wireless network or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network including, but not limited to, a local area network (Local Area Network, LAN), metropolitan area network (Metropolitan Area Network, MAN), wide area network (Wide Area Network, WAN), a mobile, wired or wireless network, a private network, or any combination of virtual private networks.
In the process of providing the AI application service, the AI application system can process through the gait visual angle conversion network and the twin neural network model and is used for performing gait recognition processing on the target to be detected. Wherein, the gait perspective conversion network and the twin neural network model can be arranged in the server 200, trained and applied by the server; alternatively, the gait angle conversion network and the twin neural network model described above may be provided in the terminal 100, and trained and updated by the server 200.
For ease of understanding and description, the gait recognition method, apparatus, device and medium based on the countermeasure network provided in the embodiments of the present application are described in detail below with reference to fig. 2 to 10.
Fig. 2 is a flow chart illustrating a gait recognition method based on the countermeasure network according to an embodiment of the present application, and the method may be performed by a computer device, which may be the server 200 or the terminal 100 in the system shown in fig. 1, or the computer device may be a combination of the terminal 100 and the server 200. As shown in fig. 2, the method includes:
s101, acquiring video data of an object to be detected.
Specifically, the target to be detected may be a passenger at an airport, or may be another object at the airport, and the number of targets to be detected may be one or more. Optionally, the computer device may collect image data of the target to be detected through a verification gate, or a plurality of monitoring devices may be arranged at different positions of the airport so that the computer device collects video data of the target to be detected through each monitoring device.
Taking the object to be detected as a passenger as an example, the image data can comprise morphological characteristic information of the passenger in the walking process, and the morphological characteristic information can comprise information such as body contour, gesture, limbs and the like of the passenger.
S102, segmenting and extracting the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy charts of any angle.
In this embodiment, when the target to be detected is a pedestrian, after the video data of the pedestrian is obtained, the gait profile of the pedestrian is extracted, as shown in fig. 3, and the step S102 may include the following steps:
s201, performing segmentation and extraction processing on the video data to obtain a gait binary image.
S202, determining the centroid of the gait binary image and boundary points corresponding to the upper, lower, left and right directions.
S203, taking the centroid as the center and the distance between the centroid and the boundary point in each direction as the cropping distance, performing cropping to obtain a gait contour binary image sequence.
S204, averaging the gait contour binary image sequence in each gait cycle to determine a gait energy atlas.
Specifically, the current input image and the background image may be determined from video data, and the difference area between the current input image and the background image may be obtained by subtracting the pixels in the current input image from the pixels in the background image, where the pixel area with the larger difference is the target motion area, and the area with the smaller difference is the background area, and then the current input image and the background image are converted into gait binary images by setting a specific segmentation threshold T. The gait binary image can be calculated by the following formula:
BD_t(x, y) = 1, if |I_t(x, y) − B_t(x, y)| > T; BD_t(x, y) = 0, otherwise

wherein I_t(x, y) is the image of the current frame, B_t(x, y) is the image of the background frame, BD_t(x, y) is the gait binary image, and T is the segmentation threshold.
The choice of the segmentation threshold T has a great influence on the extraction of the gait contour, and in a real scene its value is not fixed because of the influence of illumination intensity. In order to calculate the segmentation threshold T dynamically, account is taken of the brightness and contrast of the image: let the pixel proportion of the background pixels be w_1 and their average gray level u_1, and let the pixel proportion of the foreground (pedestrian) pixels be w_2 and their average gray level u_2. The maximum between-class variance criterion (Otsu's method) is set as

δ = w_1·(u − u_1)² + w_2·(u − u_2)²   (1)

where u is the average gray level of the whole image, u = w_1·u_1 + w_2·u_2, so that δ = w_1·w_2·(u_1 − u_2)². By continuously computing w_1 and w_2, the threshold T that maximizes δ gives the largest separation u_1 − u_2 between background and foreground, the smallest probability of erroneous segmentation, and the best separation of foreground and background pixels. An opening operation is then performed on the gait binary image to obtain a complete gait binary image.
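For illustration, the frame-difference binarization with a dynamically chosen threshold and the subsequent opening operation can be sketched as follows; the use of OpenCV, the 5×5 structuring element and the function name are assumptions made only for this sketch, not requirements of the method.

```python
import cv2
import numpy as np

def gait_binary_image(current_frame, background_frame):
    """Frame-difference binarization: |I_t - B_t| thresholded with a dynamically chosen threshold."""
    current_gray = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
    background_gray = cv2.cvtColor(background_frame, cv2.COLOR_BGR2GRAY)

    # Absolute difference between the current frame and the background frame.
    diff = cv2.absdiff(current_gray, background_gray)

    # Otsu's method picks T by maximizing the between-class variance delta of formula (1),
    # so the threshold adapts to changes in illumination.
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # A morphological opening removes small noise and yields a more complete silhouette.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```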
After the gait binary image is obtained, the human body centroid can be determined in it; by traversing the image pixels, the boundary points in the up, down, left and right directions are found. Then, taking the centroid as the center and the distance between the centroid and the boundary point in each direction as the cropping distance, the minimal rectangle containing the target contour is cropped out, giving the gait contour binary image sequence, which can be seen by referring to fig. 4.
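A minimal sketch of this centroid-based cropping, assuming the silhouette is a single-channel binary image; the helper name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def crop_silhouette(binary):
    """Crop the minimal rectangle containing the gait silhouette, using the
    centroid-to-boundary distances in the four directions as cropping distances."""
    ys, xs = np.nonzero(binary)                 # coordinates of foreground (silhouette) pixels
    cy, cx = ys.mean(), xs.mean()               # centroid of the silhouette

    # Boundary points in the up / down / left / right directions.
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()

    # Cropping distances measured from the centroid in each direction;
    # cropping by these distances gives the minimal enclosing rectangle.
    d_up, d_down = cy - top, bottom - cy
    d_left, d_right = cx - left, right - cx
    y0, y1 = int(round(cy - d_up)), int(round(cy + d_down))
    x0, x1 = int(round(cx - d_left)), int(round(cx + d_right))
    return binary[y0:y1 + 1, x0:x1 + 1]
```

Applying this to each frame of a walking sequence yields a gait contour binary image sequence of the kind shown in fig. 4.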
After the gait contour binary image sequence is obtained, it can be normalized, and the normalized gait binary image sequence is then averaged over one gait cycle to obtain a gait energy map, which can be computed by the following formula:

G(x, y) = (1/N) · Σ I_t(x, y), with the sum running over t = 1, …, N   (2)

wherein N is the number of frames in one gait cycle, I_t(x, y) is the gray value of the normalized gait binary image at time t, and G(x, y) is the gait energy map, which can be seen in fig. 5. The gait energy atlas is thus obtained by performing this averaging on the gait contour binary image sequence in each cycle.
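A minimal sketch of formula (2), assuming the cropped silhouettes are normalized by resizing to a common size before averaging; the 64×64 size and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def gait_energy_image(silhouettes, size=(64, 64)):
    """Average the normalized silhouettes of one gait cycle into a gait energy map G(x, y)."""
    normalized = [
        cv2.resize(s, size, interpolation=cv2.INTER_NEAREST).astype(np.float32) / 255.0
        for s in silhouettes
    ]
    # G(x, y) = (1 / N) * sum over t of I_t(x, y), with N frames in the cycle.
    return np.mean(normalized, axis=0)
```

One gait energy map is computed per detected gait cycle, and the set of these maps forms the gait energy atlas.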
S103, inputting the gait energy atlas of any view angle into a pre-trained gait view angle conversion network for processing, and obtaining the gait energy atlas of the target view angle.
Specifically, since the appearance of the target to be detected photographed by a camera differs at different angles, the gait characteristics obtained also differ. When the shooting view angle is 90 degrees, that is, when the walking direction of the target to be detected is at 90 degrees to the shooting direction of the camera, gait features are best preserved. A countermeasure (adversarial) neural network is therefore provided whose structure can convert features under different angles, different clothing and different carried objects into the 90-degree view. The countermeasure network consists of a discriminator and a generator. The role of the discriminator is to judge whether an image comes from the real sample set or the generated (false) sample set: if the input comes from the real sample set, a value close to 1 is output; otherwise a value close to 0 is output. The generator aims to make the generated samples as close as possible to samples in the real sample set, so that the discriminator cannot reliably identify the source of a sample. Because the goals of the discriminator and the generator are opposed, the structure is called a countermeasure network.
It should be noted that, as shown in fig. 6, the gait angle conversion network may be constructed by the following steps: firstly, acquiring a historical gait energy image set of any view angle and a historical gait energy image of a target view angle, then mapping the historical gait energy image set of any view angle and the historical gait energy image of the target view angle through an initial generator to obtain a mapped gait energy image set of any view angle and a mapped gait energy image set of the target view angle, constructing a loss function through an initial discriminator based on the historical gait energy image set of any view angle, the historical gait energy image set of the target view angle, the mapped gait energy image set of any view angle and the mapped gait energy image set of the target view angle, and then training the initial discriminator and the initial generator according to minimization of the loss function, and obtaining a gait view angle conversion network.
Specifically, the countermeasure network contains two initial generators, F and G, and two initial discriminators, D_x and D_y. Assume that the countermeasure network has two data domains: X, the gait energy atlas of arbitrary view angles, and Y, the gait energy atlas of the target view angle. Assume that the initial generators satisfy the formulas:

y′ = G(x)   (3)
x′ = F(y)   (4)

where x is an element of the set X and y is an element of the set Y, and the generator G maps x to y′.

The initial discriminators are assumed to satisfy the following formulas:

Loss_x = D_x(x, x′)   (5)
Loss_y = D_y(y, y′)   (6)

that is, the difference between x and x′ is Loss_x, and the difference between y and y′ is Loss_y.

An element x in the X data domain yields y′ by formula (3), and the difference Loss_y between y′ and the element y in the Y data domain is given by formula (6); y′ is then mapped back to x′ by formula (4), and Loss_x is obtained by formula (5). By training the network so that Loss_x + Loss_y is continuously reduced over successive iterations and finally reaches equilibrium, the gait view angle conversion network is obtained.
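A highly simplified PyTorch-style sketch of the training loop implied by formulas (3) to (6); the tiny generator architecture, the optimizer settings and the reduction of D_x and D_y to fixed L1 differences are illustrative assumptions. In a full implementation, as in the countermeasure framework described above, the discriminators are themselves trainable networks optimized adversarially against the generators.

```python
import torch
import torch.nn as nn

def small_generator():
    # Tiny encoder-decoder over 1-channel gait energy maps; the architecture is illustrative.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
    )

G = small_generator()   # G: arbitrary-view domain X -> target-view domain Y, y' = G(x), formula (3)
F = small_generator()   # F: target-view domain Y -> arbitrary-view domain X, x' = F(y), formula (4)

def loss_y(y, y_prime):         # difference between y and the generated y', in the spirit of formula (6)
    return torch.mean(torch.abs(y - y_prime))

def loss_x(x, x_prime):         # difference between x and its reconstruction x', in the spirit of formula (5)
    return torch.mean(torch.abs(x - x_prime))

optimizer = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=1e-4)

def train_step(x, y):
    """One iteration reducing Loss_x + Loss_y for a batch x from X and the paired y from Y."""
    y_prime = G(x)                       # map x to the target view
    x_prime = F(y_prime)                 # map the generated y' back to the arbitrary view
    loss = loss_x(x, x_prime) + loss_y(y, y_prime)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time only the generator G is kept, and a gait energy map from an arbitrary view angle is passed through G to obtain its target-view counterpart.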
After the gait view angle conversion network is constructed, the gait energy atlas of any view angle can be input into the pre-trained gait view angle conversion network for processing, so that the gait energy atlas of the target view angle is obtained; that is, all elements of the X data domain are converted into the Y domain.
Referring to fig. 7, in gait recognition the data in X are gait energy maps of the same pedestrian at arbitrary view angles, and the data in Y are gait energy maps of the same pedestrian at the target view angle; the first row shows gait energy maps at arbitrary view angles, the second row shows them converted to the target view angle by the gait view angle conversion network, and the third row shows the actual gait energy maps at the standard view angle.
S104, inputting the gait energy atlas of the target visual angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas of the target visual angle and the gait energy atlas of the standard visual angle.
Specifically, the twin neural network model is a network structure based on distance metric learning, which judges the similarity between two images by calculating the distance between them. The twin neural network model learns to extract image feature vectors and computes the distance between these feature vectors at the back end of the network, so that images whose distance is hard to measure directly in the original space are mapped into a space in which they are easy to distinguish and identify.
It should be noted that the twin neural network structure may be as shown in fig. 8: it includes two parallel feature extraction networks and a discrimination network, where the parameters (weights, biases and the like) are shared between the two feature extraction networks. If sample A and sample B belong to the same data set, they are referred to as a positive sample pair; otherwise they are referred to as a negative sample pair.
The twin convolutional neural network can be expressed by the following formula:
E_w(X_1, X_2) = ||G_w(X_1) − G_w(X_2)||

wherein G_w is the feature extractor; training according to the minimization of the loss function yields the trained twin convolutional neural network.
In this embodiment, sample A may be the gait energy atlas of the target view angle and sample B the gait energy atlas of the standard angle. The gait energy atlas of the target view angle and the gait energy atlas of the standard angle are respectively input into the corresponding feature extraction networks for feature extraction, performed in sequence through convolution, pooling and normalization, to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector is the image feature vector corresponding to the gait energy atlas of the target view angle, and the second image feature vector is the image feature vector corresponding to the gait energy atlas of the standard angle. The first image feature vector and the second image feature vector are then passed to the discrimination network, which judges the similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard angle.
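A minimal PyTorch-style sketch of the twin structure of fig. 8, with two weight-sharing feature extraction branches (convolution, pooling and normalization) and a distance-based discrimination step implementing E_w(X_1, X_2) = ||G_w(X_1) − G_w(X_2)||; the layer sizes, the 64×64 input assumption and the mapping of the distance to a similarity score are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """G_w: convolution, pooling and normalization followed by a feature vector."""
    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 16 * 16, feature_dim)   # assumes 64x64 input gait energy maps

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class TwinNetwork(nn.Module):
    """Both branches use the same extractor, i.e. the parameters are shared."""
    def __init__(self):
        super().__init__()
        self.extractor = FeatureExtractor()

    def forward(self, target_view_gei, standard_view_gei):
        f1 = self.extractor(target_view_gei)       # first image feature vector
        f2 = self.extractor(standard_view_gei)     # second image feature vector
        distance = torch.norm(f1 - f2, dim=1)      # E_w(X1, X2) = ||G_w(X1) - G_w(X2)||
        return torch.exp(-distance)                # map the distance to a similarity in (0, 1]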
S105, performing gait recognition processing on the target to be detected based on the similarity.
Specifically, after the similarity is determined, it is judged whether the similarity is larger than a preset threshold. If so, the gait energy atlas of the target view angle is determined to be a gait energy atlas of the standard gait view angle. A vectorization operation is then performed on the gait energy map of the standard view angle to obtain a processed gait energy map, feature dimension reduction is performed on the processed gait energy map to determine the feature vector of the gait energy map of the standard view angle, and gait recognition processing is performed on the target to be detected to obtain the gait information of the target to be detected.
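A minimal sketch of this decision and feature-extraction step; the threshold value, the use of PCA for feature dimension reduction, and the gallery-matching helper are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

SIMILARITY_THRESHOLD = 0.8          # preset threshold; the actual value is application-specific

def fit_gallery(standard_view_geis, n_components=32):
    """Vectorize the standard-view gait energy maps and fit a dimension-reduction model."""
    vectors = np.stack([g.reshape(-1) for g in standard_view_geis])   # vectorization
    pca = PCA(n_components=n_components)
    gallery_features = pca.fit_transform(vectors)                     # feature dimension reduction
    return pca, gallery_features

def recognize(similarity, converted_gei, pca, gallery_features, gallery_ids):
    """If the converted map passes the threshold, extract its feature vector and match it."""
    if similarity <= SIMILARITY_THRESHOLD:
        return None                                    # not accepted as a standard-view map
    probe = pca.transform(converted_gei.reshape(1, -1))
    distances = np.linalg.norm(gallery_features - probe, axis=1)
    return gallery_ids[int(np.argmin(distances))]      # identity of the closest gallery sample
```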
Further, taking the object to be detected as the passenger as an example, gait information of the object to be detected and identity information of the passenger can be fused, and identity authentication and tracking can be performed on the passenger.
Gait recognition is non-contact, which improves robustness and security. Gait data can be obtained simply by the person to be identified walking through the monitored area, without requiring their deliberate cooperation; gait features are therefore easier to acquire. Gait features also perform well in both low-definition and long-range recognition: they can be captured at a distance, have low requirements on image quality, and give high recognition accuracy both indoors and outdoors. Meanwhile, gait characteristics are an objective reflection of an individual's body shape and long-term walking habits. Research indicates that different people have unique gait characteristics; unlike information such as clothing and carried objects, gait is subconscious behavior that is difficult to hide or disguise under normal conditions, and it can therefore be used as a feature to match pedestrians within a certain time interval.
According to the gait recognition method based on the countermeasure network, video data of a target to be detected are obtained, a preset algorithm is adopted to conduct segmentation extraction processing on the video data, a gait energy image set of the target to be detected is determined, the gait energy image set comprises a gait energy image of any angle, then the gait energy image set of any angle is input into a pre-trained gait view angle conversion network to be processed, the gait energy image set of the target view angle is obtained, the gait energy image set of the target view angle is input into a pre-trained twin neural network model to be processed, the similarity of the gait energy image set of the target view angle and the gait energy image set of the standard view angle is calculated, and recognition processing is conducted on the target to be detected based on the similarity. Compared with the related art, the gait energy diagram of any view angle can be converted into the gait energy diagram set of the target view angle through the gait view angle conversion network, and the similarity is calculated through the twin neural network model, so that the gait recognition of the target to be detected can be accurately performed, the efficiency of the identity recognition of the target to be detected is further improved, and rapid and accurate data support can be brought to applications such as pedestrian information inquiry, identity tracking and inquiry.
On the other hand, fig. 9 is a schematic structural diagram of a gait recognition device based on an countermeasure network according to an embodiment of the present application. The device may be a device within a terminal or a server, as shown in fig. 9, the device 700 includes:
an acquisition module 710, configured to acquire video data of an object to be detected;
the determining module 720 is configured to perform segmentation extraction processing on the video data by using a preset algorithm, and determine a gait energy atlas of the object to be detected, where the gait energy atlas includes gait energy maps of any angle;
a first processing module 730, configured to input the gait energy atlas of any view angle into a pre-trained gait view angle conversion network for processing, so as to obtain a gait energy atlas of the target view angle;
the second processing module 740 is configured to input the gait energy atlas of the target view angle into a pre-trained twin neural network model for processing and calculating a similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle;
the gait recognition module 750 is configured to perform gait recognition processing on the target to be detected based on the similarity.
Optionally, the determining module 720 is configured to:
carrying out segmentation extraction processing on the video data to obtain a gait binary image;
Determining the centroid of the gait binary image and boundary points corresponding to the upper, lower, left and right directions;
taking the centroid as the center, and performing cropping by taking the distance between the centroid and the boundary point in each direction as the cropping distance, to obtain a gait contour binary image sequence;
and averaging the gait contour binary image sequence over each gait cycle to determine a gait energy image set.
Optionally, the gait perspective conversion network is constructed by:
acquiring a historical gait energy image set of any visual angle and a historical gait energy image of a target visual angle;
mapping the historical gait energy atlas of any view angle and the historical gait energy image of the target view angle through an initial generator to obtain a mapped gait energy atlas of any view angle and a mapped gait energy atlas of the target view angle;
constructing a loss function through an initial discriminator based on the historical gait energy atlas of any view angle, the historical gait energy atlas of the target view angle, the mapped gait energy atlas of any view angle and the mapped gait energy atlas of the target view angle;
and training the initial discriminator and the initial generator according to the minimization of the loss function to obtain the gait visual angle conversion network.
Optionally, the second processing module 740 is specifically configured to:
respectively inputting the gait energy atlas of the target visual angle and the gait energy atlas of the standard angle into corresponding feature extraction networks to perform feature extraction to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector is an image feature vector corresponding to the gait energy atlas of the target visual angle, and the second image feature vector is an image feature vector corresponding to the gait energy atlas of the standard angle;
and judging the similarity of the gait energy atlas of the target visual angle and the gait energy atlas of the standard angle through a judging network for the first image feature vector and the second image feature vector.
Optionally, the second processing module 740 is specifically configured to:
and carrying out feature extraction on the gait energy atlas of the target visual angle and the gait energy atlas of the standard angle through convolution, pooling and normalization in sequence to obtain a first image feature vector and a second image feature vector.
Optionally, the gait recognition module 750 is specifically configured to:
when the similarity is larger than a preset threshold, determining the gait energy atlas of the target visual angle as the gait energy atlas of the standard gait visual angle;
And extracting the characteristic vector of the gait energy atlas of the standard visual angle, and performing gait recognition processing.
Optionally, the gait recognition module 750 is specifically configured to:
vectorizing the gait energy diagram of the standard visual angle to obtain a processed gait energy diagram;
and performing feature dimension reduction processing on the processed gait energy diagram, and determining feature vectors of the gait energy diagram of the standard visual angle.
It can be understood that the functions of each functional module of the gait recognition device based on the countermeasure network in this embodiment may be specifically implemented according to the method in the above method embodiment, and the specific implementation process thereof may refer to the related description of the above method embodiment and will not be repeated herein.
In another aspect, an apparatus provided by an embodiment of the present application includes a memory, a processor, and a computer program stored on the memory and executable on the processor, which when executed implements the method for gait recognition based on a countermeasure network as described above.
Referring now to fig. 10, fig. 10 is a schematic structural diagram of a computer system of a terminal device according to an embodiment of the present application.
As shown in fig. 10, the computer system 800 includes a Central Processing Unit (CPU) 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, ROM 802 and RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from the network through the communication section 809, and/or installed from the removable medium 811. The above-described functions defined in the system of the present application are performed when the computer program is executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software, or may be implemented by hardware. The described units or modules may also be provided in a processor, for example described as: a processor comprising an acquisition module, a determining module, a first processing module, a second processing module and a gait recognition module. The names of these units or modules do not in some cases constitute a limitation on the unit or module itself; for example, the acquisition module may also be described as "a module for acquiring video data of an object to be detected".
As another aspect, the present application also provides a computer-readable storage medium that may be included in the electronic device described in the above embodiments; or may be present alone without being incorporated into the electronic device. The computer readable storage medium stores one or more programs that when executed by one or more processors perform the challenge network based gait recognition method described in the present application:
acquiring video data of an object to be detected;
dividing, extracting and processing the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy maps of any angle;
inputting the gait energy atlas of any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas of a target view angle;
inputting the gait energy atlas of the target visual angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas of the target visual angle and the gait energy atlas of the standard visual angle;
and performing gait recognition processing on the target to be detected based on the similarity.
In summary, according to the gait recognition method, device, equipment and medium based on the countermeasure network provided in the embodiments of the present application, by acquiring video data of a target to be detected, and performing segmentation extraction processing on the video data by adopting a preset algorithm, a gait energy atlas of the target to be detected is determined, the gait energy atlas includes a gait energy image of any angle, then the gait energy atlas of any view angle is input into a pre-trained gait view angle conversion network to be processed, a gait energy atlas of a target view angle is obtained, and the gait energy atlas of the target view angle is input into a pre-trained twin neural network model to be processed and calculate the similarity between the gait energy atlas of the target view angle and the gait energy atlas of a standard view angle, and based on the similarity, the gait recognition processing is performed on the target to be detected. Compared with the related art, the gait energy diagram of any view angle can be converted into the gait energy diagram set of the target view angle through the gait view angle conversion network, and the similarity is calculated through the twin neural network model, so that the gait recognition of the target to be detected can be accurately performed, the efficiency of the identity recognition of the target to be detected is further improved, and rapid and accurate data support can be brought to applications such as pedestrian information inquiry, identity tracking and inquiry.
Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
The foregoing description covers only the preferred embodiments of the present application and is an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but is also intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example embodiments formed by replacing the above features with technical features of similar function disclosed in (but not limited to) the present application.

Claims (9)

1. A gait recognition method based on a countermeasure network, comprising:
Acquiring video data of an object to be detected;
dividing, extracting and processing the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy maps of any angle;
inputting the gait energy atlas of any angle into a pre-trained gait visual angle conversion network for processing to obtain a gait energy atlas of a target visual angle;
inputting the gait energy atlas of the target visual angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas of the target visual angle and the gait energy atlas of the standard visual angle;
performing gait recognition processing on the target to be detected based on the similarity;
the video data is subjected to segmentation and extraction processing by adopting a preset algorithm, and the gait energy atlas of the target to be detected is determined, which comprises the following steps:
performing segmentation extraction processing on the video data to obtain a gait binary image;
determining the centroid of the gait binary image and boundary points corresponding to the upper, lower, left and right directions;
taking the mass center as a center, and performing cutting processing by taking the distance between the mass center and the boundary point corresponding to each azimuth as a cutting distance to obtain a gait wheel binary image sequence;
And (3) carrying out average value processing on the gait wheel binary image sequence in each period to determine a gait energy image set.
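As a concrete illustration of these sub-steps, the sketch below centres each binary silhouette on its centroid, crops it using the centroid-to-boundary distances, and averages the crops of one gait cycle into a gait energy image. The output size and the normalisation to [0, 1] are assumptions; the claim does not fix them.

    import cv2
    import numpy as np

    def silhouette_to_crop(mask: np.ndarray, out_size=(128, 88)) -> np.ndarray:
        """Centre a binary silhouette on its centroid and crop to the body extent.

        `mask` is a binary image produced by the segmentation step; `out_size`
        is an assumed (height, width) for the normalised crop.
        """
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            return np.zeros(out_size, dtype=np.float32)
        cy, cx = ys.mean(), xs.mean()                                       # centroid
        top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()   # boundary points
        half_h = max(cy - top, bottom - cy)                                 # centroid-to-boundary distances
        half_w = max(cx - left, right - cx)
        y0, y1 = max(int(cy - half_h), 0), int(cy + half_h) + 1
        x0, x1 = max(int(cx - half_w), 0), int(cx + half_w) + 1
        crop = mask[y0:y1, x0:x1].astype(np.float32)
        crop /= max(float(crop.max()), 1e-6)                                # scale to [0, 1]
        return cv2.resize(crop, (out_size[1], out_size[0]))

    def gait_energy_image(silhouettes: list) -> np.ndarray:
        """Average the centred silhouette crops of one gait cycle into a gait energy image."""
        return np.mean([silhouette_to_crop(m) for m in silhouettes], axis=0)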
2. The method according to claim 1, wherein the gait view angle conversion network is constructed by:
acquiring a historical gait energy atlas of any view angle and a historical gait energy atlas of the target view angle;
mapping the historical gait energy atlas of any view angle and the historical gait energy atlas of the target view angle through an initial generator to obtain a mapped gait energy atlas of any view angle and a mapped gait energy atlas of the target view angle;
constructing a loss function through an initial discriminator based on the historical gait energy atlas of any view angle, the historical gait energy atlas of the target view angle, the mapped gait energy atlas of any view angle and the mapped gait energy atlas of the target view angle;
and training the initial discriminator and the initial generator by minimizing the loss function to obtain the gait view angle conversion network.
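For reference, a minimal adversarial training step consistent with this construction is sketched below in PyTorch: an initial generator maps an any-view gait energy image towards the target view, an initial discriminator separates real target-view images from generated ones, and both are updated by minimising the resulting loss. The layer configuration, the added L1 term and the learning rates are assumptions for illustration, not the patented network.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Placeholder networks; the claim does not fix the layer configuration,
    # so these shapes are assumptions for illustration only.
    generator = nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
    )
    discriminator = nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Flatten(), nn.LazyLinear(1),
    )

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(gei_any_view: torch.Tensor, gei_target_view: torch.Tensor):
        """One adversarial update: the discriminator learns to separate real
        target-view GEIs from generated ones; the generator learns to fool it,
        with an (assumed) L1 term pulling its output towards the paired target-view GEI."""
        fake = generator(gei_any_view)

        # discriminator step
        opt_d.zero_grad()
        d_real = discriminator(gei_target_view)
        d_fake = discriminator(fake.detach())
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        loss_d.backward()
        opt_d.step()

        # generator step
        opt_g.zero_grad()
        d_fake = discriminator(fake)
        loss_g = bce(d_fake, torch.ones_like(d_fake)) + F.l1_loss(fake, gei_target_view)
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()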
3. The method of claim 1, wherein the twin neural network comprises two feature extraction networks and a discrimination network, and wherein inputting the gait energy atlas of the target view angle into the pre-trained twin neural network model for processing and calculating the similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle comprises:
respectively inputting the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle into the corresponding feature extraction networks for feature extraction to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector is the image feature vector corresponding to the gait energy atlas of the target view angle, and the second image feature vector is the image feature vector corresponding to the gait energy atlas of the standard view angle;
and determining, through the discrimination network, the similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle from the first image feature vector and the second image feature vector.
4. The method of claim 3, wherein inputting the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle into the corresponding feature extraction networks for feature extraction to obtain the first image feature vector and the second image feature vector comprises:
performing feature extraction on the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle through convolution, pooling and normalization in sequence to obtain the first image feature vector and the second image feature vector.
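A compact twin-network sketch matching claims 3 and 4 is given below: a shared convolution-pooling-normalization branch plays the role of the two feature extraction networks (weight sharing is a common Siamese choice; the claims only require two branches), and a small discrimination network maps the pair of image feature vectors to a similarity score. All layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class GaitSiamese(nn.Module):
        """Twin-branch sketch: one shared feature extractor is applied to both the
        target-view GEI and the standard-view GEI, and a small discrimination head
        outputs a similarity score in (0, 1)."""

        def __init__(self):
            super().__init__()
            self.extractor = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
                nn.LazyLinear(128),                 # image feature vector
            )
            self.discriminator = nn.Sequential(     # discrimination network on the vector pair
                nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid(),
            )

        def forward(self, gei_target_view: torch.Tensor, gei_standard_view: torch.Tensor):
            v1 = self.extractor(gei_target_view)    # first image feature vector
            v2 = self.extractor(gei_standard_view)  # second image feature vector
            return self.discriminator(torch.abs(v1 - v2)).squeeze(-1)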
5. The method according to claim 1, wherein performing gait recognition processing on the target to be detected based on the similarity comprises:
when the similarity is greater than a preset threshold, determining the gait energy atlas of the target view angle as a gait energy atlas of the standard view angle;
and extracting the feature vector of the gait energy atlas of the standard view angle, and performing gait recognition processing.
6. The method of claim 5, wherein extracting the feature vector of the gait energy atlas of the standard view angle comprises:
performing vectorization processing on the gait energy image of the standard view angle to obtain a processed gait energy image;
and performing feature dimension reduction processing on the processed gait energy image to determine the feature vector of the gait energy atlas of the standard view angle.
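Claims 5 and 6 together describe the decision and feature extraction stage; the sketch below applies an assumed similarity threshold, then flattens (vectorises) the accepted standard-view gait energy image and reduces its dimensionality. PCA is used purely as an example reduction method, since the claims do not name one.

    import numpy as np
    from sklearn.decomposition import PCA

    def gait_feature_vector(gei: np.ndarray, pca: PCA) -> np.ndarray:
        """Flatten a standard-view gait energy image into a vector and reduce its
        dimensionality with a pre-fitted PCA model (an assumed choice of reduction)."""
        flat = gei.reshape(1, -1).astype(np.float32)   # vectorised gait energy image
        return pca.transform(flat)[0]                  # reduced feature vector

    def recognise(similarity: float, gei_target_view: np.ndarray, pca: PCA, threshold: float = 0.8):
        """Decision step of claim 5: accept the converted GEI as a standard-view GEI
        only when the twin-network similarity exceeds the (assumed) threshold."""
        if similarity <= threshold:
            return None
        return gait_feature_vector(gei_target_view, pca)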
7. A gait recognition device based on a countermeasure network, the device comprising:
the acquisition module is used for acquiring video data of the target to be detected;
the determining module is used for performing segmentation and extraction processing on the video data by using a preset algorithm and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises gait energy images of any angle;
the first processing module is used for inputting the gait energy atlas of any angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas of the target view angle;
the second processing module is used for inputting the gait energy atlas of the target view angle into a pre-trained twin neural network model for processing and calculating the similarity between the gait energy atlas of the target view angle and the gait energy atlas of the standard view angle;
the gait recognition module is used for carrying out gait recognition processing on the target to be detected based on the similarity;
the determining module is specifically configured to: perform segmentation and extraction processing on the video data to obtain a gait binary image;
determine the centroid of the gait binary image and the boundary points corresponding to the upper, lower, left and right directions;
take the centroid as the center and the distance between the centroid and the boundary point in each direction as the cropping distance, and perform cropping processing to obtain a gait silhouette binary image sequence;
and perform mean value processing on the gait silhouette binary image sequence in each gait cycle to determine the gait energy atlas.
8. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the countermeasure network-based gait recognition method according to any one of claims 1-6.
9. A computer-readable storage medium having stored thereon a computer program which, when executed, implements the countermeasure network-based gait recognition method according to any one of claims 1-6.
CN202011615027.3A 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network Active CN112633222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011615027.3A CN112633222B (en) 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011615027.3A CN112633222B (en) 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network

Publications (2)

Publication Number Publication Date
CN112633222A (en) 2021-04-09
CN112633222B (en) 2023-04-28

Family

ID=75286989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011615027.3A Active CN112633222B (en) 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network

Country Status (1)

Country Link
CN (1) CN112633222B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420737B (en) * 2021-08-23 2022-01-25 成都飞机工业(集团)有限责任公司 3D printing pattern recognition method based on convolutional neural network
CN114565970A (en) * 2022-01-27 2022-05-31 内蒙古工业大学 High-precision multi-angle behavior recognition method based on deep learning


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106580350A (en) * 2016-12-07 2017-04-26 中国民用航空总局第二研究所 Fatigue condition monitoring method and device
US20200017124A1 (en) * 2018-07-12 2020-01-16 Sf Motors, Inc. Adaptive driver monitoring for advanced driver-assistance systems
CN109726654A (en) * 2018-12-19 2019-05-07 河海大学 A kind of gait recognition method based on generation confrontation network
CN109886141B (en) * 2019-01-28 2023-06-06 同济大学 Pedestrian re-identification method based on uncertainty optimization
CN110276739B (en) * 2019-07-24 2021-05-07 中国科学技术大学 Video jitter removal method based on deep learning
CN110570490B (en) * 2019-09-06 2021-07-30 北京航空航天大学 Saliency image generation method and equipment
CN111462173B (en) * 2020-02-28 2023-11-17 大连理工大学人工智能大连研究院 Visual tracking method based on twin network discrimination feature learning
CN111639580B (en) * 2020-05-25 2023-07-18 浙江工商大学 Gait recognition method combining feature separation model and visual angle conversion model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007219865A (en) * 2006-02-17 2007-08-30 Hitachi Ltd Abnormal behavior detection device
CN108681774A (en) * 2018-05-11 2018-10-19 电子科技大学 Based on the human body target tracking method for generating confrontation network negative sample enhancing
CN110070029A (en) * 2019-04-17 2019-07-30 北京易达图灵科技有限公司 A kind of gait recognition method and device
CN110765925A (en) * 2019-10-18 2020-02-07 河南大学 Carrier detection and gait recognition method based on improved twin neural network
CN111310668A (en) * 2020-02-18 2020-06-19 大连海事大学 Gait recognition method based on skeleton information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xueru Bai et al. Radar-Based Human Gait Recognition Using Dual-Channel Deep Convolutional Neural Network. IEEE Transactions on Geoscience & Remote Sensing, 2019, Vol. 57, No. 12, 9767-9778. *
Zhu Yingzhao et al. Current Status and Development Trends of Gait Recognition. Telecommunications Science, 2020, Vol. 36, No. 8, 130-138. *

Also Published As

Publication number Publication date
CN112633222A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
US20200364443A1 (en) Method for acquiring motion track and device thereof, storage medium, and terminal
CN110070010B (en) Face attribute association method based on pedestrian re-recognition
Xie et al. Multilevel cloud detection in remote sensing images based on deep learning
RU2431190C2 (en) Facial prominence recognition method and device
CN112052831B (en) Method, device and computer storage medium for face detection
CN111723611A (en) Pedestrian re-identification method and device and storage medium
CN109711416B (en) Target identification method and device, computer equipment and storage medium
CN112446270A (en) Training method of pedestrian re-identification network, and pedestrian re-identification method and device
Holte et al. Fusion of range and intensity information for view invariant gesture recognition
CN110222572B (en) Tracking method, tracking device, electronic equipment and storage medium
CN112633222B (en) Gait recognition method, device, equipment and medium based on countermeasure network
CN112052830B (en) Method, device and computer storage medium for face detection
CN113569598A (en) Image processing method and image processing apparatus
CN113033519A (en) Living body detection method, estimation network processing method, device and computer equipment
CN108875497B (en) Living body detection method, living body detection device and computer storage medium
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
CN112052832A (en) Face detection method, device and computer storage medium
Liu et al. Iris recognition in visible spectrum based on multi-layer analogous convolution and collaborative representation
CN113378790B (en) Viewpoint positioning method, apparatus, electronic device, and computer-readable storage medium
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN113706550A (en) Image scene recognition and model training method and device and computer equipment
Huang et al. Multi‐class obstacle detection and classification using stereovision and improved active contour models
CN115018886B (en) Motion trajectory identification method, device, equipment and medium
Huang et al. Whole-body detection, recognition and identification at altitude and range
Barman et al. Person re-identification using overhead view fisheye lens cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant