CN112633222A - Gait recognition method, apparatus, device and medium based on adversarial network - Google Patents

Gait recognition method, apparatus, device and medium based on adversarial network

Info

Publication number
CN112633222A
CN112633222A (application number CN202011615027.3A)
Authority
CN
China
Prior art keywords
gait
view angle
gait energy
atlas
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011615027.3A
Other languages
Chinese (zh)
Other versions
CN112633222B (en)
Inventor
张平
曹铁
邵黎明
甄军平
李海博
周科杰
张立波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation Electronic Technology Co ltd
Second Research Institute of CAAC
Original Assignee
Civil Aviation Electronic Technology Co ltd
Second Research Institute of CAAC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation Electronic Technology Co ltd, Second Research Institute of CAAC filed Critical Civil Aviation Electronic Technology Co ltd
Priority to CN202011615027.3A priority Critical patent/CN112633222B/en
Publication of CN112633222A publication Critical patent/CN112633222A/en
Application granted granted Critical
Publication of CN112633222B publication Critical patent/CN112633222B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods


Abstract

The application provides a gait recognition method, apparatus, device and medium based on an adversarial network. The method comprises: acquiring video data of a target to be detected; segmenting and extracting the video data with a preset algorithm to determine a gait energy atlas of the target to be detected, the atlas comprising gait energy images at arbitrary view angles; inputting the gait energy atlas at an arbitrary view angle into a pre-trained gait view-angle conversion network to obtain a gait energy atlas at the target view angle; inputting the gait energy atlas at the target view angle into a pre-trained twin (Siamese) neural network model to calculate its similarity to the gait energy atlas at a standard view angle; and performing gait recognition on the target to be detected based on the similarity. The method can recognize the gait of the target accurately, improves the efficiency of identity recognition, and provides fast, accurate data support for applications such as pedestrian information query and identity tracking.

Description

Gait recognition method, apparatus, device and medium based on adversarial network
Technical Field
The invention relates to the technical field of computer vision and machine learning, and in particular to a gait recognition method, apparatus, device and medium based on an adversarial network.
Background
With the rapid development of artificial intelligence, AI technology can be combined with detection-and-sensing technology, biometric identification technology and the like to build intelligent security platforms that provide functions such as intelligent alarming, intelligent control, intelligent crime investigation and intelligent anti-terrorism in public places, safeguarding public safety. Image and video recognition, and gait-feature recognition in particular, is an important step in reconstructing pedestrian trajectories and searching for specific persons. Gait features comprise the body contour, posture, limb movement and other characteristics of a person while walking; using a person's walking posture and body shape for identity recognition therefore has very high application value.
At present, the prior art identifies and tracks a target object by collecting information such as face, fingerprint, iris, body shape, clothing and carried objects. However, when the target wears a hat or a mask, or is otherwise disguised, the accuracy of identifying the target is low.
Disclosure of Invention
In view of the above, the present invention provides a gait recognition method, apparatus, device and medium based on an adversarial network, which at least partially solve the problems in the prior art.
In a first aspect, the present application provides a gait recognition method based on an adversarial network, comprising:
acquiring video data of a target to be detected;
adopting a preset algorithm to carry out segmentation extraction processing on the video data, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises a gait energy map at any angle;
inputting the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle;
inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle;
and carrying out gait recognition processing on the target to be detected based on the similarity.
In one embodiment, the segmenting and extracting the video data by using a preset algorithm to determine the gait energy atlas of the target to be detected includes:
carrying out segmentation extraction processing on the video data to obtain a gait binary image;
determining the centroid of the gait binary image and the boundary points in the up, down, left and right directions;
taking the centroid as the centre and the distance from the centroid to the boundary point in each direction as the cropping distance, performing cropping to obtain a gait contour binary image sequence;
and averaging the gait contour binary image sequence over each gait cycle to determine the gait energy atlas.
In one embodiment, the gait perspective conversion network is constructed by the following steps:
acquiring a historical gait energy atlas at any view angle and a historical gait energy atlas at a target view angle;
mapping the historical gait energy atlas at any view angle and the historical gait energy atlas at a target view angle through an initial generator to obtain a mapped gait energy atlas at any view angle and a mapped gait energy atlas at a target view angle;
constructing a loss function through an initial discriminator based on the historical gait energy atlas at any view angle, the historical gait energy atlas at the target view angle, the mapped gait energy atlas at any view angle and the mapped gait energy atlas at the target view angle;
and training the initial discriminator and the initial generator according to the minimization of the loss function to obtain a gait view angle conversion network.
In one embodiment, the twin neural network comprises two feature extraction networks and a discrimination network, and inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle comprises:
respectively inputting the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle into the corresponding feature extraction networks for feature extraction to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector corresponds to the gait energy atlas at the target view angle and the second image feature vector corresponds to the gait energy atlas at the standard view angle;
and determining, through the discrimination network, the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle from the first image feature vector and the second image feature vector.
In one embodiment, inputting the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle into the corresponding feature extraction networks for feature extraction to obtain a first image feature vector and a second image feature vector comprises:
performing feature extraction on the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle sequentially through convolution, pooling and normalization to obtain the first image feature vector and the second image feature vector.
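The two shared-weight feature-extraction branches and the discrimination step can be sketched in a few lines. In this toy NumPy sketch (not the patent's actual network), a mean-pooling plus a shared linear projection stands in for the convolution, pooling and normalization layers, and cosine similarity stands in for the discrimination network; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))   # projection weights shared by both branches

def extract(img):
    """Shared-weight branch: 2x2 mean pooling, flatten, linear projection,
    L2 normalization -- a stand-in for the conv/pool/norm layers."""
    pooled = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # 8x8 -> 4x4
    feat = pooled.ravel() @ W                            # 16 -> 8 features
    return feat / np.linalg.norm(feat)

def similarity(img_a, img_b):
    """Discrimination step: cosine similarity of the two feature vectors."""
    return float(extract(img_a) @ extract(img_b))

gei_target = rng.random((8, 8))      # GEI converted to the target view
gei_other = rng.random((8, 8))       # GEI of a different pedestrian

print(round(similarity(gei_target, gei_target), 6))   # identical inputs -> 1.0
print(-1.0 <= similarity(gei_target, gei_other) <= 1.0)
```

Because both branches share the same weights, two gait energy images of the same pedestrian map to nearby feature vectors, which is what the discrimination network exploits.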
In one embodiment, the gait recognition processing of the target to be detected based on the similarity includes:
when the similarity is greater than a preset threshold, determining that the gait energy atlas at the target view angle is a gait energy atlas at the standard view angle;
and extracting the feature vector of the gait energy atlas at the standard view angle and performing gait recognition.
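The decision rule above is a simple threshold test, sketched below (the threshold value is illustrative; the patent leaves it as a preset):

```python
def recognize(similarity, threshold=0.9):
    """Accept the target-view GEI as matching the standard-view GEI when
    the similarity exceeds the preset threshold (0.9 is illustrative)."""
    return similarity > threshold

print(recognize(0.95), recognize(0.42))   # -> True False
```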
In one embodiment, extracting the feature vector of the gait energy image at the standard view angle comprises:
vectorizing the gait energy image at the standard view angle to obtain a processed gait energy image;
and performing feature dimension reduction on the processed gait energy image to determine the feature vector of the gait energy image at the standard view angle.
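The patent does not fix a dimension-reduction method; one common choice is PCA, sketched below on vectorized gait energy images (all names and sizes are illustrative):

```python
import numpy as np

def reduce_features(gei_vectors, k=2):
    """PCA-style dimension reduction: centre the vectorized GEIs and
    project them onto their top-k principal components."""
    X = np.asarray(gei_vectors, dtype=np.float64)
    Xc = X - X.mean(axis=0)                       # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # N x k feature vectors

rng = np.random.default_rng(1)
geis = rng.random((5, 64))       # five vectorized 8x8 gait energy images
feats = reduce_features(geis, k=2)
print(feats.shape)               # -> (5, 2)
```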
In a second aspect, the present application provides a gait recognition apparatus based on an adversarial network, the apparatus comprising:
the acquisition module is used for acquiring video data of a target to be detected;
the determining module is used for carrying out segmentation extraction processing on the video data by adopting a preset algorithm to determine a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises a gait energy map at any angle;
the first processing module is used for inputting the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle;
the second processing module is used for inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle;
and the gait recognition module is used for carrying out gait recognition processing on the target to be detected based on the similarity.
In a third aspect, an embodiment of the present application provides a device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the adversarial-network-based gait recognition method described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the adversarial-network-based gait recognition method described in the first aspect.
According to the gait recognition method, apparatus, device and medium based on an adversarial network provided by the embodiments of the present application, video data of the target to be detected is acquired and then segmented and extracted with a preset algorithm to determine the gait energy atlas of the target, which comprises gait energy images at arbitrary view angles. The gait energy atlas at an arbitrary view angle is input into a pre-trained gait view-angle conversion network to obtain the gait energy atlas at the target view angle, which is in turn input into a pre-trained twin neural network model to calculate its similarity to the gait energy atlas at a standard view angle; gait recognition of the target is performed based on this similarity. Compared with the prior art, the gait energy image at any view angle can be converted into the gait energy atlas at the target view angle through the gait view-angle conversion network and the similarity calculated by the twin neural network model, so the gait of the target can be recognized accurately, the efficiency of identity recognition is improved, and fast, accurate data support is provided for applications such as pedestrian information query and identity tracking.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. The drawings described here show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a diagram of a real-time environment architecture for a confrontation network-based gait recognition method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a gait recognition method based on an adversarial network according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a gait recognition method based on an adversarial network according to an embodiment of the present application;
fig. 4 is a schematic diagram of a gait contour binary image sequence provided in an embodiment of the present application;
FIG. 5 is a schematic view of a gait energy image provided by an embodiment of the application;
fig. 6 is a schematic structural diagram of a gait view angle conversion network according to an embodiment of the present application;
fig. 7 is a schematic structural diagram illustrating a gait perspective transformation process according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of a twin neural network model provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a gait recognition apparatus based on an adversarial network according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It should be noted that, in the case of no conflict, the features in the following embodiments and examples may be combined with each other; moreover, all other embodiments that can be derived by one of ordinary skill in the art from the embodiments disclosed herein without making any creative effort fall within the scope of the present disclosure.
As mentioned in the background, image and video recognition is an important step in reconstructing pedestrian trajectories and retrieving specific persons. In the related art, a target object is identified and tracked by collecting information such as face, fingerprint, iris, body shape, clothing and carried objects; however, when the target wears a hat or a mask, or is disguised, identification accuracy is low. Model-matching-based recognition methods model features such as human limbs, joint angles and angular velocities to extract gait features, but the gait features often have to be labelled manually, and the methods depend on high video resolution and clear feature key points, so their computational complexity is high and they hardly meet the requirements of general-purpose gait recognition. Appearance-matching-based algorithms instead build the recognition model from the original features of human gait, such as body contour and walking posture, which lowers the requirement on video clarity. Usually, however, cameras are fixed: when pedestrians enter the capture area from different directions, a multi-view problem arises; under different view angles the pedestrian's posture differs, and so do the recognized gait features. How to accurately recognize gait features under different view angles is therefore a problem to be solved.
In view of these deficiencies, the present scheme converts the gait energy image at any view angle into the gait energy atlas at the target view angle through the gait view-angle conversion network and calculates the similarity through the twin neural network model, so that the gait of the target to be detected can be recognized accurately, the efficiency of identity recognition is improved, and fast, accurate data support is provided for applications such as pedestrian information query and identity tracking.
It can be understood that the adversarial-network-based gait recognition method can be applied in places such as airports, customs, stations, public-security organs and large-scale event venues. When the target to be detected is a passenger, the passenger can be authenticated and tracked by performing gait recognition on the target.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative.
Fig. 1 is an implementation environment architecture diagram of a gait recognition method based on an adversarial network according to an embodiment of the present application. As shown in fig. 1, the implementation environment architecture includes a terminal 100 and a server 200.
The terminal 100 may be a terminal device in various AI application scenarios. For example, the terminal 100 may be a smart home device such as a smart television and a smart television set-top box, or the terminal 100 may be a mobile portable terminal such as a smart phone, a tablet computer, and an e-book reader, or the terminal 100 may be a smart wearable device such as smart glasses and a smart watch, which is not limited in this embodiment.
The server 200 may be a server, or may be a server cluster composed of several servers, or the server 200 may include one or more virtualization platforms, or the server 200 may be a cloud computing service center.
The terminal 100 and the server 200 establish a communication connection therebetween through a wired or wireless network. Optionally, the wireless network or wired network described above uses standard communication techniques and/or protocols. The Network is typically the Internet, but may be any Network including, but not limited to, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wireline or wireless Network, a private Network, or any combination of virtual private networks.
The AI application system can process through a gait view angle conversion network and a twin neural network model in the process of providing AI application service, and is used for carrying out gait recognition processing on a target to be detected. The gait visual angle conversion network and the twin neural network model can be arranged in the server 200 and trained and applied by the server; alternatively, the gait perspective transformation network and the twin neural network model may be provided in the terminal 100 and trained and updated by the server 200.
For ease of understanding and explanation, the gait recognition method, apparatus, device and medium based on an adversarial network provided by the embodiments of the present application are described in detail below with reference to figs. 2 to 10.
Fig. 2 is a flowchart illustrating a gait recognition method based on an adversarial network according to an embodiment of the present invention. The method may be executed by a computer device, which may be the server 200 or the terminal 100 in the system shown in fig. 1, or a combination of the terminal 100 and the server 200. As shown in fig. 2, the method includes:
s101, video data of a target to be detected are obtained.
Specifically, the target to be detected may be a passenger at an airport or other objects at the airport, and the number of the targets to be detected may be one or more. Optionally, the computer device may acquire image data of the target to be detected through the verification gate, or may be provided with a plurality of monitoring devices at different positions of the airport, so that the computer device acquires video data of the target to be detected through each monitoring device.
Taking the target to be detected as the passenger as an example, the image data may include morphological feature information of the passenger in the walking process, and the morphological feature information may include information of the passenger such as body contour, posture, limb, and the like.
S102, carrying out segmentation extraction processing on the video data by adopting a preset algorithm, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises a gait energy map at any angle.
In this embodiment of the application, when the target to be detected is a pedestrian, after the video data of the pedestrian is acquired, the gait profile of the pedestrian is extracted, as shown in fig. 3, the step S102 may include the following steps:
s201, carrying out segmentation and extraction processing on the video data to obtain a gait binary image.
S202, determining the center of mass and boundary points corresponding to the upper, lower, left and right directions of the gait binary image.
And S203, taking the centroid as the centre and the distance from the centroid to the boundary point in each direction as the cropping distance, performing cropping to obtain a gait contour binary image sequence.
And S204, averaging the gait contour binary image sequence over each gait cycle to determine a gait energy atlas.
Specifically, the current input frame and the background frame may be determined from the video data. Subtracting the background-frame pixels from the current-frame pixels yields a difference region: pixels with a large difference belong to the moving target, and pixels with a small difference belong to the background. The difference image is then converted into a gait binary image with a segmentation threshold T. The gait binary image can be obtained by the following formula:
BDt(x, y) = 1, if |It(x, y) − Bt(x, y)| > T; 0, otherwise
where It(x, y) is the current frame, Bt(x, y) is the background frame, BDt(x, y) is the gait binary image, and T is the segmentation threshold.
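The background-subtraction step above can be sketched in a few lines (a minimal NumPy sketch; the function name and the toy frames are illustrative, not from the patent):

```python
import numpy as np

def gait_binary_image(current_frame, background_frame, T):
    """Threshold the absolute frame difference: pixels whose difference
    from the background exceeds T are marked as moving target (1),
    the rest as background (0)."""
    diff = np.abs(current_frame.astype(np.int32) - background_frame.astype(np.int32))
    return (diff > T).astype(np.uint8)

# Tiny example: a 4x4 background and a frame with a bright 2x2 "target"
background = np.full((4, 4), 10, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # moving-object region
bd = gait_binary_image(frame, background, T=50)
print(bd.sum())                # number of foreground pixels -> 4
```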
The choice of the segmentation threshold T has a great influence on the extraction of the gait contour. In a real scene the value of T is not fixed, because it is affected by the illumination intensity. To compute T dynamically and so mitigate the effect of image brightness and contrast, assume the proportion of background pixels is w1, the proportion of foreground (pedestrian) pixels is w2, the mean gray value of the background pixels is u1, and the mean gray value of the foreground pixels is u2. The between-class variance is then
δ = w1·(u − u1)² + w2·(u − u2)²   (1)
where u is the mean gray value of the entire image, u = w1·u1 + w2·u2; substituting into (1) gives δ = w1·w2·(u1 − u2)². By iterating over candidate thresholds and recomputing w1 and w2, the threshold that maximizes δ is found. The larger the between-class variance, the greater the difference between the two parts of the image and the smaller the probability of mis-segmentation, so foreground and background elements are best separated at that threshold. A morphological opening operation is then applied to obtain the complete gait binary image.
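The threshold search described above is essentially Otsu's method; a minimal NumPy sketch follows (variable names and the toy image are illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Search all candidate thresholds and return the one maximizing the
    between-class variance delta = w1*w2*(u1 - u2)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_delta = 0, -1.0
    for t in range(1, 256):
        w1, w2 = prob[:t].sum(), prob[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        u1 = (np.arange(t) * prob[:t]).sum() / w1          # background mean
        u2 = (np.arange(t, 256) * prob[t:]).sum() / w2     # foreground mean
        delta = w1 * w2 * (u1 - u2) ** 2
        if delta > best_delta:
            best_t, best_delta = t, delta
    return best_t

# Bimodal test image: dark background (value 10), bright foreground (value 200)
img = np.full((8, 8), 10, dtype=np.uint8)
img[2:6, 2:6] = 200
t = otsu_threshold(img)
print(10 < t <= 200)   # threshold falls between the two modes -> True
```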
After the gait binary image is obtained, the human-body centroid is determined in it, and by traversing the image pixels the boundary points in the up, down, left and right directions are found. Taking the centroid as the centre and the distance from the centroid to the boundary point in each direction as the cropping distance, the minimum rectangle enclosing the target contour is cropped out, yielding a gait contour binary image sequence, as shown in fig. 4.
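The centroid-based cropping can be sketched as follows (assuming a single-silhouette binary image; names are illustrative):

```python
import numpy as np

def crop_silhouette(binary):
    """Crop the minimum rectangle around the silhouette, centred on its
    centroid, using the centroid-to-boundary distances as crop distances."""
    ys, xs = np.nonzero(binary)
    cy, cx = int(ys.mean()), int(xs.mean())           # centroid
    up, down = cy - ys.min(), ys.max() - cy           # distances to boundary
    left, right = cx - xs.min(), xs.max() - cx
    return binary[cy - up: cy + down + 1, cx - left: cx + right + 1]

sil = np.zeros((10, 10), dtype=np.uint8)
sil[3:7, 2:5] = 1                                     # 4x3 silhouette blob
print(crop_silhouette(sil).shape)                     # -> (4, 3)
```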
After the gait contour binary image sequence is obtained, it can be normalized, and the normalized gait binary image sequence is then averaged over one gait cycle to obtain the gait energy image, which can be computed with the following formula:
G(x, y) = (1/N) · Σt=1..N It(x, y)   (2)
where N is the number of frames in one gait cycle, It(x, y) is the gray value of the normalized gait binary image at time t, and G(x, y) is the gait energy image, as shown in fig. 5. The gait energy atlas is thus obtained by averaging the gait contour binary image sequence over each cycle.
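The averaging step above reduces to a per-pixel mean over the frames of one cycle; a minimal sketch (the toy frames are illustrative):

```python
import numpy as np

def gait_energy_image(sequence):
    """Average the normalized binary silhouettes over one gait cycle:
    G(x, y) = (1/N) * sum over t of I_t(x, y)."""
    stack = np.stack(sequence).astype(np.float64)
    return stack.mean(axis=0)

# Two toy 2x2 "silhouette" frames of one cycle
frames = [np.array([[1, 0], [1, 1]]), np.array([[1, 0], [0, 1]])]
gei = gait_energy_image(frames)
print(gei)       # [[1.  0. ] [0.5 1. ]]
```

Pixels that are foreground in every frame stay at 1; pixels that move take intermediate values, which is what encodes the gait dynamics.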
S103, inputting the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle.
Specifically, the gait features captured by cameras at different angles differ, because the visible characteristics of the target differ. When the shooting view angle is 90°, i.e., the walking direction of the target is perpendicular to the camera's shooting direction, the gait features retain the most information. An adversarial neural network is therefore proposed whose structure can convert features captured at different angles, with different clothing and different carried objects, to the 90° view. The adversarial network consists of a discriminator and a generator. The discriminator judges whether an image comes from the real sample set or the generated (fake) sample set, outputting a value close to 1 for real samples and close to 0 for fake ones. The generator tries to make its generated samples as close as possible to the real samples, so that the discriminator cannot tell their source. The goals of the discriminator and the generator are thus opposed, hence the name adversarial network.
It should be noted that, as shown in fig. 6, the gait view-angle conversion network can be constructed as follows. First, acquire a historical gait energy atlas at arbitrary view angles and a historical gait energy atlas at the target view angle. Then map them through the initial generators to obtain a mapped gait energy atlas at the arbitrary view angle and a mapped gait energy atlas at the target view angle. Next, construct a loss function through the initial discriminators based on the historical and mapped gait energy atlases at both view angles. Finally, train the initial discriminators and generators by minimizing this loss function to obtain the gait view-angle conversion network.
Specifically, the countermeasure network comprises two initial generators, F and G, and two initial discriminators, Dx and Dy. Suppose there are two data domains in the countermeasure network: X, the gait energy atlas at any view angle, and Y, the gait energy atlas at the target view angle. Assume that the initial generators satisfy the formulas:
y′=G(x) (3)
x′=F(y) (4)
where x is an element of the set X and y is an element of the set Y; the generator G maps x to y′, and the generator F maps y back to x′.
Assume that the initial discriminators satisfy the following equations:

Lossx = Dx(x, x′) (5)

Lossy = Dy(y, y′) (6)

That is, Lossx denotes the difference between x and x′, and Lossy denotes the difference between y and y′.
An element x in the X data domain generates y′ by formula (3), and formula (6) gives the difference Lossy between y′ and the element y in the Y data domain; y′ is then mapped back to x′ by formula (4), and Lossx is obtained by formula (5). By training the network, Lossx + Lossy is reduced continuously over successive iterations and finally reaches a balance, at which point the trained gait view angle conversion network is obtained.
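The iterative reduction of Lossx + Lossy described above can be sketched numerically. In this minimal example the generators G and F are stood in for by scalar multipliers and the discriminators Dx, Dy by mean squared differences; the toy data, learning rate, and step count are all illustrative assumptions, since the patent's actual generators and discriminators are neural networks.

```python
import numpy as np

def total_loss(x, y, w, v):
    y_hat = w * x                         # formula (3): y' = G(x)
    x_hat = v * y_hat                     # formula (4): x' = F(y')
    loss_y = ((y - y_hat) ** 2).mean()    # formula (6): Lossy = Dy(y, y')
    loss_x = ((x - x_hat) ** 2).mean()    # formula (5): Lossx = Dx(x, x')
    return loss_x + loss_y

def train(x, y, steps=300, lr=0.02, eps=1e-5):
    """Shrink Lossx + Lossy by numerical gradient descent on w and v."""
    w, v = 0.5, 0.5                       # initial generator parameters
    for _ in range(steps):
        gw = (total_loss(x, y, w + eps, v) - total_loss(x, y, w - eps, v)) / (2 * eps)
        gv = (total_loss(x, y, w, v + eps) - total_loss(x, y, w, v - eps)) / (2 * eps)
        w, v = w - lr * gw, v - lr * gv
    return w, v

x = np.array([1.0, 2.0])   # toy "any view angle" features in domain X
y = 3.0 * x                # toy "target view angle" features in domain Y
w, v = train(x, y)
```

As training proceeds, G learns the forward mapping (w approaches 3) and F the inverse, so the combined loss falls toward the balance point described in the text.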
After the gait view angle conversion network is constructed, a gait energy atlas at any view angle can be input into the pre-trained gait view angle conversion network for processing to obtain the gait energy atlas at the target view angle; that is, all elements in the X data domain are mapped into Y.
Referring to fig. 7, in gait recognition, the data in X are gait energy atlases of the same pedestrian at any view angle, and the data in Y are gait energy atlases of the same pedestrian at the target view angle. The first row shows the gait energy atlas at any view angle, the second row shows the result of converting it to the target view angle through the gait view angle conversion network, and the third row shows the gait energy atlas at the actual standard view angle.
And S104, inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model for processing and calculating the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle.
Specifically, the twin neural network model is a network structure based on distance metric learning, which determines the similarity between two images by calculating the distance between them. The model learns to extract image feature vectors and performs the distance calculation on those vectors at the back end of the network, so that images whose distance is difficult to measure directly in the original space are mapped into a space in which they are easy to distinguish and identify.
It should be noted that the twin neural network structure may be as shown in fig. 8: it includes two parallel feature extraction networks and a discrimination network, with parameters such as weights and biases shared between the feature extraction networks. If sample A and sample B belong to the same class, they form a positive sample pair; if not, they form a negative sample pair.
The twin convolutional neural network described above can be represented by the following formula:
Ew(X1,X2)=||Gw(X1)-Gw(X2)||
where Gw is the feature extractor; the feature extractor is trained by minimizing a loss function to obtain the trained twin convolutional neural network.
In this embodiment, sample A may be the gait energy atlas at the target view angle and sample B may be the gait energy atlas at the standard angle. The two atlases are respectively input into their corresponding feature extraction networks, where features are extracted sequentially through convolution, pooling, and normalization, to obtain a first image feature vector and a second image feature vector; the first image feature vector corresponds to the gait energy atlas at the target view angle, and the second image feature vector corresponds to the gait energy atlas at the standard angle. The discrimination network then judges the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard angle from the two feature vectors.
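The twin comparison described above can be sketched as follows. The shared feature extractor Gw is stood in for by a single linear projection W (in the patent it is a learned convolution/pooling/normalization stack), and the 1/(1 + d) mapping from distance to similarity is an illustrative choice, not prescribed by the text.

```python
import numpy as np

def g_w(x, W):
    """Shared feature extractor G_w applied to a flattened GEI; the same
    weights W are used for both branches of the twin network."""
    return W @ x

def e_w(x1, x2, W):
    """E_w(X1, X2) = ||G_w(X1) - G_w(X2)||, the learned distance."""
    return np.linalg.norm(g_w(x1, W) - g_w(x2, W))

def similarity(x1, x2, W):
    """Map the distance into (0, 1]: identical inputs give similarity 1."""
    return 1.0 / (1.0 + e_w(x1, x2, W))
```

Because the weights are shared, two identical gait energy atlases always map to the same feature vector, giving zero distance regardless of how W was trained.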
And S105, performing gait recognition processing on the target to be detected based on the similarity.
Specifically, after the similarity is determined, it is compared with a preset threshold. If the similarity is greater than the threshold, the gait energy atlas at the target view angle is determined to be the gait energy atlas at the standard view angle. A vectorization operation is then performed on the gait energy atlas at the standard view angle to obtain a processed gait energy atlas, feature dimension reduction is applied to the processed atlas to determine the feature vector of the gait energy atlas at the standard view angle, and gait recognition is performed on the target to be detected to acquire its gait information.
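The decision step can be sketched as follows. The threshold value 0.8 and the projection basis used for feature dimension reduction are illustrative assumptions; the patent only states that a preset threshold and a dimension reduction step are used.

```python
import numpy as np

THRESHOLD = 0.8  # assumed preset threshold; the patent does not fix a value

def is_standard_view(similarity, threshold=THRESHOLD):
    """True when the target-view GEI is judged to match the standard view."""
    return similarity > threshold

def reduced_feature(gei, basis):
    """Vectorize the GEI and project it onto `basis` (shape k x H*W), e.g.
    the top-k principal components, to get a k-dimensional feature vector."""
    return basis @ gei.ravel()
```

A caller would first check `is_standard_view(sim)` and, only on acceptance, compute `reduced_feature` on the accepted atlas for the final recognition step.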
Further, taking the target to be detected as the passenger as an example, the gait information of the target to be detected and the identity information of the passenger can be fused, and the identity authentication and tracking of the passenger can be performed.
Gait recognition is non-contact, which improves robustness and security. Gait data can be acquired whenever the identified person walks through the monitoring area, without his or her cooperation; no conscious coordination is needed, so gait features are easy to obtain. Gait features also perform well in low-definition and long-distance identification: they can be captured remotely, place low demands on image quality, and yield high identification accuracy both indoors and outdoors. Meanwhile, gait features objectively reflect an individual's body and long-term walking habits. Research shows that different people have unique gait features; unlike clothing, carried objects, and similar information, gait is subconscious and is difficult to hide or disguise.
The gait recognition method based on the countermeasure network provided in the embodiment of the present application acquires video data of the target to be detected, applies a preset algorithm to segment and extract the video data, and determines a gait energy atlas of the target to be detected. The gait energy atlas at any view angle is input into a pre-trained gait view angle conversion network to obtain a gait energy atlas at the target view angle, which is then input into a pre-trained twin neural network model to calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle; gait recognition of the target to be detected is performed based on this similarity. Compared with the prior art, converting the gait energy image at any view angle into the gait energy atlas at the target view angle through the gait view angle conversion network and calculating the similarity through the twin neural network model allows accurate gait recognition of the target to be detected, further improves the efficiency of identity recognition, and provides fast and accurate data support for applications such as pedestrian information query and identity tracking.
On the other hand, fig. 9 is a schematic structural diagram of a gait recognition device based on a countermeasure network according to an embodiment of the present application. The apparatus may be an apparatus in a terminal or a server, as shown in fig. 9, the apparatus 700 includes:
an obtaining module 710, configured to obtain video data of a target to be detected;
the determining module 720 is configured to perform segmentation and extraction processing on the video data by using a preset algorithm, and determine a gait energy atlas of the target to be detected, where the gait energy atlas includes a gait energy map at any angle;
the first processing module 730 is configured to input the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle;
the second processing module 740 is configured to input the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle;
and the gait recognition module 750 is configured to perform gait recognition processing on the target to be detected based on the similarity.
Optionally, the determining module 720 is configured to:
carrying out segmentation extraction processing on the video data to obtain a gait binary image;
determining a centroid and boundary points corresponding to the upper, lower, left and right directions of the gait binary image;
cutting by taking the center of mass as a center and taking the distance between the center of mass and the boundary point corresponding to each direction as a cutting distance to obtain a gait wheel binary image sequence;
and carrying out average value processing on the gait wheel binary image sequence in each period to determine a gait energy atlas.
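The four steps above can be sketched with NumPy. Frame sizes are an assumption for brevity: a real pipeline rescales every centroid crop to a common resolution before averaging one period of frames into the gait energy image.

```python
import numpy as np

def centroid_crop(silhouette):
    """Cut with the centroid as center and the centroid-to-boundary
    distances in the four directions as the cutting distances."""
    ys, xs = np.nonzero(silhouette)
    cy, cx = int(round(ys.mean())), int(round(xs.mean()))
    top, bottom = cy - ys.min(), ys.max() - cy
    left, right = cx - xs.min(), xs.max() - cx
    return silhouette[cy - top:cy + bottom + 1, cx - left:cx + right + 1]

def gait_energy_image(frames):
    """Pixel-wise mean of one period of (size-normalized) binary
    silhouettes; each GEI pixel is the fraction of the cycle during
    which that pixel was foreground."""
    return np.stack([f.astype(float) for f in frames]).mean(axis=0)
```

For example, a pixel covered by the torso in every frame gets GEI value 1.0, while a pixel swept by a swinging leg in half the frames gets 0.5.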
Optionally, the gait perspective conversion network is constructed by the following steps:
acquiring a historical gait energy atlas at any view angle and a historical gait energy atlas at a target view angle;
mapping the historical gait energy atlas at any view angle and the historical gait energy atlas at a target view angle through an initial generator to obtain a mapped gait energy atlas at any view angle and a mapped gait energy atlas at a target view angle;
constructing a loss function through an initial discriminator based on a historical gait energy atlas at any view angle, a historical gait energy atlas at a target view angle, a gait energy atlas at any mapped view angle and a gait energy atlas at a target mapped view angle;
and training the initial discriminator and the initial generator according to the minimization of the loss function to obtain the gait visual angle conversion network.
Optionally, the second processing module 740 is specifically configured to:
respectively inputting the gait energy atlas at the target view angle and the gait energy atlas at the standard angle into corresponding feature extraction networks for feature extraction to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector is an image feature vector corresponding to the gait energy atlas at the target view angle, and the second image feature vector is an image feature vector corresponding to the gait energy atlas at the standard angle;
and judging the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard angle through a discrimination network for the first image characteristic vector and the second image characteristic vector.
Optionally, the second processing module 740 is specifically configured to:
and performing feature extraction on the gait energy atlas at the target view angle and the gait energy atlas at the standard angle sequentially through convolution, pooling and normalization to obtain a first image feature vector and a second image feature vector.
Optionally, the gait recognition module 750 is specifically configured to:
when the similarity is larger than a preset threshold value, determining the gait energy atlas at the target view angle as the gait energy atlas at the standard gait view angle;
and extracting the characteristic vector of the gait energy atlas at the standard view angle, and carrying out gait recognition processing.
Optionally, the gait recognition module 750 is specifically configured to:
vectorizing the gait energy diagram at the standard view angle to obtain a processed gait energy diagram;
and performing feature dimension reduction processing on the processed gait energy diagram, and determining the feature vector of the gait energy diagram with the standard view angle.
It can be understood that the functions of the functional modules of the gait recognition device based on the confrontation network of this embodiment can be specifically implemented according to the method in the above method embodiment, and the specific implementation process thereof can refer to the related description of the above method embodiment, and will not be described herein again.
In another aspect, the device provided in this embodiment includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the gait recognition method based on the countermeasure network described above.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a computer system of a terminal device according to an embodiment of the present application.
As shown in fig. 10, the computer system 800 includes a Central Processing Unit (CPU) 801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the system 800. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809, and/or installed from the removable medium 811. When executed by the Central Processing Unit (CPU) 801, the computer program performs the above-described functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor, which may be described as: a processor comprising an acquisition module, a determining module, a first processing module, a second processing module, and a gait recognition module. The names of these units or modules do not in some cases constitute a limitation on the units or modules themselves; for example, the acquisition module may also be described as a module "for acquiring video data of a target to be detected".
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may be separate and not incorporated into the electronic device. The computer readable storage medium stores one or more programs which, when executed by one or more processors, perform the method for counter network based gait recognition described herein:
acquiring video data of a target to be detected;
adopting a preset algorithm to carry out segmentation extraction processing on the video data, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises a gait energy map at any angle;
inputting the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle;
inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle;
and carrying out gait recognition processing on the target to be detected based on the similarity.
To sum up, the gait recognition method, apparatus, device, and medium based on the countermeasure network provided in the embodiments of the present application acquire video data of the target to be detected and apply a preset algorithm to segment and extract the video data, determining a gait energy atlas of the target to be detected that includes a gait energy map at any angle. The gait energy atlas at any view angle is then input into the pre-trained gait view angle conversion network to obtain a gait energy atlas at the target view angle, which is input into the pre-trained twin neural network model to calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at the standard view angle; gait recognition of the target to be detected is performed based on this similarity. Compared with the prior art, converting a gait energy image at any view angle into a gait energy atlas at the target view angle through the gait view angle conversion network and calculating the similarity through the twin neural network model allows accurate gait recognition of the target to be detected, improves the efficiency of identity recognition, and provides fast and accurate data support for applications such as pedestrian information query and identity tracking.
Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A gait recognition method based on a confrontation network is characterized by comprising the following steps:
acquiring video data of a target to be detected;
adopting a preset algorithm to carry out segmentation extraction processing on the video data, and determining a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises a gait energy map at any angle;
inputting the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle;
inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle;
and carrying out gait recognition processing on the target to be detected based on the similarity.
2. The method according to claim 1, wherein the step of determining the gait energy atlas of the target to be detected by performing segmentation and extraction processing on the video data by using a preset algorithm comprises:
carrying out segmentation extraction processing on the video data to obtain a gait binary image;
determining a centroid of the gait binary image and boundary points corresponding to the upper, lower, left and right directions;
taking the center of mass as a center, and taking the distance between the center of mass and the boundary point corresponding to each direction as a cutting distance to perform cutting processing to obtain a gait wheel binary image sequence;
and carrying out average value processing on the gait wheel binary image sequence in each period to determine a gait energy atlas.
3. The method of claim 1, wherein the gait perspective conversion network is constructed by:
acquiring a historical gait energy atlas at any view angle and a historical gait energy atlas at a target view angle;
mapping the historical gait energy atlas at any view angle and the historical gait energy atlas at a target view angle through an initial generator to obtain a mapped gait energy atlas at any view angle and a mapped gait energy atlas at a target view angle;
constructing a loss function through an initial discriminator based on the historical gait energy atlas at any view angle, the historical gait energy atlas at the target view angle, the mapped gait energy atlas at any view angle and the mapped gait energy atlas at the target view angle;
and training the initial discriminator and the initial generator according to the minimization of the loss function to obtain a gait view angle conversion network.
4. The method according to claim 1, wherein the twin neural network comprises two feature extraction networks and a discrimination network, and inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle comprises:
respectively inputting the gait energy atlas at the target view angle and the gait energy atlas at the standard angle into corresponding feature extraction networks for feature extraction to obtain a first image feature vector and a second image feature vector, wherein the first image feature vector is an image feature vector corresponding to the gait energy atlas at the target view angle, and the second image feature vector is an image feature vector corresponding to the gait energy atlas at the standard angle;
and judging the similarity between the gait energy image set of the target view angle and the gait energy image set of the standard angle through a discrimination network for the first image feature vector and the second image feature vector.
5. The method according to claim 4, wherein inputting the gait energy map of the target view and the gait energy map set of the standard angle into a corresponding feature extraction network for feature extraction to obtain a first image feature vector and a second image feature vector, comprises:
and performing feature extraction on the gait energy atlas at the target view angle and the gait energy atlas at the standard angle sequentially through convolution, pooling and normalization to obtain a first image feature vector and a second image feature vector.
6. The method according to claim 1, wherein the gait recognition processing is performed on the target to be detected based on the similarity, and the gait recognition processing comprises:
when the similarity is larger than a preset threshold value, determining that the gait energy atlas at the target view angle is a gait energy atlas at a standard gait view angle;
and extracting the characteristic vector of the gait energy atlas at the standard view angle, and carrying out gait recognition processing.
7. The method of claim 6, wherein extracting feature vectors of the gait energy map from the standard perspective comprises:
vectorizing the gait energy diagram at the standard view angle to obtain a processed gait energy diagram;
and performing feature dimension reduction processing on the processed gait energy diagram, and determining the feature vector of the gait energy diagram with the standard view angle.
8. A gait recognition device based on a countermeasure network, characterized in that the device comprises:
the acquisition module is used for acquiring video data of a target to be detected;
the determining module is used for carrying out segmentation extraction processing on the video data by adopting a preset algorithm to determine a gait energy atlas of the target to be detected, wherein the gait energy atlas comprises a gait energy map at any angle;
the first processing module is used for inputting the gait energy atlas at any view angle into a pre-trained gait view angle conversion network for processing to obtain a gait energy atlas at a target view angle;
the second processing module is used for inputting the gait energy atlas at the target view angle into a pre-trained twin neural network model to process and calculate the similarity between the gait energy atlas at the target view angle and the gait energy atlas at a standard view angle;
and the gait recognition module is used for carrying out gait recognition processing on the target to be detected based on the similarity.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor is configured to implement the gait recognition method based on the countermeasure network according to any one of claims 1-7 when executing the program.
10. A computer-readable storage medium having stored thereon a computer program for implementing the countermeasure network-based gait recognition method according to any one of claims 1 to 7.
CN202011615027.3A 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network Active CN112633222B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011615027.3A CN112633222B (en) 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network

Publications (2)

Publication Number Publication Date
CN112633222A true CN112633222A (en) 2021-04-09
CN112633222B CN112633222B (en) 2023-04-28

Family

ID=75286989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011615027.3A Active CN112633222B (en) 2020-12-30 2020-12-30 Gait recognition method, device, equipment and medium based on countermeasure network

Country Status (1)

Country Link
CN (1) CN112633222B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420737A (en) * 2021-08-23 2021-09-21 成都飞机工业(集团)有限责任公司 3D printing pattern recognition method based on convolutional neural network
CN114565970A (en) * 2022-01-27 2022-05-31 内蒙古工业大学 High-precision multi-angle behavior recognition method based on deep learning

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007219865A (en) * 2006-02-17 2007-08-30 Hitachi Ltd Abnormal behavior detection device
CN106580350A (en) * 2016-12-07 2017-04-26 中国民用航空总局第二研究所 Fatigue condition monitoring method and device
CN108681774A (en) * 2018-05-11 2018-10-19 电子科技大学 Based on the human body target tracking method for generating confrontation network negative sample enhancing
CN109726654A (en) * 2018-12-19 2019-05-07 河海大学 A kind of gait recognition method based on generation confrontation network
CN109886141A (en) * 2019-01-28 2019-06-14 同济大学 A kind of pedestrian based on uncertainty optimization discrimination method again
CN110070029A (en) * 2019-04-17 2019-07-30 北京易达图灵科技有限公司 A kind of gait recognition method and device
CN110276739A (en) * 2019-07-24 2019-09-24 中国科学技术大学 A kind of video jitter removing method based on deep learning
CN110570490A (en) * 2019-09-06 2019-12-13 北京航空航天大学 saliency image generation method and equipment
US20200017124A1 (en) * 2018-07-12 2020-01-16 Sf Motors, Inc. Adaptive driver monitoring for advanced driver-assistance systems
CN110765925A (en) * 2019-10-18 2020-02-07 河南大学 Carrier detection and gait recognition method based on improved twin neural network
CN111310668A (en) * 2020-02-18 2020-06-19 大连海事大学 Gait recognition method based on skeleton information
CN111462173A (en) * 2020-02-28 2020-07-28 大连理工大学人工智能大连研究院 Visual tracking method based on twin network discriminant feature learning
CN111639580A (en) * 2020-05-25 2020-09-08 浙江工商大学 Gait recognition method combining feature separation model and visual angle conversion model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XUERU BAI et al.: "Radar-Based Human Gait Recognition Using Dual-Channel Deep Convolutional Neural Network" *
朱应钊 et al.: "Current Status and Development Trends of Gait Recognition" *


Also Published As

Publication number Publication date
CN112633222B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
RU2431190C2 (en) Facial prominence recognition method and device
CN109711416B (en) Target identification method and device, computer equipment and storage medium
CN104599287B (en) Method for tracing object and device, object identifying method and device
CN112052831B (en) Method, device and computer storage medium for face detection
CN106133752A Eye gaze tracking
CN111062328B (en) Image processing method and device and intelligent robot
CN112633222B (en) Gait recognition method, device, equipment and medium based on countermeasure network
CN113569598A (en) Image processing method and image processing apparatus
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
CN112464690A (en) Living body identification method, living body identification device, electronic equipment and readable storage medium
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
Velasco-Mata et al. Using human pose information for handgun detection
CN114140880A (en) Gait recognition method and device
CN115620090A (en) Model training method, low-illumination target re-recognition method and device and terminal equipment
Liu et al. Iris recognition in visible spectrum based on multi-layer analogous convolution and collaborative representation
CN111259700B (en) Method and apparatus for generating gait recognition model
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN111814760B (en) Face recognition method and system
CN113706550A (en) Image scene recognition and model training method and device and computer equipment
CN115018886B (en) Motion trajectory identification method, device, equipment and medium
CN116311400A (en) Palm print image processing method, electronic device and storage medium
Barman et al. Person re-identification using overhead view fisheye lens cameras
Muddamsetty et al. Spatio-temporal saliency detection in dynamic scenes using local binary patterns
KR102299250B1 (en) Counting device and method using composite image data
CN116434316B (en) Identity recognition method, device, equipment and medium based on X86 industrial control main board

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant