CN117671297A - Pedestrian re-recognition method integrating interaction attributes

Pedestrian re-recognition method integrating interaction attributes

Info

Publication number
CN117671297A
Authority
CN
China
Prior art keywords
pedestrian
pedestrians
interaction
attribute
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410146124.4A
Other languages
Chinese (zh)
Inventor
涂宏斌
农欣悦
洪泉生
曾烁璇
魏铮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Jiaotong University
Original Assignee
East China Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Jiaotong University filed Critical East China Jiaotong University
Priority to CN202410146124.4A priority Critical patent/CN117671297A/en
Publication of CN117671297A publication Critical patent/CN117671297A/en
Pending legal-status Critical Current

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian re-identification method integrating interaction attributes, which comprises the following steps: constructing a pedestrian image database and dividing the pedestrian images in the database into small groups so as to identify the small groups in the pedestrian images; extracting the skeleton key points of the pedestrians in the pedestrian images, and interactively connecting the skeleton key points of the pedestrians within each small group so as to integrate them into an action skeleton map carrying interaction information. The invention fully exploits the relation information among the pedestrians under the same camera, performs small-group division, uses this information for interactive behavior recognition, further mines the interaction attributes of the pedestrians, and fuses these interaction attributes into the pedestrian re-identification method, thereby improving the efficiency and accuracy of re-identification.

Description

Pedestrian re-recognition method integrating interaction attributes
Technical Field
The invention relates to the technical field of computer vision, in particular to a pedestrian re-identification method integrating interaction attributes.
Background
Person re-identification (Re-ID), also known as pedestrian re-identification, is a technique that uses computer vision to determine whether a particular pedestrian is present in an image or video sequence. Traditional pedestrian attribute recognition mainly relies on hand-crafted features and well-designed classifiers; however, evaluation results on large datasets show that its performance falls far short of the requirements of practical applications. Most recent pedestrian attribute recognition methods adopt deep learning and have made great progress with its help. However, these approaches ignore the relations among pedestrians and their mutual interaction attributes, emphasizing only the appearance and walking-posture characteristics of individual pedestrians.
In existing pipelines, pedestrian detection is performed first, the image of a single pedestrian is cropped out, and attribute recognition is then carried out on the crop. This order completely ignores the interaction attributes among pedestrians. The attributes of an individual pedestrian are certainly important, but two or more pedestrians who have interacted with each other are more likely to appear together on screen again than unrelated persons. The pedestrian re-identification method integrating interaction attributes is therefore proposed as an improvement.
Disclosure of Invention
The invention aims to remedy the defects and problems existing in the background art, and provides a pedestrian re-identification method integrating interaction attributes.
According to a first aspect of the present invention, there is provided a pedestrian re-recognition method integrating interaction attributes, specifically including the following steps:
constructing a pedestrian image database, and dividing the pedestrian images in the database into small groups so as to identify the small groups in the pedestrian images;
extracting the skeleton key points of the pedestrians in the pedestrian image, and interactively connecting the skeleton key points of the pedestrians within each small group so as to integrate them into an action skeleton map with interaction information;
extracting the interaction information of the action skeleton map through a graph convolution network so as to obtain the interaction attributes of the pedestrians in the pedestrian image database, and extracting the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image database through a pre-trained deep residual network;
inputting the pedestrian image to be identified into the pre-trained deep residual network to extract the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified, and performing small-group division and interaction-attribute extraction on the pedestrian image to be identified;
comparing the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified with those of the pedestrian images in the database so as to match a first target pedestrian in each small group;
and re-identifying the remaining pedestrians in the pedestrian image to be identified according to their interaction attributes with the first target pedestrian, their global pedestrian identities and their appearance attributes.
The step of dividing the pedestrian images in the database into small groups so as to identify the small groups in the pedestrian images specifically comprises the following steps:
acquiring the distance correlation S_d between pedestrians;
acquiring the motion-direction correlation S_θ between pedestrians;
acquiring the motion-speed correlation S_v between pedestrians;
combining the distance correlation, the motion-direction correlation and the motion-speed correlation between pedestrians to obtain the intimacy degree S between pedestrians;
judging whether the intimacy degree between pedestrians is smaller than a first preset threshold; if so, the pedestrians form a small group in the pedestrian image.
Further, the distance correlation S_d = d / w, where d is the distance between the center points of the pedestrians' person bounding boxes and w is the sum of the widths of the two bounding boxes; the motion-direction correlation S_θ is determined by θ, the angle between the pedestrians' velocity directions; the motion-speed correlation S_v is determined by v, the movement speed of a pedestrian; and the intimacy degree between pedestrians is
S = a·S_d + b·S_θ + c·S_v,
where a, b and c are the weighting factors assigned to the distance, motion-direction and motion-speed correlations between pedestrians, respectively.
The step of extracting the skeleton key points of the pedestrians in the pedestrian image and interactively connecting the skeleton key points of the pedestrians within each small group so as to integrate them into an action skeleton map with interaction information comprises:
extracting the skeleton key points of the pedestrians in the pedestrian image through a yolov5 network;
joining physically interconnected skeleton key points by line segments, the line-segment connections of the skeleton key points comprising intra-frame single-person connections, intra-frame interaction connections and inter-frame connections, so that an action skeleton map with interaction information is obtained by integration;
wherein, for intra-frame interaction connection, only the skeleton key points of pedestrians belonging to the same small group are interconnected.
Further, the deep residual network is a ResNet50.
The step of comparing the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified with those of the pedestrian images in the database so as to match a first target pedestrian in each small group specifically comprises:
constructing a first loss function comprising a global identity loss and an appearance attribute loss, and calculating the total loss of the first loss function from the global pedestrian identities and appearance attributes of the pedestrians in the database images and in the pedestrian image to be identified;
judging whether the total loss of the first loss function is smaller than a second preset threshold; if so, the corresponding pedestrian in the pedestrian image to be identified is the first target pedestrian.
In a further scheme, the step of re-identifying the remaining pedestrians in the pedestrian image to be identified according to their interaction attributes with the first target pedestrian, their global pedestrian identities and their appearance attributes specifically comprises:
constructing a second loss function comprising an identity loss, an appearance attribute loss and an interaction attribute loss, and calculating the total loss of the second loss function from the global pedestrian identities and appearance attributes of the pedestrians in the database images and in the pedestrian image to be identified, together with their interaction attributes with the first target pedestrian;
judging whether the total loss of the second loss function is smaller than a third preset threshold; if so, the other pedestrians in the small group of the pedestrian image are identified as second target pedestrians.
Further, the total loss of the first loss function is
L1 = β·L_id + (1/M)·Σ L_attr,
and the total loss of the second loss function is
L2 = β·L_id + (1/M)·Σ L_attr + (1/N)·Σ L_int,
where β is the identity-loss weight coefficient, M is the number of appearance attributes and N is the number of interaction attributes; L_id is the global identity loss, L_attr is the appearance attribute loss and L_int is the interaction attribute loss. The labels y_id, y_attr and y_int denote, respectively, the global pedestrian identity of each pedestrian in a database image, the appearance attributes of each pedestrian in a database image, and the interaction attribute between a pedestrian in a database small group and the first target pedestrian; the predictions ŷ_id, ŷ_attr and ŷ_int denote the global pedestrian identity and appearance attributes of each pedestrian in the pedestrian image to be identified and the interaction attribute between the other pedestrians in its small group and the first target pedestrian. The parameters γ and α used within the individual loss terms are constants.
According to a second aspect of the present invention, there is provided an electronic device comprising a processor and a memory, the processor being configured to implement the steps of the pedestrian re-recognition method integrating interaction attributes described above when executing a computer program stored in the memory.
According to a third aspect of the present invention, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the pedestrian re-recognition method integrating interaction attributes described above.
Compared with the prior art, the invention has the following beneficial effects. The pedestrian images in the database and the pedestrian image to be identified are divided into small groups so as to identify the small groups in the pedestrian images. For intra-frame interaction connection, only the target pedestrians within the same small group need to be connected, which reduces the unnecessary computation of interactive behavior recognition and at the same time improves its accuracy. After the interaction attributes among the pedestrians in each small group are extracted, these interaction attributes are finally fused to re-identify the pedestrians in the image to be identified, improving the efficiency and accuracy of re-identification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a pedestrian re-recognition method integrating interaction attributes;
fig. 2 is an overall route diagram of a pedestrian re-recognition method integrating interaction attributes.
Detailed Description
In order that the objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Example 1
Referring to fig. 1, the invention provides a pedestrian re-recognition method integrating interaction attributes, which specifically includes the following steps:
s1, constructing a pedestrian image database, and dividing small groups of pedestrian images in the database to identify the small groups of the pedestrian images;
specifically, the pedestrian image related in the step and the pedestrian image to be identified related later are obtained by photographing through a lens, the pedestrian image database is constructed through a plurality of pedestrian images, the pedestrian images in the database are used as a comparison group for comparing with the pedestrian image to be identified, and the pedestrian in the pedestrian image to be identified is re-identified.
Social behavior analysis suggests that pedestrians unconsciously organize the space around them according to their degree of intimacy. For example, the shorter the distance between two people, the higher their intimacy, and the smaller the relative speed between pedestrians of the same small group. Therefore, the pedestrian images in the database can be divided into small groups according to the distance between pedestrians, their movement speeds and their movement directions, so as to identify the small groups in the pedestrian images. The small-group division of the pedestrian images in the database specifically comprises the following steps:
step S11, obtaining the distance correlation between pedestrians;
wherein the distance correlation between pedestriansThe following formula is adopted for expression:
,/>distance of center point of character boundary box representing pedestrian,/->Is a pedestrianThe sum of the widths of the bounding boxes; the video or picture data obtained by shot shooting is input into a yolov5 network, a character boundary box can be generated, and the distance of the center point of the character boundary box and the width of the boundary box are calculated.
In this embodiment, the distance between center points can be calculated using Euclidean distanceOf course, other methods may be used to calculate the distance between the center points, which are not specifically limited in this application and are within the scope of this application.
Step S12, acquiring the motion-direction correlation between pedestrians.
The motion-direction correlation S_θ between pedestrians is expressed in terms of θ, the angle between the pedestrians' velocity directions. In this embodiment, each pedestrian's velocity direction may be taken as the line connecting the skeleton key point at the pedestrian's center of gravity across consecutive frames, and θ as the angle between these two lines.
Step S13, acquiring the motion-speed correlation between pedestrians.
The motion-speed correlation S_v between pedestrians is expressed in terms of v, the movement speed of a pedestrian. In this embodiment, the movement speed of a pedestrian may be represented by the ratio of the displacement s of the skeleton key point at the pedestrian's center of gravity between consecutive frames to the inter-frame time interval t, i.e. v = s / t; the displacement between the center-of-gravity key points in consecutive frames is likewise calculated as a Euclidean distance.
Step S14, combining the distance correlation, the motion-direction correlation and the motion-speed correlation between pedestrians to judge whether the pedestrians belong to the same small group.
Combining the distance correlation, the motion-direction correlation and the motion-speed correlation between pedestrians yields the discriminant for judging whether two pedestrians belong to the same small group:
S = a·S_d + b·S_θ + c·S_v,
where S is the intimacy degree between the pedestrians, S_d is the distance correlation, S_θ is the motion-direction correlation, S_v is the motion-speed correlation, and a, b and c are the weighting factors assigned to the distance, motion-direction and motion-speed terms, respectively. In small-group division based on intimacy, the distance between pedestrians is the most influential factor, the angle between the pedestrians' movement directions reflects intimacy only to a small degree, and the relative speed between pedestrians has a certain reference value for the division; assigning weighting factors accordingly makes the small-group division discriminant more reasonable.
Further, it is judged whether the intimacy degree between the pedestrians is smaller than a first preset threshold; if so, the pedestrians form a small group in the pedestrian image, as sketched below.
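The small-group division of steps S11 to S14 can be summarized in the following minimal Python sketch. It assumes that the person bounding boxes and the center-of-gravity velocities have already been extracted for one frame; the weighting factors a, b, c, the threshold value and the greedy merging strategy are illustrative assumptions, not values or procedures fixed by this application.

```python
import numpy as np

def intimacy(box_i, box_j, vel_i, vel_j, a=0.5, b=0.2, c=0.3):
    """Intimacy degree S = a*S_d + b*S_theta + c*S_v between two pedestrians.

    box_*: (x_center, y_center, width) of the person bounding box.
    vel_*: 2-D velocity of the center-of-gravity key point (pixels per frame).
    The weights are placeholders; the application only states that distance
    should carry the largest weight.
    """
    # Distance correlation: center-point distance normalized by the width sum.
    d = np.linalg.norm(np.asarray(box_i[:2]) - np.asarray(box_j[:2]))
    s_d = d / (box_i[2] + box_j[2])

    # Motion-direction correlation: angle between the two velocity directions.
    cos_t = np.dot(vel_i, vel_j) / (np.linalg.norm(vel_i) * np.linalg.norm(vel_j) + 1e-8)
    s_theta = np.arccos(np.clip(cos_t, -1.0, 1.0))

    # Motion-speed correlation: difference of the two movement speeds.
    s_v = abs(np.linalg.norm(vel_i) - np.linalg.norm(vel_j))

    return a * s_d + b * s_theta + c * s_v

def divide_small_groups(boxes, velocities, threshold=1.0):
    """Pedestrians i and j are merged into the same small group whenever
    their intimacy degree is below the first preset threshold."""
    groups = [{i} for i in range(len(boxes))]
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if intimacy(boxes[i], boxes[j], velocities[i], velocities[j]) < threshold:
                gi = next(g for g in groups if i in g)
                gj = next(g for g in groups if j in g)
                if gi is not gj:
                    gi |= gj
                    groups.remove(gj)
    return groups
```

A union-find structure would be more efficient for crowded scenes; the pairwise merge above is kept only to keep the sketch short.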
S2, extracting skeleton key points of pedestrians in the pedestrian image, and carrying out interactive connection on the skeleton key points of the pedestrians in the small group to integrate and obtain an action skeleton map with interactive information;
specifically, after a small group in a pedestrian image is identified, skeleton key points of pedestrians in the pedestrian image are extracted through a yolov5 network, skeleton key points connected with each other through line segments, and line segment connection of the skeleton key points comprises intra-frame single person connection, intra-frame interaction connection and inter-frame connection, so that action skeleton diagrams with interaction information are integrated.
(1) Intra-frame single-person connection:
the intra-frame single person connection is to extract skeleton key points of pedestrians firstly, the key points of skeleton interconnection are connected by line segments, not all key points which are not connected, and as the interaction between hands and heads is key in motion recognition, an external connection is also established between the hands and the heads, and the external connection is also established between the same hands and feet. Therefore, the connection relationship between the skeletal key points is:
wherein,parameters representing bone key points m, n, < ->Representing the interconnections between skeletal key points, i.e. the actual connections,/->Representing external connections between skeletal keypoints, is set for convenience in recording more details of the action.
(2) Intra-frame interaction connection:
in the step S1, the pedestrian image is divided into small groups, so as to reduce unnecessary calculation amount of the interactive behavior recognition and improve accuracy of the interactive behavior recognition, and only the interactive behaviors in the small groups are considered in the interactive behavior recognition; therefore, for intra-frame interactive connection, the method only interconnects skeleton key points of pedestrians in a small group so as to integrate action skeleton diagrams with interactive information, and the interactive information of actions is extracted through a graph convolution network to extract interactive attributes among the pedestrians.
(3) Inter-frame connection:
since each joint is broken in the time domain, in this embodiment, frames are allowed to be transmittedBone key points in (2) and the frame connected to it in the previous frame +.>And the next frame->The corresponding adjacent time domain skeletal keypoints are connected, so that the receptive field is enlarged by using more adjacent joints, thereby helping to learn the time domain change information. It should be noted that the connection between frames only involves a single connection, including an actual connection of a single connection and an external connection.
In addition, the action skeleton map can be formed from the skeleton key points obtained above and the connections between them, and the graph convolution network performs its convolution operation on the Laplacian matrix of the action skeleton map so as to extract the interaction attributes between pedestrians. The Laplacian matrix is L = D - A, where D is the degree matrix of the action skeleton map and A is its adjacency matrix.
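A minimal sketch of a graph convolution over this action skeleton map is shown below. It uses the combinatorial Laplacian L = D - A mentioned above as the propagation matrix of each layer; the layer sizes, the number of interaction attributes and the mean-pooling read-out are illustrative assumptions (many graph convolution variants use a normalized adjacency instead).

```python
import torch
import torch.nn as nn

class SkeletonGCNLayer(nn.Module):
    """One graph-convolution step over the action skeleton map.

    x:   (num_nodes, in_dim) node features, e.g. the 2-D coordinates and
         confidence of every joint of every pedestrian in the group.
    adj: (num_nodes, num_nodes) adjacency matrix A of the action skeleton map.
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = torch.diag(adj.sum(dim=1))   # degree matrix D
        laplacian = deg - adj              # L = D - A
        return torch.relu(self.linear(laplacian @ x))

class InteractionAttributeHead(nn.Module):
    """Two stacked layers followed by mean pooling over all joints, producing
    one logit per interaction attribute."""
    def __init__(self, in_dim=3, hidden=64, num_interaction_attrs=8):
        super().__init__()
        self.gcn1 = SkeletonGCNLayer(in_dim, hidden)
        self.gcn2 = SkeletonGCNLayer(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_interaction_attrs)

    def forward(self, x, adj):
        h = self.gcn2(self.gcn1(x, adj), adj)
        return self.classifier(h.mean(dim=0))
```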
It can be understood that intra-frame interaction connection is similar to the external connections of the intra-frame single-person connection: it records the interaction between skeleton key points when an interactive behavior occurs. Conventional interactive behavior recognition records the actions of each individual separately and then merges the information of the two parties, which ignores the distance factor of many interactive behaviors, even though distance is normally one of the key factors. Owing to the intra-frame interaction connections, two independent skeleton graphs are joined through their joints into a single action skeleton map with interaction information, and the interaction information of the actions can be extracted through a graph convolution network. In this application, according to the result of the small-group division, intra-frame interaction connections are only required between target pedestrians within the same small group, which reduces the unnecessary computation of interactive behavior recognition while improving its accuracy.
S3, extracting the interaction information of the action skeleton map through a graph convolution network so as to obtain the interaction attributes of the pedestrians in the pedestrian image database, and extracting the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image database through a pre-trained deep residual network;
As described above, after the action skeleton map with interaction information has been integrated, in this embodiment the interaction information of the action skeleton map is further extracted through the graph convolution network so as to obtain the interaction attributes of the pedestrians in the pedestrian image database. Compared with a conventional convolutional neural network, the graph convolution network extends the convolution operation from Euclidean-structured image data to non-Euclidean-structured graph data.
In addition, the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image database are extracted through a pre-trained deep residual network; they correspond to the maximum probability values output by the different fully connected layers of the deep residual network followed by the softmax activation function. The appearance attributes include, for example, whether the pedestrian wears a hat, long or short sleeves, sunglasses, and long or short hair; in implementation, each appearance attribute can be encoded as 1 or 0 (e.g., long sleeve versus short sleeve, long hair versus short hair), and the global pedestrian identity of each pedestrian in a pedestrian image can be encoded as 1 or 0 according to whether it is the target pedestrian to be identified.
In this embodiment, the deep residual network used for feature extraction is ResNet50.
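A minimal sketch of this identity and appearance-attribute extractor is given below, using torchvision's ResNet50 as the pre-trained deep residual network with separate fully connected heads. The numbers of identities and attributes, and the use of sigmoid for the binary 0/1 attributes, are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class IdentityAttributeNet(nn.Module):
    """ResNet50 backbone with one head for the global pedestrian identity and
    one head for the appearance attributes (hat, sleeve length, hair, ...)."""
    def __init__(self, num_identities=751, num_appearance_attrs=6):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc
        self.id_head = nn.Linear(2048, num_identities)
        self.attr_head = nn.Linear(2048, num_appearance_attrs)

    def forward(self, img):
        f = self.features(img).flatten(1)                 # (batch, 2048) global feature
        id_prob = torch.softmax(self.id_head(f), dim=1)   # identity probabilities
        attr_prob = torch.sigmoid(self.attr_head(f))      # each attribute in [0, 1]
        return id_prob, attr_prob
```

As described above, the predicted identity and attributes are then taken as the outputs with the maximum probability (or thresholded to 0/1).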
S4, inputting the pedestrian image to be identified into the pre-trained deep residual network to extract the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified, and performing small-group division and interaction-attribute extraction on the pedestrian image to be identified;
as described above, the pedestrian image and the pedestrian image to be identified are obtained by taking a photograph through a lens, the pedestrian image is input into a depth residual error network trained in advance, and the global pedestrian identity and appearance attribute of the pedestrian in the pedestrian image are extracted; therefore, the same method can be adopted to input the pedestrian image to be identified into a pre-trained depth residual error network, and the global pedestrian identity and appearance attribute of the pedestrian in the pedestrian image to be identified are extracted. And the same method can also be adopted to divide the small groups of the pedestrian images to be identified, and based on intra-frame interactive connection of skeleton key points corresponding to the small group division result, the obtained action skeleton diagram is input into a diagram convolution network to extract the interaction attribute between pedestrians in the small groups of the pedestrian images to be identified, and the detailed operation process is not repeated here.
S5, comparing the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified with those of the pedestrian images in the database so as to match a first target pedestrian in each small group;
it should be noted that, at this time, only the interaction attribute between pedestrians in the pedestrian image in the database and the interaction attribute between pedestrians in the pedestrian image to be identified are clear, but it is not clear whether any pedestrian in the pedestrian image in the database has the interaction attribute with the pedestrians in each small group in the pedestrian image to be identified, so the application compares the global pedestrian identity and the appearance attribute to obtain the first target pedestrian in each small group in a matching way.
Specifically, the method specifically comprises the following steps:
step S51, constructing a first loss function, wherein the first loss function comprises identity loss and appearance attribute loss, and calculating to obtain the total loss of the first loss function according to the global pedestrian identity and appearance attribute of pedestrians in pedestrian images in a database and pedestrian images to be identified;
wherein the total loss of the first loss function is
L1 = β·L_id + (1/M)·Σ L_attr,
where β is the identity-loss weight coefficient and M is the number of appearance attributes; L_id is the global identity loss and L_attr is the appearance attribute loss. The labels y_id and y_attr denote the global pedestrian identity and the appearance attributes of each pedestrian in a database image, and the predictions ŷ_id and ŷ_attr denote the global pedestrian identity and the appearance attributes of each pedestrian in the pedestrian image to be identified; the parameters γ and α used within the loss terms are constants.
Step S52, judging whether the total loss of the first loss function is smaller than a second preset threshold; if so, the corresponding pedestrian in the pedestrian image to be identified is determined to be the first target pedestrian.
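Under the assumption that each individual term compares the database label y with the prediction ŷ for the image to be identified through a cross-entropy-style loss (the exact per-term form with its constants γ and α is not reproduced here), steps S51 and S52 could be sketched as follows; the threshold value is a placeholder.

```python
import torch
import torch.nn.functional as F

def first_loss(y_id, y_attr, pred_id, pred_attr, beta=1.0):
    """Total first loss L1 = beta * L_id + (1/M) * sum of appearance-attribute losses.

    y_id:      database identity label (class index tensor).
    y_attr:    database appearance attributes, shape (M,), values 0/1.
    pred_id:   predicted identity probabilities for the query pedestrian, shape (C,).
    pred_attr: predicted appearance-attribute probabilities, shape (M,).
    """
    l_id = F.nll_loss(torch.log(pred_id.unsqueeze(0) + 1e-8), y_id.unsqueeze(0))
    l_attr = F.binary_cross_entropy(pred_attr, y_attr.float())  # mean over the M attributes
    return beta * l_id + l_attr

def is_first_target(y_id, y_attr, pred_id, pred_attr, second_threshold=0.5):
    """A query pedestrian is accepted as the first target pedestrian of its small
    group when the total first loss falls below the second preset threshold."""
    return first_loss(y_id, y_attr, pred_id, pred_attr) < second_threshold
```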
S6, re-identifying the remaining pedestrians in the pedestrian image to be identified according to their interaction attributes with the first target pedestrian, their global pedestrian identities and their appearance attributes;
it should be noted that, after the first target pedestrians in each small group in the pedestrian image to be identified are determined, the mutual interaction attribute of pedestrians in the small groups in the pedestrian image to be identified is clear; therefore, the interaction attribute of other pedestrians in the small group in the pedestrian image to be identified and the first target pedestrians is clear, meanwhile, the first target pedestrians are matched with pedestrians in the pedestrian image in the database, and therefore the interaction attribute of the pedestrians in the pedestrian image in the database and the first target pedestrians is clear, and therefore the step is to compare the identity, the appearance attribute and the interaction attribute of the other pedestrians in the small group, and to re-identify the other pedestrians in the small group in the pedestrian image to be identified, so that the re-identification of all pedestrians in the pedestrian image to be identified is completed.
Specifically, the method specifically comprises the following steps:
step S61, constructing a second loss function, wherein the second loss function comprises identity loss, appearance attribute loss and interaction attribute loss, and calculating to obtain the total loss of the second loss function according to the global pedestrian identity and appearance attribute of pedestrians in the pedestrian images and the pedestrian images to be identified in the database and the interaction attribute between the pedestrian images and the first target pedestrian;
wherein the total loss of the second loss function is
L2 = β·L_id + (1/M)·Σ L_attr + (1/N)·Σ L_int,
where β is the identity-loss weight coefficient, M is the number of appearance attributes and N is the number of interaction attributes; L_id is the global identity loss, L_attr is the appearance attribute loss and L_int is the interaction attribute loss. The labels y_id, y_attr and y_int denote the global pedestrian identity of each pedestrian in a database image, the appearance attributes of each pedestrian in a database image, and the interaction attribute between a pedestrian in a database small group and the first target pedestrian; the predictions ŷ_id, ŷ_attr and ŷ_int denote the global pedestrian identity and appearance attributes of each pedestrian in the pedestrian image to be identified and the interaction attribute between the other pedestrians in its small group and the first target pedestrian. The parameters γ and α are constants.
Step S62, judging whether the total loss of the second loss function is smaller than a third preset threshold; if so, the other pedestrians in the small group of the pedestrian image to be identified are identified as second target pedestrians.
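Steps S61 and S62 mirror the first stage with an additional averaged interaction-attribute term; the same assumptions about the per-term loss form and the threshold value apply to the sketch below.

```python
import torch
import torch.nn.functional as F

def second_loss(y_id, y_attr, y_int, pred_id, pred_attr, pred_int, beta=1.0):
    """Total second loss L2 = beta*L_id + (1/M)*sum L_attr + (1/N)*sum L_int.

    y_int / pred_int: interaction attributes with the first target pedestrian,
    shape (N,), for the database pedestrian and the query pedestrian.
    """
    l_id = F.nll_loss(torch.log(pred_id.unsqueeze(0) + 1e-8), y_id.unsqueeze(0))
    l_attr = F.binary_cross_entropy(pred_attr, y_attr.float())  # mean over M attributes
    l_int = F.binary_cross_entropy(pred_int, y_int.float())     # mean over N attributes
    return beta * l_id + l_attr + l_int

def is_second_target(y_id, y_attr, y_int, pred_id, pred_attr, pred_int, third_threshold=0.5):
    """The remaining pedestrians of a small group are re-identified as second
    target pedestrians when the total second loss is below the third threshold."""
    return second_loss(y_id, y_attr, y_int, pred_id, pred_attr, pred_int) < third_threshold
```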
it should be noted that, the first target pedestrian and the second target pedestrian are target pedestrians, only the order of completing the re-recognition is different, the first target pedestrian is the first target pedestrian re-recognized in the small group, the first target pedestrian is re-recognized first, and then the second target pedestrian is re-recognized based on the first target pedestrian, so that the re-recognition of the pedestrian in the pedestrian image to be recognized is completed. It can be understood that the interactive attribute is fused with other pedestrians in the small group to conduct pedestrian re-recognition, and efficiency and accuracy of re-recognition are improved.
In summary, the invention provides a pedestrian re-identification method integrating interaction attributes. The pedestrian images in the database and the pedestrian image to be identified are divided into small groups so as to identify the small groups in the pedestrian images; for intra-frame interaction connection, only the target pedestrians within the same small group need to be connected, which reduces the unnecessary computation of interactive behavior recognition while improving its accuracy. After the interaction attributes among the pedestrians in each small group are extracted, these interaction attributes are finally fused to re-identify the pedestrians in the image to be identified, improving the efficiency and accuracy of re-identification.
Example 2
The present invention provides an electronic device comprising a processor and a memory, the processor being configured to implement the steps of a pedestrian re-recognition method incorporating interaction attributes as described in embodiment 1 when executing a computer program stored in the memory.
Example 3
The present invention provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a pedestrian re-recognition method incorporating interaction attributes as described in embodiment 1.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the invention.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples.
It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application for the embodiment. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. The pedestrian re-identification method integrating the interaction attribute is characterized by comprising the following steps of:
constructing a pedestrian image database, and dividing the pedestrian images in the database into small groups so as to identify the small groups in the pedestrian images;
extracting the skeleton key points of the pedestrians in the pedestrian image, and interactively connecting the skeleton key points of the pedestrians within each small group so as to integrate them into an action skeleton map with interaction information;
extracting the interaction information of the action skeleton map through a graph convolution network so as to obtain the interaction attributes of the pedestrians in the pedestrian image database, and extracting the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image database through a pre-trained deep residual network;
inputting the pedestrian image to be identified into the pre-trained deep residual network to extract the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified, and performing small-group division and interaction-attribute extraction on the pedestrian image to be identified;
comparing the global pedestrian identities and appearance attributes of the pedestrians in the pedestrian image to be identified with those of the pedestrian images in the database so as to match a first target pedestrian in each small group;
and re-identifying the remaining pedestrians in the pedestrian image to be identified according to their interaction attributes with the first target pedestrian, their global pedestrian identities and their appearance attributes.
2. The pedestrian re-recognition method integrating interaction attributes according to claim 1, wherein the step of dividing the pedestrian images in the database into small groups so as to identify the small groups in the pedestrian images specifically comprises:
acquiring the distance correlation S_d between pedestrians;
acquiring the motion-direction correlation S_θ between pedestrians;
acquiring the motion-speed correlation S_v between pedestrians;
combining the distance correlation, the motion-direction correlation and the motion-speed correlation between pedestrians to obtain the intimacy degree S between pedestrians;
judging whether the intimacy degree between pedestrians is smaller than a first preset threshold; if so, the pedestrians form a small group in the pedestrian image.
3. The pedestrian re-recognition method integrating interaction attributes as claimed in claim 2, wherein: the distance correlation S_d = d / w, where d is the distance between the center points of the pedestrians' person bounding boxes and w is the sum of the widths of the two bounding boxes; the motion-direction correlation S_θ is determined by θ, the angle between the pedestrians' velocity directions; the motion-speed correlation S_v is determined by v, the movement speed of a pedestrian; and the intimacy degree between pedestrians is
S = a·S_d + b·S_θ + c·S_v,
where a, b and c are the weighting factors assigned to the distance, motion-direction and motion-speed correlations between pedestrians, respectively.
4. The pedestrian re-recognition method integrating interaction attributes as claimed in claim 1, wherein the step of extracting the skeleton key points of the pedestrians in the pedestrian image and interactively connecting the skeleton key points of the pedestrians within each small group so as to integrate them into an action skeleton map with interaction information comprises:
extracting the skeleton key points of the pedestrians in the pedestrian image through a yolov5 network;
joining physically interconnected skeleton key points by line segments, the line-segment connections of the skeleton key points comprising intra-frame single-person connections, intra-frame interaction connections and inter-frame connections, so that an action skeleton map with interaction information is obtained by integration;
wherein, for intra-frame interaction connection, only the skeleton key points of pedestrians belonging to the same small group are interconnected.
5. The pedestrian re-recognition method integrating interaction attributes as claimed in claim 1, wherein: the deep residual network is a ResNet50.
6. The pedestrian re-recognition method integrating interaction attributes as claimed in claim 1, wherein: the step of comparing the global pedestrian identity and appearance attribute of the pedestrian in the pedestrian image to be identified with the global pedestrian identity and appearance attribute of the pedestrian image in the database to obtain a first target pedestrian in each small group by matching specifically comprises the following steps:
constructing a first loss function, wherein the first loss function comprises global identity loss and appearance attribute loss, and calculating to obtain the total loss of the first loss function according to the global pedestrian identity and appearance attribute of pedestrians in pedestrian images and pedestrian images to be identified in a database;
and judging whether the total loss of the first loss function is smaller than a second preset threshold value, if so, determining that the corresponding pedestrian in the pedestrian image to be identified is the first target pedestrian.
7. The pedestrian re-recognition method integrating interaction attributes as claimed in claim 6, wherein: the step of re-identifying the remaining pedestrians in the pedestrian image to be identified according to their interaction attributes with the first target pedestrian, their global pedestrian identities and their appearance attributes specifically comprises:
constructing a second loss function, wherein the second loss function comprises identity loss, appearance attribute loss and interaction attribute loss, and calculating to obtain the total loss of the second loss function according to the global pedestrian identity and appearance attribute of pedestrians in pedestrian images in a database and pedestrian images to be identified and the interaction attribute between the pedestrian images and the first target pedestrians;
judging whether the total loss of the second loss function is smaller than a third preset threshold; if so, the other pedestrians in the small group of the pedestrian image are identified as second target pedestrians.
8. The pedestrian re-recognition method integrating interaction attributes as claimed in claim 7, wherein:
the total loss of the first loss function is
L1 = β·L_id + (1/M)·Σ L_attr,
and the total loss of the second loss function is
L2 = β·L_id + (1/M)·Σ L_attr + (1/N)·Σ L_int,
where β is the identity-loss weight coefficient, M is the number of appearance attributes and N is the number of interaction attributes; L_id is the global identity loss, L_attr is the appearance attribute loss and L_int is the interaction attribute loss. The labels y_id, y_attr and y_int denote, respectively, the global pedestrian identity of each pedestrian in a database image, the appearance attributes of each pedestrian in a database image, and the interaction attribute between a pedestrian in a database small group and the first target pedestrian; the predictions ŷ_id, ŷ_attr and ŷ_int denote the global pedestrian identity and appearance attributes of each pedestrian in the pedestrian image to be identified and the interaction attribute between the other pedestrians in its small group and the first target pedestrian. The parameters γ and α used within the individual loss terms are constants.
9. An electronic device comprising a processor and a memory, the processor being configured to implement the steps of a pedestrian re-recognition method incorporating interaction attributes as claimed in any one of claims 1 to 8 when executing a computer program stored in the memory.
10. A readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steps of a pedestrian re-recognition method incorporating interaction properties according to any one of claims 1 to 8.
CN202410146124.4A 2024-02-02 2024-02-02 Pedestrian re-recognition method integrating interaction attributes Pending CN117671297A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410146124.4A CN117671297A (en) 2024-02-02 2024-02-02 Pedestrian re-recognition method integrating interaction attributes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410146124.4A CN117671297A (en) 2024-02-02 2024-02-02 Pedestrian re-recognition method integrating interaction attributes

Publications (1)

Publication Number Publication Date
CN117671297A true CN117671297A (en) 2024-03-08

Family

ID=90086698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410146124.4A Pending CN117671297A (en) 2024-02-02 2024-02-02 Pedestrian re-recognition method integrating interaction attributes

Country Status (1)

Country Link
CN (1) CN117671297A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210035694A1 (en) * 2019-07-29 2021-02-04 Case Western Reserve University Population-specific prediction of prostate cancer recurrence based on stromal morphology features
CN113221770A (en) * 2021-05-18 2021-08-06 青岛根尖智能科技有限公司 Cross-domain pedestrian re-identification method and system based on multi-feature hybrid learning
CN113963374A (en) * 2021-10-19 2022-01-21 中国石油大学(华东) Pedestrian attribute identification method based on multi-level features and identity information assistance
CN114511870A (en) * 2020-10-27 2022-05-17 天津科技大学 Pedestrian attribute information extraction and re-identification method combined with graph convolution neural network
US20220319209A1 (en) * 2019-09-29 2022-10-06 Shenzhen Yuntianlifei Technolog Co., Ltd. Method and apparatus for labeling human body completeness data, and terminal device
CN115359566A (en) * 2022-08-23 2022-11-18 深圳市赛为智能股份有限公司 Human behavior identification method, device and equipment based on key points and optical flow
US20230162522A1 (en) * 2022-07-29 2023-05-25 Nanjing University Of Posts And Telecommunications Person re-identification method of integrating global features and ladder-shaped local features and device thereof
CN116311377A (en) * 2023-03-29 2023-06-23 河南大学 Method and system for re-identifying clothing changing pedestrians based on relationship between images
CN117031462A (en) * 2023-08-07 2023-11-10 立讯精密工业股份有限公司 Object-based distance monitoring method, device, system and medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FANTA CAMARA et al.: "Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior", IEEE, 28 July 2020 (2020-07-28), pages 1-28 *
贾熹滨 et al.: "Multi-scale feature fusion network for person re-identification" (行人再识别中的多尺度特征融合网络), 北京工业大学学报 (Journal of Beijing University of Technology), no. 07, 10 July 2020 (2020-07-10), pages 74-80 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination