CN112001251A - Pedestrian re-identification method and system based on combination of human body analysis and clothing color - Google Patents

Info

Publication number
CN112001251A
Authority
CN
China
Prior art keywords
pedestrian
target pedestrian
image
color
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010711107.2A
Other languages
Chinese (zh)
Inventor
宋勇
许金辉
李贻斌
李彩虹
庞豹
许庆阳
袁宪锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202010711107.2A priority Critical patent/CN112001251A/en
Publication of CN112001251A publication Critical patent/CN112001251A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Abstract

The application discloses a pedestrian re-identification method and system based on the combination of human body analysis and clothing color, comprising the following steps: acquiring an image of the target pedestrian to be recognized; inputting the image of the target pedestrian to be recognized into a pre-constructed human body analytical model and outputting the body part categories of the target pedestrian to be recognized; extracting color features of each body part category from the image of the target pedestrian to be recognized; and identifying the target pedestrian from the candidate pictures according to the extracted color features of the body parts of each category.

Description

Pedestrian re-identification method and system based on combination of human body analysis and clothing color
Technical Field
The application relates to the technical field of computer vision, in particular to a pedestrian re-identification method and system based on combination of human body analysis and clothing color.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Pedestrian re-identification (ReID) is the task of matching pedestrians across a multi-camera network with non-overlapping fields of view: after a specific pedestrian is captured by one surveillance camera, a computer uses that image as a probe to search whether the same pedestrian appears in other cameras. The problem is therefore also called pedestrian search in a non-overlapping camera network, or cross-camera tracking. Zajdel et al. introduced pedestrian re-identification in multi-camera tracking in 2005, and after Gheissari et al. first treated it as an independent vision task at CVPR 2006, research results began to emerge in abundance.
Face recognition technology has matured in recent years and is applied in many scenes and products, but it can only use the face information of a human body and discards other important information such as clothing, posture and behavior. In addition, it requires a clear photograph of the face, a requirement that cannot be met in many scenes (lowered head, back view, blurred figure, cap occlusion, and so on). Cross-camera tracking can make up for these shortcomings of face recognition: pedestrian re-identification can identify the searched pedestrian according to information such as the accessories, appearance, posture and dress of the tracked pedestrian. As the security industry has received more and more national attention in recent years, cross-camera tracking has become an important research direction in the security field, which also reflects the gradually improving cognitive level of artificial intelligence.
Early pedestrian re-identification research relied on hand-crafted image features such as color and texture. Since high-dimensional visual features cannot capture invariant factors when samples change, the most critical component of these models is distance measurement, that is, obtaining a discriminant feature of a Region Of Interest (ROI) through metric learning. However, in pedestrian re-identification algorithms based on manual features, metric learning and feature extraction are performed separately, so the information shared between the two steps cannot be exploited and the best performance cannot be obtained.
At present, a large number of pedestrian re-identification studies are based on deep learning. Zhong Zhun's team at Xiamen University, with Zheng Liang participating in the supervision, accomplished image style migration through CycleGAN, adding to the training set more diverse samples that capture the style differences between cameras; this alleviates the data scarcity problem in individual identification and learns features that are invariant across different cameras. Gao Wen's team at Peking University proposed a generative adversarial network, PTGAN, for ReID, which performs background migration of pedestrian pictures across different ReID data sets, i.e., it keeps the foreground containing the pedestrian unchanged while converting the background into the background style of a specific data set. Sun Zhenan's group at the Institute of Automation, Chinese Academy of Sciences, mainly addresses the partial ReID problem and studies re-identification under occlusion; their algorithm reconstructs the spatial domain of the image through a Convolutional Neural Network (CNN) and keeps the output spatial feature map consistent with the input image size. Another group at the Institute of Automation proposed solving the nighttime pedestrian re-identification problem by joint pixel and feature alignment; their AlignGAN method has obvious advantages over other methods.
In the process of implementing the present application, the inventors found that the prior art has the following technical problems:
Although researchers have achieved many successful results in the direction of pedestrian re-identification, deep learning still faces many problems when deployed in practice. For example, in an actual scene, the clothing of pedestrians is affected by conditions such as illumination, angle, occlusion and definition, and traditional methods find it very difficult to identify pedestrian clothing and its related attributes.
Disclosure of Invention
In order to overcome the technical defect that deep learning performs poorly on traditional color feature recognition, the application provides a pedestrian re-recognition method and system based on the combination of human body analysis and clothing color.
in a first aspect, the application provides a pedestrian re-identification method based on human body analysis and clothes color combination;
the pedestrian re-identification method based on the combination of human body analysis and clothing color comprises the following steps:
acquiring a target pedestrian image to be identified;
inputting an image of a target pedestrian to be recognized into a pre-constructed human body analytical model, and outputting the body part category of the target pedestrian to be recognized;
extracting color features of body parts of various categories from an image of a target pedestrian to be recognized;
and identifying the target pedestrian from the candidate picture according to the extracted color features of the body parts of the various categories.
In a second aspect, the present application provides a pedestrian re-identification system based on human body analysis in combination with clothing color;
pedestrian re-identification system based on human body analysis and clothes color combination includes:
an acquisition module configured to: acquiring a target pedestrian image to be identified;
a category classification module configured to: inputting an image of a target pedestrian to be recognized into a pre-constructed human body analytical model, and outputting the body part category of the target pedestrian to be recognized;
a color feature extraction module configured to: extracting color features of body parts of various categories from an image of a target pedestrian to be recognized;
an output module configured to: and identifying the target pedestrian from the candidate picture according to the extracted color features of the body parts of the various categories.
In a third aspect, the present application further provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein a processor is connected to the memory, the one or more computer programs are stored in the memory, and when the electronic device is running, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the method according to the first aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
In a fifth aspect, the present application also provides a computer program (product) comprising a computer program for implementing the method of any of the preceding first aspects when run on one or more processors.
Compared with the prior art, the beneficial effects of this application are:
according to the pedestrian re-identification method based on human body analysis, the colors of the upper garment and the lower garment of a pedestrian are identified by adopting an improved human body analysis network, and the problem that the learning result of deep learning on the traditional manual characteristics is poor can be solved. Meanwhile, the method is a lightweight method, and experiments prove that the method can be used as a prior method for re-identifying other pedestrians to screen out relevant non-conforming pictures.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a schematic diagram of a human body parsing network framework according to a first embodiment of the present application;
FIG. 2(a)-FIG. 2(b) are schematic diagrams of the HSV color space distribution according to the first embodiment of the present application;
FIG. 3(a)-FIG. 3(d) are schematic diagrams of human body segmentation after human body analysis according to the first embodiment of the present application;
FIG. 4(a)-FIG. 4(e) are schematic diagrams of the results of pictures screened with human body analysis as the prior condition according to the first embodiment of the present application.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and "comprising", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Example one
The embodiment provides a pedestrian re-identification method based on the combination of human body analysis and clothing color;
the pedestrian re-identification method based on the combination of human body analysis and clothing color comprises the following steps:
s101: acquiring a target pedestrian image to be identified;
s102: inputting an image of a target pedestrian to be recognized into a pre-constructed human body analytical model, and outputting the body part category of the target pedestrian to be recognized;
s103: extracting color features of body parts of various categories from an image of a target pedestrian to be recognized;
s104: and identifying the target pedestrian from the candidate picture according to the extracted color features of the body parts of the various categories.
As one or more embodiments, the pre-constructed human analytical model; the specific construction steps comprise:
s1021: constructing a neural network model;
s1022: constructing a training set; the training set adopts a plurality of human example images of known human body part types;
s1023: and inputting the training set into the neural network model, stopping training when the loss function of the neural network model reaches the minimum value, and outputting the trained neural network model.
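For illustration only, step S1023 might be implemented with a PyTorch-style training loop such as the sketch below; the dataset object, class count and hyper-parameters (batch size, learning rate, checkpoint file name) are assumptions of the sketch and are not specified in the original disclosure.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_parsing_model(model, train_set, num_epochs=50, lr=1e-3, device="cuda"):
    """Step S1023: train a human-parsing network with pixel-wise cross-entropy until the loss stops decreasing."""
    loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=4)
    criterion = nn.CrossEntropyLoss(ignore_index=255)      # ignore unlabeled pixels
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    best_loss = float("inf")
    for epoch in range(num_epochs):
        epoch_loss = 0.0
        for images, labels in loader:                      # labels: H x W maps of body part categories
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(images)                         # N x C x H x W class scores
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        epoch_loss /= len(loader)
        if epoch_loss < best_loss:                         # keep the checkpoint with the minimum loss
            best_loss = epoch_loss
            torch.save(model.state_dict(), "parsing_model_best.pth")
    return model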
Illustratively, the neural network model is a U-net model.
The U-net model is an improved fully convolutional network obtained by expanding the network decoder: the algorithm adds a contracting path to the encoder-decoder module to better locate pixel boundaries. The network model is U-shaped; its left branch uses convolution and max pooling during down-sampling, and the number of feature channels gradually increases as the resolution decreases. The overall concept of U-Net is similar to that of a fully convolutional network; the main difference is that shallow features are fused by concatenation rather than summation, and U-Net does not use a pre-trained model.
Further, an improved U-net model, comprising:
In the original U-net structure, the convolution layers use neither a residual network nor an ASPP structure. The improved network uses the dilated (atrous) convolution layers of a DeepLab-style ASPP to enlarge the receptive field and the deconvolution layers of U-net to realize feature visualization; the main structure of the human body analysis network is shown in FIG. 1 of the accompanying drawings.
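The following is a minimal PyTorch sketch of the idea described above: a small U-net-style encoder-decoder whose bottleneck is an ASPP block of dilated (atrous) convolutions, with the skip connection fused by concatenation and a deconvolution layer for up-sampling. The channel sizes, dilation rates and class count are illustrative assumptions and do not reproduce the network of FIG. 1.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions that enlarge the receptive field."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                          nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

class MiniParsingNet(nn.Module):
    """Toy U-net-style encoder-decoder with an ASPP bottleneck (not the patented network)."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.aspp = ASPP(64, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)            # deconvolution for up-sampling
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1),    # 64 = 32 (skip) + 32 (up-sampled)
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(32, num_classes, 1))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d = self.up(self.aspp(e2))
        d = torch.cat([d, e1], dim=1)                                # U-net skip connection by concatenation
        return self.dec(d)                                           # per-pixel scores for each body part category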
Illustratively, the human body analysis model in the application is trained on the Look Into Person (LIP) data set, which contains 19 human body part category labels and 16 key-point posture labels, with about 50,000 person instance images cropped from the COCO data set in total. The LIP data set has very detailed pixel-level annotation; the images are collected from the real world with various occlusions, different viewing angles, different resolutions and very complex backgrounds, and the naming convention of the data set is image-id_instance-id.
The attributes in the LIP data set, plus the background, total 20, respectively: Background, Hat, Hair, Glove, Sunglasses, Upper-clothes, Dress, Coat, Socks, Pants, Jumpsuits, Scarf, Skirt, Face, Left-arm, Right-arm, Left-leg, Right-leg, Left-shoe and Right-shoe.
The color recognition algorithm provided by the application only needs to count the color histograms of the pedestrian's upper garment and lower garment, so the related attributes in the LIP data set are merged during attribute design, for example: Left-arm, Right-arm and Glove are classified as arm; Left-leg, Right-leg and Socks are classified as leg; Left-shoe and Right-shoe are classified as shoe; Pants and Skirt are classified as l-clothes; Hat and Hair are classified as hair; Sunglasses and Face are classified as face; and Upper-clothes, Dress, Coat and Jumpsuits are classified as u-clothes. In conclusion, the upper garment corresponds to the u-clothes category and, similarly, the lower garment corresponds to the l-clothes category.
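A possible Python representation of this merging is given below; the group names (arm, leg, shoe, l-clothes, hair, face, u-clothes) follow the text above, while the dictionary itself is only an illustrative assumption.

# Illustrative mapping of the LIP attribute labels to the merged categories described above.
LIP_MERGE = {
    'arm':       ['Left-arm', 'Right-arm', 'Glove'],
    'leg':       ['Left-leg', 'Right-leg', 'Socks'],
    'shoe':      ['Left-shoe', 'Right-shoe'],
    'l-clothes': ['Pants', 'Skirt'],                                 # lower garment
    'hair':      ['Hat', 'Hair'],
    'face':      ['Sunglasses', 'Face'],
    'u-clothes': ['Upper-clothes', 'Dress', 'Coat', 'Jumpsuits'],    # upper garment
}

# Invert the table so that each original LIP label maps to its merged category.
LABEL_TO_GROUP = {label: group for group, labels in LIP_MERGE.items() for label in labels}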
In order to improve the operating efficiency of the network, the palette-coloring step for the human body, which consumes the most computer resources in the original test procedure, is omitted and directly simplified to returning the coordinates of the specified pedestrian parts, which makes the method convenient to deploy on devices with ordinary hardware resources.
As one or more embodiments, the image of the target pedestrian to be recognized is input into a human body analytic model which is constructed in advance, and the body part category of the target pedestrian to be recognized is output; the method comprises the following steps:
inputting the image of the target pedestrian to be recognized into a pre-constructed human body analysis model, outputting each attribute category, classifying the attribute categories according to the upper garment and the lower garment, and outputting the upper garment image and the lower garment image of the target pedestrian to be recognized.
Illustratively, the various attribute categories include: background, hat, hair, gloves, sunglasses, coat, skirt, socks, trousers, jumpsuits, face, left arm, right arm, left leg, right leg, left shoe, right shoe.
Illustratively, classifying the attribute categories into upper garment and lower garment means classifying the hat, hair, gloves, sunglasses, coat, face, left arm and right arm as the upper garment, and classifying the socks, trousers, jumpsuits, left leg, right leg, left shoe, right shoe and skirt as the lower garment.
It should be understood that the upper garment refers to a garment worn by the upper body, and the lower garment refers to a garment worn by the lower body.
Human body parsing (human segmentation) is a sub-topic of semantic segmentation: it segments the human body captured in an image into multiple semantically consistent regions such as body parts and clothes, and is a fine-grained semantic segmentation task that is more difficult than merely finding the human body contour. An improved adversarial network for human body parsing is adopted; this network reduces semantic inconsistencies such as body-part dislocation and blurring. The adopted improved network realizes global and local supervision through a smaller receptive field, and avoids the poor convergence of GAN networks when processing high-resolution images.
As one or more embodiments, extracting color features of body parts of each category from the image of the target pedestrian to be recognized refers to identifying the HSV color features of the upper garment and the HSV color features of the lower garment.
Further, extracting color features of body parts of various categories from the image of the target pedestrian to be recognized; the method comprises the following steps:
s1031: identifying RGB color characteristics of an upper garment and RGB color characteristics of a lower garment of a target pedestrian image to be identified;
s1032: and converting the RGB color characteristic of the upper garment and the RGB color characteristic of the lower garment into an HSV color space to obtain the HSV color characteristic of the upper garment and the HSV color characteristic of the lower garment.
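As a minimal illustration of steps S1031-S1032, assuming OpenCV (whose imread returns BGR data) and placeholder file names for the garment crops produced by the human body analysis model:

import cv2

upper_bgr = cv2.imread("upper_garment_crop.png")         # S1031: color data of the upper garment region (BGR order)
lower_bgr = cv2.imread("lower_garment_crop.png")         #        and of the lower garment region
upper_hsv = cv2.cvtColor(upper_bgr, cv2.COLOR_BGR2HSV)   # S1032: HSV color features of the upper garment
lower_hsv = cv2.cvtColor(lower_bgr, cv2.COLOR_BGR2HSV)   #        HSV color features of the lower garment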
It should be understood that a color space is a simplified color specification: a mathematical method of representing colors in digital form, also known as a color model or color system, usually represented by a three-dimensional model. The color space also determines how conversions between different colors are performed.
The RGB (Red, Green, Blue) color space is defined according to the colors recognized by human eyes, but it is generally not used in scientific research because saturation, hue and brightness are entangled in its three parameters, which makes it difficult to adjust details numerically and is not intuitive enough.
The HSV (Hue, Saturation, Value) color space is a common color model in industry; all colors are generated by varying and superposing hue (H), saturation (S) and value (V). Its characteristic is that the hue and saturation components correspond closely to the way human eyes perceive color, while the value component is irrelevant to the color information of the image, so the HSV color space is very suitable for perceiving color characteristics and is widely applied in image processing algorithms.
As shown in FIG. 2(a)-FIG. 2(b), the HSV color space exhibits a hexagonal-pyramid distribution, with the circumferential angle representing hue H, ranging from 0 to 360°: red at 0°, green at 120° and blue at 240°. The saturation S represents the degree of similarity between the color and the pure spectral color; the purer and brighter the color, the higher the saturation, and when S is 0 only gray remains. S usually ranges from 0% to 100%.
V denotes the brightness of the color, ranging from 0% to 100%; when V is 0 the color is black. V has no direct relationship with the intensity of the light source. The quantization intervals used in the present application to judge the various colors are:
HSV_COLOR_SPACE = {'black': [[0, 180], [0, 255], [0, 46]],
                   'white': [[0, 180], [0, 43], [120, 255]],
                   'gray': [[0, 180], [0, 43], [46, 120]],
                   'red1': [[0, 10], [43, 255], [46, 255]],
                   'red2': [[156, 180], [43, 255], [46, 255]],
                   'orange': [[11, 25], [43, 255], [46, 255]],
                   'yellow': [[26, 34], [43, 255], [46, 255]],
                   'green': [[35, 77], [43, 255], [46, 255]],
                   'blue-green': [[78, 99], [43, 255], [46, 255]],
                   'blue': [[100, 124], [43, 255], [46, 255]],
                   'purple': [[125, 155], [43, 255], [46, 255]]}
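Reading the table above in OpenCV's 8-bit HSV convention (H in [0, 180], S and V in [0, 255]), a single pixel can be assigned to a named interval as in the following sketch; the function name is an illustrative assumption, HSV_COLOR_SPACE is the quantization table defined above, and 'red1'/'red2' are merged because they are the two halves of the red hue wrap-around.

def classify_hsv_pixel(h, s, v, color_space=None):
    """Return the color names whose quantization interval contains the HSV pixel (h, s, v)."""
    color_space = color_space if color_space is not None else HSV_COLOR_SPACE   # table defined above
    hits = []
    for name, ((h0, h1), (s0, s1), (v0, v1)) in color_space.items():
        if h0 <= h <= h1 and s0 <= s <= s1 and v0 <= v <= v1:
            hits.append('red' if name.startswith('red') else name)              # merge red1/red2 into 'red'
    return hits

# Example: a bright, saturated pixel with hue 30 falls into the 'yellow' interval.
# classify_hsv_pixel(30, 200, 200) -> ['yellow']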
Since the experiments use the Python version of the open-source computer vision library OpenCV to extract colors, and its default color space is BGR, the color space must first be converted into the HSV color space; the conversion formulas are shown as formulas (1) to (9):
R'=R/255 (1)
G′=G/255 (2)
B'=B/255 (3)
Cmax=max(R′,G′,B′) (4)
Cmin=min(R′,G′,B′) (5)
Δ=Cmax-Cmin (6)
H = 0° if Δ = 0;
H = 60° × ((G′ − B′)/Δ mod 6) if Cmax = R′;
H = 60° × ((B′ − R′)/Δ + 2) if Cmax = G′;
H = 60° × ((R′ − G′)/Δ + 4) if Cmax = B′ (7)
S = 0 if Cmax = 0; S = Δ/Cmax otherwise (8)
V=Cmax (9)
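For reference, formulas (1) to (9) can be implemented directly for a single 8-bit BGR pixel as in the sketch below; in the actual pipeline the conversion is done by OpenCV's cv2.cvtColor, and the closing comment maps the result onto the 8-bit HSV ranges used in the quantization table above.

def bgr_to_hsv(b, g, r):
    """Convert one 8-bit BGR pixel to HSV using formulas (1)-(9): H in degrees, S and V in [0, 1]."""
    r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0           # formulas (1)-(3)
    cmax, cmin = max(r_, g_, b_), min(r_, g_, b_)          # formulas (4)-(5)
    delta = cmax - cmin                                    # formula (6)
    if delta == 0:                                         # formula (7): hue from the dominant channel
        h = 0.0
    elif cmax == r_:
        h = 60.0 * (((g_ - b_) / delta) % 6)
    elif cmax == g_:
        h = 60.0 * ((b_ - r_) / delta + 2)
    else:
        h = 60.0 * ((r_ - g_) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax                 # formula (8)
    v = cmax                                               # formula (9)
    return h, s, v

# OpenCV stores 8-bit HSV as H/2 in [0, 180) and S, V scaled to [0, 255]:
# h_cv, s_cv, v_cv = h / 2, s * 255, v * 255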
The premise of color matching is that the human body analysis network has divided the pedestrian picture into an upper half and a lower half, which are then input into the color histogram algorithm. First, the BGR color space is converted into the HSV space using formulas (1) to (9). Next, the range of HSV values corresponding to each color is searched, i.e., the proportion of pixels falling into each interval of the HSV color quantization space is counted. After the qualifying pixels are counted, the count is compared with the pixel count of the corresponding human-parsing category, and if the number of counted pixels is too small, i.e., the color area of the relevant region is too small, the color is discarded. Finally, the colors are sorted.
The pseudo-code of the color matching algorithm, provided in the original filing as a figure, follows the four steps described above: color-space conversion, per-color pixel counting within the parsed region, rejection of colors covering too small an area, and sorting by proportion.
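Since the pseudo-code figure is not reproduced here, the following Python/OpenCV sketch illustrates one way the color matching described above could be implemented; it assumes the HSV_COLOR_SPACE table defined earlier, and the function name, mask handling and the min_ratio cut-off for "too small" color areas are illustrative assumptions rather than the original code.

import cv2

def color_ratios(bgr_region, part_mask, color_space, min_ratio=0.05):
    """Rank the named colors of one parsed body part (e.g. the upper or lower garment).

    bgr_region : BGR image crop produced by the human body analysis network.
    part_mask  : uint8 mask, 255 where the pixel belongs to this body part.
    Returns a list of (color, ratio) pairs sorted by decreasing ratio.
    """
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)             # formulas (1)-(9) via OpenCV
    part_pixels = cv2.countNonZero(part_mask)
    if part_pixels == 0:
        return []
    ratios = {}
    for name, ((h0, h1), (s0, s1), (v0, v1)) in color_space.items():
        in_range = cv2.inRange(hsv, (h0, s0, v0), (h1, s1, v1))   # pixels inside this color interval
        in_range = cv2.bitwise_and(in_range, part_mask)           # restrict to the parsed body part
        ratio = cv2.countNonZero(in_range) / part_pixels          # compare with the parsed-part pixel count
        if ratio >= min_ratio:                                    # discard colors whose area is too small
            key = 'red' if name.startswith('red') else name       # merge the two red hue intervals
            ratios[key] = ratios.get(key, 0.0) + ratio
    return sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage: upper, upper_mask = output of the human body analysis network
# top_colors = color_ratios(upper, upper_mask, HSV_COLOR_SPACE)[:3]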
as one or more embodiments, the target pedestrian is identified from the candidate image according to the extracted color features of the body parts of the various classes; the method comprises the following specific steps:
the method comprises the steps of assuming that the HSV color characteristics of the upper garment and the HSV color characteristics of the lower garment of a pedestrian in a candidate image are known;
calculating the distance between the HSV color feature of the coat of the target pedestrian to be recognized and the HSV color features of all known target coats of all candidate images, and if the distance is smaller than a set threshold value, indicating that the coat of the target pedestrian to be recognized is the same as the coat color of the candidate image, adding one to the confidence coefficient of the corresponding candidate image;
calculating the distances between the HSV color feature of the lower garment of the target pedestrian to be identified and all known HSV color features of all candidate images, and if the distance is smaller than a set threshold value, indicating that the lower garment of the target pedestrian to be identified is the same as the lower garment colors of the candidate images, adding one to the confidence coefficient of the corresponding candidate images;
and (4) sorting the candidate images from high to low according to the confidence coefficient, and outputting the N candidate images which are sorted in the front as the result of pedestrian re-identification.
Illustratively, the target pedestrian picture is input into the human body analysis network, and the human body analysis algorithm is used to screen the pedestrian pictures in the search library. In the color matching process, the top three colors by proportion are selected, and a picture is judged to be qualified if its top two colors are the same as those of the target.
Meanwhile, when the target image is compared with the pedestrian images in the candidate library, a confidence judgment method is adopted: a candidate picture is counted as qualified only after the colors of the upper garment or the lower garment of the pedestrian are found to be the same and the set threshold 1 and set threshold 2 are reached.
The specific steps of the algorithm implementation, provided in the original filing as a figure, follow the confidence-based screening procedure described above.
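Likewise, the implementation-step figure is not reproduced here; the sketch below shows one possible coding of the confidence-based screening described above, operating on the (color, ratio) rankings produced by the color_ratios sketch earlier. The data layout, the top-two-colors rule and the cut-off handling are assumptions drawn from the text rather than the original code.

def top_colors(ranked, k=2):
    """Keep the first k color names from a (color, ratio) ranking."""
    return [name for name, _ in ranked[:k]]

def screen_candidates(query, candidates, top_n=10):
    """Rank candidate pedestrians by a simple clothing-color confidence.

    query and each candidate: dict with 'upper' and 'lower' color rankings,
    e.g. {'upper': [('red', 0.6), ('black', 0.2)], 'lower': [('blue', 0.7)]}.
    """
    scored = []
    for cand in candidates:
        confidence = 0
        for part in ('upper', 'lower'):
            # a part is considered matching if its two dominant colors agree with the query
            if top_colors(query[part]) == top_colors(cand[part]):
                confidence += 1
        scored.append((confidence, cand))
    scored.sort(key=lambda item: item[0], reverse=True)      # sort by confidence from high to low
    return [cand for confidence, cand in scored[:top_n]]     # the top-ranked N candidates are the ReID result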
In an actual scene, pedestrian clothing is affected by conditions such as illumination, angle, occlusion and definition, which makes it very difficult for traditional methods to identify pedestrian clothing and its related attributes, while deep learning methods in turn handle the traditional color recognition task poorly. In order to overcome the technical defect that deep learning performs poorly on traditional color feature recognition, the human body analysis method from deep learning is combined with the HSV color histogram method of traditional color recognition, and a method is proposed that judges color on the premise that each body part has been successfully extracted by the human body analysis network.
In order to illustrate the effectiveness of the human body analysis model, experiments were carried out on a computer and pedestrian pictures were fed into the network. As shown in FIG. 3(a)-FIG. 3(d), the semantic segmentation of the pedestrians is successful and the upper-body clothes and lower-body clothes can be extracted.
In order to verify the effectiveness of the screening method, pedestrian images wearing clothes of different colors were fed into the algorithm of the application. As shown in FIG. 4(a)-FIG. 4(e), after the pedestrian re-identification method with human body analysis as a prior condition is applied, no pictures with large clothing color differences remain, realizing a preliminary screening of the pedestrian re-identification data.
After the computer parses the pedestrian image, the network can extract the corresponding parts; the clothes then undergo color identification in the subsequent steps, which mainly adopts the traditional HSV color space method.
The method and the device can solve the problem that deep learning learns traditional manual features poorly, and can be used as a prior step for other pedestrian re-identification methods to screen out non-conforming pictures.
Human body analysis belongs to the field of deep learning. An improved adversarial network for human body analysis is adopted, which reduces semantic inconsistencies such as body-part dislocation and blurring. The adopted improved network realizes global and local supervision through a smaller receptive field, and avoids the poor convergence of GAN networks when processing high-resolution images.
Example two
The embodiment provides a pedestrian re-identification system based on the combination of human body analysis and clothing color;
pedestrian re-identification system based on human body analysis and clothes color combination includes:
an acquisition module configured to: acquiring a target pedestrian image to be identified;
a category classification module configured to: inputting an image of a target pedestrian to be recognized into a pre-constructed human body analytical model, and outputting the body part category of the target pedestrian to be recognized;
a color feature extraction module configured to: extracting color features of body parts of various categories from an image of a target pedestrian to be recognized;
an output module configured to: and identifying the target pedestrian from the candidate picture according to the extracted color features of the body parts of the various categories.
It should be noted here that the above-mentioned acquisition module, category classification module, color feature extraction module and output module correspond to steps S101 to S104 in the first embodiment; the examples and application scenarios realized by these modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the first embodiment. It should be noted that the modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
In the foregoing embodiments, the descriptions of the embodiments have different emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The proposed system can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules is merely a logical functional division, and in actual implementation, there may be other divisions, for example, multiple modules may be combined or integrated into another system, or some features may be omitted, or not executed.
EXAMPLE III
The present embodiment also provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein, a processor is connected with the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the method according to the first embodiment.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software.
The method in the first embodiment may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor. The software modules may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, details are not described here.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Example four
The present embodiments also provide a computer-readable storage medium for storing computer instructions, which when executed by a processor, perform the method of the first embodiment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. The pedestrian re-identification method based on the combination of human body analysis and clothing color, characterized by comprising the following steps:
acquiring a target pedestrian image to be identified;
inputting an image of a target pedestrian to be recognized into a pre-constructed human body analytical model, and outputting the body part category of the target pedestrian to be recognized;
extracting color features of body parts of various categories from an image of a target pedestrian to be recognized;
and identifying the target pedestrian from the candidate picture according to the extracted color features of the body parts of the various categories.
2. The method of claim 1, wherein the specific construction steps of the pre-constructed human body analytical model comprise:
constructing a neural network model;
constructing a training set; the training set adopts a plurality of human example images of known human body part types;
and inputting the training set into the neural network model, stopping training when the loss function of the neural network model reaches the minimum value, and outputting the trained neural network model.
3. The method of claim 2, wherein the neural network model is a U-net model.
4. The method as claimed in claim 1, wherein inputting the image of the target pedestrian to be recognized into the pre-constructed human body analytical model and outputting the body part category of the target pedestrian to be recognized comprises:
inputting the image of the target pedestrian to be recognized into a pre-constructed human body analysis model, outputting each attribute category, classifying the attribute categories according to the upper garment and the lower garment, and outputting the upper garment image and the lower garment image of the target pedestrian to be recognized.
5. The method as claimed in claim 1, wherein extracting color features of body parts of each category from the image of the target pedestrian to be recognized refers to identifying the HSV color features of the upper garment and the HSV color features of the lower garment.
6. The method as claimed in claim 1, wherein extracting color features of body parts of each category from the image of the target pedestrian to be recognized comprises:
identifying RGB color characteristics of an upper garment and RGB color characteristics of a lower garment of a target pedestrian image to be identified;
and converting the RGB color characteristic of the upper garment and the RGB color characteristic of the lower garment into an HSV color space to obtain the HSV color characteristic of the upper garment and the HSV color characteristic of the lower garment.
7. The method of claim 1, wherein identifying the target pedestrian from the candidate pictures according to the extracted color features of the body parts of each category comprises the following specific steps:
it is assumed that the HSV color features of the upper garment and the lower garment of the pedestrian in each candidate image are known;
calculating the distance between the HSV color feature of the upper garment of the target pedestrian to be recognized and the known HSV color feature of the upper garment of each candidate image; if the distance is smaller than a set threshold, indicating that the upper garment of the target pedestrian to be recognized has the same color as the upper garment in the candidate image, the confidence of the corresponding candidate image is increased by one;
calculating the distance between the HSV color feature of the lower garment of the target pedestrian to be recognized and the known HSV color feature of the lower garment of each candidate image; if the distance is smaller than a set threshold, indicating that the lower garment of the target pedestrian to be recognized has the same color as the lower garment in the candidate image, the confidence of the corresponding candidate image is increased by one;
and sorting the candidate images by confidence from high to low, and outputting the top N candidate images as the result of pedestrian re-identification.
8. A pedestrian re-identification system based on the combination of human body analysis and clothing color, characterized by comprising:
an acquisition module configured to: acquiring a target pedestrian image to be identified;
a category classification module configured to: inputting an image of a target pedestrian to be recognized into a pre-constructed human body analytical model, and outputting the body part category of the target pedestrian to be recognized;
a color feature extraction module configured to: extracting color features of body parts of various categories from an image of a target pedestrian to be recognized;
an output module configured to: and identifying the target pedestrian from the candidate picture according to the extracted color features of the body parts of the various categories.
9. An electronic device, comprising: one or more processors, one or more memories, and one or more computer programs; wherein a processor is connected to the memory, the one or more computer programs being stored in the memory, the processor executing the one or more computer programs stored in the memory when the electronic device is running, to cause the electronic device to perform the method of any of the preceding claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 7.
CN202010711107.2A 2020-07-22 2020-07-22 Pedestrian re-identification method and system based on combination of human body analysis and clothing color Pending CN112001251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010711107.2A CN112001251A (en) 2020-07-22 2020-07-22 Pedestrian re-identification method and system based on combination of human body analysis and clothing color

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010711107.2A CN112001251A (en) 2020-07-22 2020-07-22 Pedestrian re-identification method and system based on combination of human body analysis and clothing color

Publications (1)

Publication Number Publication Date
CN112001251A true CN112001251A (en) 2020-11-27

Family

ID=73467102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010711107.2A Pending CN112001251A (en) 2020-07-22 2020-07-22 Pedestrian re-identification method and system based on combination of human body analysis and clothing color

Country Status (1)

Country Link
CN (1) CN112001251A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679467A (en) * 2017-09-19 2018-02-09 浙江师范大学 A kind of pedestrian's weight recognizer implementation method based on HSV and SDALF
CN110298893A (en) * 2018-05-14 2019-10-01 桂林远望智能通信科技有限公司 A kind of pedestrian wears the generation method and device of color identification model clothes
CN111046789A (en) * 2019-12-10 2020-04-21 哈尔滨工程大学 Pedestrian re-identification method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096162B (en) * 2021-04-21 2022-12-13 青岛海信智慧生活科技股份有限公司 Pedestrian identification tracking method and device
CN113096162A (en) * 2021-04-21 2021-07-09 青岛海信智慧生活科技股份有限公司 Pedestrian identification tracking method and device
CN113139508A (en) * 2021-05-12 2021-07-20 深圳他米科技有限公司 Hotel safety early warning method, device and equipment based on artificial intelligence
CN113239939A (en) * 2021-05-12 2021-08-10 北京杰迈科技股份有限公司 Track signal lamp identification method, module and storage medium
CN113139508B (en) * 2021-05-12 2023-11-14 深圳他米科技有限公司 Hotel safety early warning method, device and equipment based on artificial intelligence
CN113657186A (en) * 2021-07-26 2021-11-16 浙江大华技术股份有限公司 Feature extraction method and device based on pedestrian re-recognition and storage medium
CN113657186B (en) * 2021-07-26 2024-05-31 浙江大华技术股份有限公司 Feature extraction method and device based on pedestrian re-recognition and storage medium
CN113628287A (en) * 2021-08-16 2021-11-09 杭州知衣科技有限公司 Deep learning-based single-stage garment color recognition system and method
CN113762221B (en) * 2021-11-05 2022-03-25 通号通信信息集团有限公司 Human body detection method and device
WO2023077897A1 (en) * 2021-11-05 2023-05-11 通号通信信息集团有限公司 Human body detection method and apparatus, electronic device, and computer-readable storage medium
CN113762221A (en) * 2021-11-05 2021-12-07 通号通信信息集团有限公司 Human body detection method and device
CN114519789A (en) * 2022-01-27 2022-05-20 北京精鸿软件科技有限公司 Classroom scene classroom switching discrimination method and device and electronic equipment
CN114519789B (en) * 2022-01-27 2024-05-24 北京精鸿软件科技有限公司 Classroom scene classroom switching discriminating method and device and electronic equipment
CN115858846A (en) * 2023-02-16 2023-03-28 云南派动科技有限公司 Deep learning-based skier image retrieval method and system
CN115858846B (en) * 2023-02-16 2023-04-21 云南派动科技有限公司 Skier image retrieval method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN112001251A (en) Pedestrian re-identification method and system based on combination of human body analysis and clothing color
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
Li et al. Robust rooftop extraction from visible band images using higher order CRF
CN106682601B (en) A kind of driver's violation call detection method based on multidimensional information Fusion Features
CN105141903B (en) A kind of method for carrying out target retrieval in video based on colouring information
CN104268583B (en) Pedestrian re-recognition method and system based on color area features
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN106933816A (en) Across camera lens object retrieval system and method based on global characteristics and local feature
CN109271932A (en) Pedestrian based on color-match recognition methods again
CN110222644A (en) Forest fire smoke detection method based on image segmentation
CN102096823A (en) Face detection method based on Gaussian model and minimum mean-square deviation
CN108537239A (en) A kind of method of saliency target detection
CN105069816B (en) A kind of method and system of inlet and outlet people flow rate statistical
CN105825168A (en) Golden snub-nosed monkey face detection and tracking algorithm based on S-TLD
CN106909883A (en) A kind of modularization hand region detection method and device based on ROS
CN107103301B (en) Method and system for matching discriminant color regions with maximum video target space-time stability
CN110688512A (en) Pedestrian image search algorithm based on PTGAN region gap and depth neural network
Galiyawala et al. Visual appearance based person retrieval in unconstrained environment videos
Hu et al. Fast face detection based on skin color segmentation using single chrominance Cr
Dwina et al. Skin segmentation based on improved thresholding method
CN106650824B (en) Moving object classification method based on support vector machines
Elaw et al. Comparison of video face detection methods using HSV, HSL and HSI color spaces
Wang et al. Deep learning-based human activity analysis for aerial images
Prasad et al. Unsupervised resolution independent based natural plant leaf disease segmentation approach for mobile devices
CN104484324B (en) A kind of pedestrian retrieval method of multi-model and fuzzy color

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination