CN113536917A - Dressing identification method, dressing identification system, electronic device and storage medium - Google Patents

Dressing identification method, dressing identification system, electronic device and storage medium

Info

Publication number
CN113536917A
CN113536917A (application CN202110647145.0A)
Authority
CN
China
Prior art keywords
human body
detection
feature set
dressing
detection target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110647145.0A
Other languages
Chinese (zh)
Inventor
魏乃科
潘华东
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110647145.0A priority Critical patent/CN113536917A/en
Publication of CN113536917A publication Critical patent/CN113536917A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks

Abstract

The application relates to a dressing identification method, a system, an electronic device, and a storage medium. Human body part key points of a detection target are acquired, the detection image of the detection target is divided into component regions according to those key points to obtain component region maps, dressing features are extracted from each component region map to obtain a detection feature set, and the detection target is judged to conform to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold. This solves the problem of low dressing identification efficiency and realizes accurate and efficient dressing identification.

Description

Dressing identification method, dressing identification system, electronic device and storage medium
Technical Field
The present application relates to the field of image recognition, and more particularly, to a method, system, electronic device, and storage medium for dressing recognition.
Background
In many production environments, workers are subject to strict dress requirements; once improperly dressed personnel appear in the work environment, the work scene carries safety hazards and unnecessary economic losses may follow. In the prior art, worker dress is typically checked by installing a camera outside the work scene, photographing personnel entering it, and having monitoring staff judge whether the dress of those personnel is compliant. Manual participation in the identification, however, drains the monitoring staff's energy. Some proposals replace manual inspection with image recognition technology to realize dressing identification, but they suffer from low recognition rates and poor adaptability.
At present, no effective solution has been proposed for the problem of low dressing identification efficiency in the related art.
Disclosure of Invention
The embodiment of the application provides a dressing identification method, a dressing identification system, an electronic device and a storage medium, and aims to at least solve the problem of low dressing identification efficiency in the related art.
In a first aspect, an embodiment of the present application provides a dressing identification method, including:
acquiring human body part key points of a detection target, and dividing a detection image of the detection target into component regions according to the key points to obtain component region maps;
extracting dressing features from each component region map to obtain a detection feature set;
and judging that the detection target conforms to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold, wherein the standard feature set corresponds to the preset dressing.
In some embodiments, the detection feature set includes a global feature set, and extracting the dressing features from each component region map to obtain the detection feature set includes:
dividing each component region map into sub-regions and extracting the average color of each sub-region;
performing grayscale processing on the average colors of the sub-regions and applying low-pass filtering to obtain a global feature map;
and extracting dressing features from the global feature map through a neural network to obtain the global feature set.
In some embodiments, the detection feature set includes a contour feature set, and extracting the dressing features from each component region map to obtain the detection feature set includes:
performing band-pass filtering on each component region map to obtain a contour feature map;
and extracting dressing features from the contour feature map through a neural network to obtain the contour feature set.
In some embodiments, the detection feature set includes a detail feature set, and extracting the dressing features from each component region map to obtain the detection feature set includes:
performing sliding-window traversal matching on each component region map according to a preset specific identifier;
and obtaining the detail feature set from the images in the sliding windows that match the specific identifier.
In some embodiments, before acquiring the human body part key points of the detection target and dividing the detection image of the detection target into component regions according to those key points, the method includes:
acquiring human body features of the detection target, and judging the human body state of the detection target according to the human body features;
and acquiring a detection image of the detection target when the human body state conforms to a preset state.
In some of these embodiments, the human features include human joint point features and human segmentation features, and the human states include human pose, human angle, and human occlusion.
In some of these embodiments, the component regions include a left leg region, a right leg region, a left hand region, a right hand region, and an abdominal chest region.
In a second aspect, an embodiment of the present application provides a dressing identification system, including an image acquisition device and a server device, wherein:
the image acquisition equipment is used for acquiring a detection image of a detection target;
the server device is configured to perform the dressing identification method according to the first aspect.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the dressing identification method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium on which a computer program is stored, and the program, when executed by a processor, implements the dressing identification method according to the first aspect.
Compared with the related art, the dressing identification method, system, electronic device, and storage medium provided by the embodiments of the present application acquire the human body part key points of a detection target, divide the detection image of the detection target into component regions according to those key points to obtain component region maps, extract dressing features from each component region map to obtain a detection feature set, and judge that the detection target conforms to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold, thereby solving the problem of low dressing identification efficiency and realizing accurate and efficient dressing identification.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of the hardware structure of a terminal for the dressing identification method according to an embodiment of the present application;
FIG. 2 is a flowchart of a dressing identification method according to an embodiment of the present application;
FIG. 3 is a flowchart of another dressing identification method according to an embodiment of the present application;
FIG. 4 is a flowchart of a dressing identification method according to a preferred embodiment of the present application;
FIG. 5 is a schematic diagram of human body part key points in the dressing identification method according to the preferred embodiment of the present application;
FIG. 6 is a schematic diagram of component region division in the dressing identification method according to the preferred embodiment of the present application;
FIG. 7 is a schematic diagram of global feature extraction in the dressing identification method according to the preferred embodiment of the present application;
FIG. 8 is a schematic diagram of contour feature extraction in the dressing identification method according to the preferred embodiment of the present application;
FIG. 9 is a schematic diagram of detail feature extraction in the dressing identification method according to the preferred embodiment of the present application;
fig. 10 is a block diagram of a dressing identification system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The method provided by this embodiment can be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, fig. 1 is a block diagram of the hardware structure of such a terminal for the dressing identification method according to an embodiment of the present application. As shown in fig. 1, the terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and optionally a transmission device 106 for communication functions and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the terminal. For example, the terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the dressing identification method in the embodiments of the present application; the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The present embodiment provides a dressing identification method, and fig. 2 is a flowchart of a dressing identification method according to an embodiment of the present application. As shown in fig. 2, the flow includes the following steps:
step S201, obtaining key points of human body parts of the detection target, and dividing the part region of the detection image of the detection target according to the key points of the human body parts to obtain a part region map. The key points of the human body parts may be parts regions of the human body image, for example, the head, the upper body, and the lower body, which are divided according to parts that are relatively easy to capture and detect during the motion of the human body, and the upper body image corresponds to the upper garment of the detection target and the lower body image corresponds to the trousers of the detection target during the wear detection, so that the different parts of the wear image can be better processed and compared. The method for dividing the regions according to the key parts of the human body is more reasonable and accurate compared with the method for directly dividing the detection picture into an upper part, a middle part and a lower part to correspond to different dresses. In some embodiments, the human target is divided into the following five component regions by human part keypoints: dividing the area where the left waist, the left knee and the left foot head connecting line is located into left leg areas; dividing the area where the connecting lines of the right waist, the right knee and the right foot head are located into right leg areas; dividing the area where the left shoulder, the left elbow and the left hand head are connected into left hand areas; dividing the area where the head connecting line of the right shoulder, the right elbow and the right hand is located into a right hand area; and dividing the area limited by the left shoulder, the right shoulder, the left waist and the right waist into an abdominal and thoracic area. By dividing the image of the detection target into the five component regions, the features of the respective component regions can be extracted more favorably, and for example, the features of the body and the sleeve of the wearing garment can be extracted and compared with each other, which further improves the accuracy of garment identification.
Step S202, extracting dressing features from each component region map to obtain a detection feature set. Dressing features are extracted from the image of each component region separately, yielding the dressing features of the current detection target in each component region. The detection feature set may be the set of dressing features corresponding to the component regions, i.e. each component region may have different dressing features. The dressing features may be the contour, texture, or color of that part of the garment, or other specific identifiers such as buttons or badges.
Step S203, judging that the detection target conforms to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold. The detection feature set acquired in step S202 is compared with a standard feature set, which is a feature set extracted from an image of the preset dressing. The detection feature set of each component region is compared with the standard feature set one by one and a similarity is output; if the similarity is greater than the preset threshold, the dressing of the detection target is consistent with the preset dressing, i.e. the detection target conforms to the preset dressing. Optionally, in some application scenarios, a dressing whose similarity is smaller than the preset threshold may instead be the one deemed acceptable: for example, if the specified standard feature set reflects garments that are prohibited from being worn, only dressings whose similarity is below the preset threshold are permitted. The sketch below illustrates one possible form of the comparison.
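The patent does not fix a similarity metric; cosine similarity and the 0.8 threshold below are assumptions made for illustration.

```python
import numpy as np

def feature_similarity(detect_feat, standard_feat):
    """Cosine similarity between two feature vectors (an assumed metric)."""
    a, b = np.ravel(detect_feat), np.ravel(standard_feat)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def conforms_to_preset_dressing(detect_set, standard_set, threshold=0.8):
    """Region-by-region comparison: every component region must clear
    the preset threshold against the standard feature set."""
    return all(
        feature_similarity(detect_set[region], standard_set[region]) > threshold
        for region in standard_set
    )
```

For a prohibited-garment standard set, the decision simply inverts: a similarity below the threshold is what permits entry.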
Through the above steps, the detection image of the detection target is divided into component regions according to the human body part key points to obtain component region maps, and dressing features are then extracted from the component region maps and compared, thereby performing dressing identification. On one hand, because the component regions are delimited by human body part key points, the component region maps correspond to the garment image of each body part better than a direct division of the detection image would, so the detection feature set and the standard feature set are extracted from and compared over consistent regions. On the other hand, the scheme can flexibly meet different dressing identification requirements merely by switching between standard feature sets, and can even identify multiple dressing types in one scene simultaneously, which improves both the efficiency and the accuracy of dressing identification.
In some embodiments, the detection feature set includes a global feature set, and extracting the dressing features from the component region maps includes: dividing each component region map into sub-regions and extracting the average color of each sub-region; performing a grayscale operation on the result and applying low-pass filtering to obtain a global feature map; and extracting dressing features from the global feature map through a neural network to obtain the global feature set. Preferably, the process is as follows: the component region picture is divided into N × N sub-regions and the average color within each sub-region is extracted; grayscale processing is performed according to the average colors of the sub-regions, turning the color picture into a grayscale picture; low-pass filtering then removes detail feature information and leaves the global features; finally, features are extracted from the processed image through a neural network, and the output feature layer serves as the global feature set. A sketch of the preprocessing appears below.
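This sketch assumes OpenCV; n = 8 and the Gaussian kernel size are illustrative, and the downscale-then-upscale trick with INTER_AREA stands in for computing per-sub-region average colors.

```python
import cv2

def global_feature_map(region_img, n=8):
    """N x N average-color mosaic -> grayscale -> low-pass filter.
    The returned map would then be fed to a neural network whose
    output feature layer serves as the global feature set."""
    h, w = region_img.shape[:2]
    # Average color per sub-region: shrink to n x n (area average),
    # then expand back so each cell shows its mean color.
    mosaic = cv2.resize(region_img, (n, n), interpolation=cv2.INTER_AREA)
    mosaic = cv2.resize(mosaic, (w, h), interpolation=cv2.INTER_NEAREST)
    gray = cv2.cvtColor(mosaic, cv2.COLOR_BGR2GRAY)
    # Low-pass filtering removes residual detail, leaving global structure.
    return cv2.GaussianBlur(gray, (11, 11), 0)
```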
In some embodiments, the detection feature set includes a contour feature set, and extracting the dressing features from the component region maps includes: performing band-pass filtering on each component region map to obtain a contour feature map; and extracting dressing features from the contour feature map through a neural network to obtain the contour feature set. Preferably, band-pass filtering, such as a Butterworth band-pass filter or a Gaussian band-pass filter, is performed on the component region image to remove background and detail information and leave the contour information; features are then extracted from the processed image through a neural network to obtain the contour feature set. See the sketch after this paragraph.
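In the sketch below, a difference of Gaussians stands in for the Butterworth or Gaussian band-pass filters named above; the sigma values are assumptions.

```python
import cv2
import numpy as np

def contour_feature_map(region_img, low_sigma=1.0, high_sigma=5.0):
    """Band-pass the region map: drop low-frequency background and
    high-frequency detail, keeping mid-frequency contour structure.
    The result would be passed to a neural network to obtain the
    contour feature set."""
    gray = cv2.cvtColor(region_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    fine = cv2.GaussianBlur(gray, (0, 0), low_sigma)     # mild smoothing
    coarse = cv2.GaussianBlur(gray, (0, 0), high_sigma)  # heavy smoothing
    band = fine - coarse  # difference of Gaussians ~ band-pass
    return cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```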
In some embodiments, the detection feature set includes a detail feature set, and extracting the dressing features from the component region maps includes: performing sliding-window traversal matching on each component region map according to a preset specific identifier. The specific identifier may be a part of the garment that differs from the rest, such as a pocket, a button, a pattern, or an identity badge. Preferably, the size of the sliding window used for the traversal is adapted to the size of the specific identifier. While traversing a component region map, whenever the image inside the sliding window matches the preset specific identifier, the image inside the window is stored; the stored images finally form the detail feature set, as sketched below.
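OpenCV's cv2.matchTemplate performs a marker-sized sliding-window comparison of this kind; the matching score threshold below is an assumption.

```python
import cv2
import numpy as np

def detail_feature_set(region_img, marker_img, score_thresh=0.8):
    """Slide a window the size of the specific identifier (pocket, button,
    badge, ...) over the component region map and keep every window whose
    content matches the identifier."""
    gray = cv2.cvtColor(region_img, cv2.COLOR_BGR2GRAY)
    marker = cv2.cvtColor(marker_img, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, marker, cv2.TM_CCOEFF_NORMED)
    mh, mw = marker.shape
    # Store the window contents at every matching location.
    return [region_img[y:y + mh, x:x + mw]
            for y, x in zip(*np.where(scores >= score_thresh))]
```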
It should be noted that the global feature set, the contour feature set, and the detail feature set provide three different dimensions for dressing feature extraction and can adapt to different identification requirements. In practice, dressing identification can rely on extracting and comparing any one of the feature sets, or combine several feature sets so that dressing features are extracted along multiple dimensions, further improving identification efficiency and accuracy. The comparison across dimensions may be progressive: for example, the global feature sets are compared first and the contour feature sets second, and the dressing of the detection target is judged to conform to the preset dressing only when both comparison results exceed the threshold. The comparison may also be weighted: the comparison results of the different dimensions carry different weights, and the dressing of the detection target is judged to conform to the preset dressing only when the weighted combination of the per-dimension results exceeds the threshold. The specific decision rule can be chosen according to the application and is not limited here; the sketch below shows both variants.
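Both decision rules fit in one small helper; the weights and threshold below are illustrative assumptions.

```python
def multi_dimension_decision(sims, weights=None, threshold=0.8):
    """sims maps dimension name ('global', 'contour', 'detail') to its
    comparison similarity. With weights=None every dimension must clear
    the threshold on its own (progressive comparison); otherwise the
    weighted sum of the per-dimension results must clear it."""
    if weights is None:
        return all(s > threshold for s in sims.values())
    return sum(weights[k] * sims[k] for k in sims) > threshold

# Example with illustrative numbers:
# multi_dimension_decision({"global": 0.9, "contour": 0.85, "detail": 0.7},
#                          weights={"global": 0.4, "contour": 0.4, "detail": 0.2})
```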
In some embodiments, fig. 3 is a flowchart of another dressing identification method according to an embodiment of the present application. As shown in fig. 3, before acquiring the human body part key points of the detection target and dividing the detection image into component regions according to those key points, the dressing identification method further includes the following steps:
step S301, the human body characteristics of the detection target are obtained, and the human body state of the detection target is judged according to the human body characteristics. In this step, human body features of the detection target are also located, and the human body features may be human body joint points such as wrists, elbows, shoulders, crotch, knees, and the like, human body positioning points where the head, face, hands, waist, feet, and the like are easy to locate, or a human body is divided into a head, an upper body, a lower body, and the like by a dividing line formed by connecting joint points or human body positioning points, for example, a dividing line formed by connecting left and right waists. Through the human body characteristics, the human body state of the detection target can be analyzed, for example, whether the detection target is walking vertically, whether the detection target is on the front side or the back side, and the like. In some embodiments, the body state includes a body posture, a body angle and a body shelter, the body posture may further include a squatting posture, a sitting posture, an upright posture and the like, the body angle includes a front side, a side surface and a back side, and the body shelter includes an upper body shelter and a lower body shelter and the like.
Step S302, acquiring a detection image of the detection target when the human body state conforms to a preset state. After the human body state is determined in step S301, detection images of the detection target in the preset state can be selected for subsequent dressing analysis. The preset state is one that shows the worn garments well and minimizes states in which the garments are folded, for example: the pose is upright, the angle is frontal, and the body is unoccluded. A sketch of this gate follows.
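The screening of steps S301-S302 reduces to a simple predicate over the analyzed state; the field names and category labels below are hypothetical.

```python
def is_usable_detection_image(state):
    """Keep a frame only when the person is upright, seen from the front
    or the back, and unoccluded, per the preset state described above."""
    return (state["pose"] == "upright"
            and state["angle"] in ("front", "back")
            and state["occlusion"] == "none")
```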
Through the above steps, the current human body state of the detection target can be obtained from the human body feature information while the detection images are being collected, so that the detection images can be screened: frames in which the detection target stands upright, faces the camera or turns its back, and is unoccluded are selected for analysis, while images with excessive folding or overlap are filtered out. This effectively improves the quality of the detection images and thereby the identification accuracy.
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
Fig. 4 is a flowchart of a dressing identification method according to a preferred embodiment of the present application. As shown in fig. 4, the dressing identification method includes the following steps:
step S401, inputting a video and detecting a target. Detecting a human body in a video frame in real time to obtain a detection target;
step S402, target tracking is detected. And tracking the detection target, generating an ID (identity) and tracking the detection target according to the ID. The method aims to obtain detection images of detection targets as many as possible so as to screen out images convenient for dressing feature extraction in subsequent steps;
and step S403, analyzing human body characteristics. And analyzing human body characteristics of the detection target in the video, wherein the human body characteristics can comprise human body joint point detection and human body segmentation. The human body state is determined through human body characteristic analysis, and the human body state comprises human body postures of squatting, sitting, lying, standing, bending and the like, human body angles of the front, the side and the back and human body shielding states of upper body shielding, lower body shielding, head shielding, no shielding and the like.
Step S404, image selection is detected. According to the human body state obtained by human body feature analysis, screening a detection target in the tracking process, and selecting an image of a detection object in a back-side upright non-shielding state or a front-side upright non-shielding state as a detection image to be analyzed.
The screened detection image is acquired and divided into component regions. First, the human body part key points of the detection target in the image are obtained. Fig. 5 is a schematic diagram of human body part key points in the dressing identification method according to the preferred embodiment of the present application; as shown in fig. 5, the key points include the left hand, right hand, left elbow, right elbow, left shoulder, right shoulder, left waist, right waist, left knee, right knee, left foot, right foot, and so on. The detection image of the detection target is then divided into several component regions according to these key points. Fig. 6 is a schematic diagram of component region division in the dressing identification method according to the preferred embodiment of the present application; as shown in fig. 6, the detection image is divided into five component regions: the left leg region 5, the segmentation region along the line connecting the left waist, left knee, and left foot; the right leg region 4, the segmentation region along the line connecting the right waist, right knee, and right foot; the left hand region 3, the segmentation region along the line connecting the left shoulder, left elbow, and left hand; the right hand region 1, the segmentation region along the line connecting the right shoulder, right elbow, and right hand; and the abdominal-chest region 2, the segmentation region bounded by the left shoulder, right shoulder, left waist, and right waist.
Step S405, acquiring the detection feature set. Dressing features are extracted from each of the five component regions. The dressing features cover three dimensions: a global feature dimension, a contour feature dimension, and a detail feature dimension.
Fig. 7 is a schematic diagram of global feature extraction in the dressing identification method according to the preferred embodiment of the present application; fig. 7 corresponds to the abdominal-chest region. As shown in fig. 7, in the global feature dimension, the component region picture is first divided into N × N sub-regions and the average color within each is extracted; grayscale processing is then performed according to the sub-regions' average colors, the resulting grayscale picture is low-pass filtered to remove detail feature information and leave the global features, features are extracted from the processed image through a neural network, and the output feature layer serves as the global feature set.
Fig. 8 is a schematic diagram of contour feature extraction in the dressing identification method according to the preferred embodiment of the present application; fig. 8 corresponds to the chest region. As shown in fig. 8, in the contour feature dimension, band-pass filtering, such as a Butterworth band-pass filter or a Gaussian band-pass filter, is first performed on the component region image to remove background and detail information and leave the contour information; features are then extracted from the processed image through a neural network to obtain the contour feature set.
Fig. 9 is a schematic diagram of detail feature extraction in the dressing identification method according to the preferred embodiment of the present application; fig. 9 corresponds to the abdominal-chest region. As shown in fig. 9, the detail feature dimension mainly identifies the specific identifier of the region: the specific identifier is set in the standard component region, and sliding-window traversal matching is then performed in the picture of the component region of the human body to be identified, searching for similar identifiers to obtain the detail feature set.
Finally, the feature sets of all dimensions together form the detection feature set corresponding to the detection target;
Step S406, acquiring a standard dressing picture. The standard dressing picture is used to obtain the standard feature set;
Step S407, feature extraction. Standard features are extracted separately using the three-dimension feature extraction method of step S405;
Step S408, acquiring the standard feature set;
Step S409, comparing the feature sets. The detection feature set was acquired in step S405; its features are compared one by one with those in the standard feature set, and a comprehensive comparison result is output. The comparison process judges in turn whether the global features, the contour features, and the detail features are similar, and finally concludes whether the dressing is compliant, i.e. whether the similarity between the detection feature set and the standard feature set exceeds the set threshold.
Step S410, outputting the dressing comparison result. When the similarity between the detection feature set and the standard feature set exceeds the set threshold, the dressing of the detection target meets the preset dressing requirement.
It should be noted that the steps illustrated in the flowcharts above may be executed in a computer system, such as one running a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be executed in an order different from the one described here.
The dressing identification method above analyzes the human body features of a tracked human target, which yields detection images suitable for feature comparison and safeguards the quality of the dressing identification analysis. The dressing comparison area is divided into several component regions that are compared independently, which defines clear analysis boundaries and lets the model extract more targeted features. Feature extraction simulates the human visual judgment process across three dimensions (global features, contour features, and detail features), which helps the neural network extract features in a more targeted way. In addition, switching the type of dressing to identify does not require switching preset models; only the standard dressing features need to be replaced. The feature comparison process simulates human visual characteristics and performs multi-dimensional feature matching, alleviating the low recognition rates caused by garment deformation and the like. Finally, combining human body joint point information solves the region division problem and avoids mismatched feature comparison regions.
This embodiment further provides a dressing identification system, which is used to implement the above embodiments and preferred embodiments; what has already been described is not repeated. As used below, the terms "module," "unit," "subunit," and the like may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
Fig. 10 is a block diagram of a dressing identification system according to an embodiment of the present application. As shown in fig. 10, the dressing identification system 100 includes an image acquisition device 1001 and a server device 1002, wherein the image acquisition device 1001 is used to acquire a detection image of a detection target, and the server device 1002 is configured to execute the dressing identification method described above.
It should be noted that the above modules may be functional modules or program modules, implemented either in software or in hardware. Modules implemented in hardware may reside in the same processor, or be distributed across different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
acquiring human body part key points of a detection target, and dividing a detection image of the detection target into component regions according to the key points to obtain component region maps;
extracting dressing features from each component region map to obtain a detection feature set;
and judging that the detection target conforms to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold, wherein the standard feature set corresponds to the preset dressing.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the dressing identification method in the above embodiments, an embodiment of the present application may provide a storage medium for implementation. The storage medium has a computer program stored thereon; when executed by a processor, the computer program implements any of the dressing identification methods in the above embodiments.
It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A dressing identification method, comprising:
acquiring human body part key points of a detection target, and dividing a detection image of the detection target into component regions according to the key points to obtain component region maps;
extracting dressing features from each component region map to obtain a detection feature set;
and judging that the detection target conforms to a preset dressing when the similarity between the detection feature set and a standard feature set is greater than a preset threshold, wherein the standard feature set corresponds to the preset dressing.
2. The dressing identification method of claim 1, wherein the detection feature set comprises a global feature set, and extracting the dressing features from each component region map to obtain the detection feature set comprises:
dividing each component region map into sub-regions and extracting the average color of each sub-region;
performing grayscale processing on the average colors of the sub-regions and applying low-pass filtering to obtain a global feature map;
and extracting dressing features from the global feature map through a neural network to obtain the global feature set.
3. The dressing identification method of claim 1, wherein the detection feature set comprises a contour feature set, and extracting the dressing features from each component region map to obtain the detection feature set comprises:
performing band-pass filtering on each component region map to obtain a contour feature map;
and extracting dressing features from the contour feature map through a neural network to obtain the contour feature set.
4. The dressing identification method of claim 1, wherein the detection feature set comprises a detail feature set, and extracting the dressing features from each component region map to obtain the detection feature set comprises:
performing sliding-window traversal matching on each component region map according to a preset specific identifier;
and obtaining the detail feature set from the images in the sliding windows that match the specific identifier.
5. The dressing identification method according to any one of claims 1 to 4, wherein, before acquiring the human body part key points of a detection target and dividing the detection image of the detection target into component regions according to those key points, the method comprises:
acquiring human body features of the detection target, and judging the human body state of the detection target according to the human body features;
and acquiring a detection image of the detection target when the human body state conforms to a preset state.
6. The dressing identification method of claim 5, wherein the human body features comprise human body joint point features and human body segmentation features, and the human body state comprises the human body pose, the human body angle, and human body occlusion.
7. The dressing identification method of claim 1, wherein the component regions include a left leg region, a right leg region, a left hand region, a right hand region, and an abdominal-chest region.
8. A dressing identification system, comprising an image acquisition device and a server device, wherein:
the image acquisition device is used to acquire a detection image of a detection target;
and the server device is configured to perform the dressing identification method of any one of claims 1 to 7.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the dressing identification method of any one of claims 1 to 7.
10. A storage medium in which a computer program is stored, wherein the computer program is arranged to perform the dressing identification method of any one of claims 1 to 7 when run.
CN202110647145.0A 2021-06-10 2021-06-10 Dressing identification method, dressing identification system, electronic device and storage medium Pending CN113536917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110647145.0A CN113536917A (en) 2021-06-10 2021-06-10 Dressing identification method, dressing identification system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110647145.0A CN113536917A (en) 2021-06-10 2021-06-10 Dressing identification method, dressing identification system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113536917A (en) 2021-10-22

Family

ID=78124794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110647145.0A Pending CN113536917A (en) 2021-06-10 2021-06-10 Dressing identification method, dressing identification system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113536917A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989858A (en) * 2021-12-28 2022-01-28 安维尔信息科技(天津)有限公司 Work clothes identification method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255312A (en) * 2018-08-30 2019-01-22 罗普特(厦门)科技集团有限公司 A kind of abnormal dressing detection method and device based on appearance features
CN109344841A (en) * 2018-08-10 2019-02-15 北京华捷艾米科技有限公司 A kind of clothes recognition methods and device
CN110555393A (en) * 2019-08-16 2019-12-10 北京慧辰资道资讯股份有限公司 method and device for analyzing pedestrian wearing characteristics from video data
CN111553327A (en) * 2020-05-29 2020-08-18 上海依图网络科技有限公司 Clothing identification method, device, equipment and medium
CN111626210A (en) * 2020-05-27 2020-09-04 上海科技大学 Person dressing detection method, processing terminal, and storage medium
CN111696172A (en) * 2019-03-12 2020-09-22 北京京东尚科信息技术有限公司 Image labeling method, device, equipment and storage medium
CN112560741A (en) * 2020-12-23 2021-03-26 中国石油大学(华东) Safety wearing detection method based on human body key points
CN112633196A (en) * 2020-12-28 2021-04-09 浙江大华技术股份有限公司 Human body posture detection method and device and computer equipment

Similar Documents

Publication Publication Date Title
CN107301370B (en) Kinect three-dimensional skeleton model-based limb action identification method
CN104881881B (en) Moving Objects method for expressing and its device
US10380794B2 (en) Method and system for generating garment model data
CN105187785B (en) A kind of across bayonet pedestrian's identifying system and method based on choice of dynamical notable feature
CN112633196A (en) Human body posture detection method and device and computer equipment
CN107527046B (en) Unlocking control method and related product
CN111950321B (en) Gait recognition method, device, computer equipment and storage medium
CN109938737A (en) A kind of human body body type measurement method and device based on deep learning critical point detection
WO2019237721A1 (en) Garment dimension data identification method and device, and user terminal
CN102486816A (en) Device and method for calculating human body shape parameters
Jiang et al. Automatic body feature extraction from front and side images
US11450148B2 (en) Movement monitoring system
CN109426785A (en) A kind of human body target personal identification method and device
CN110555393A (en) method and device for analyzing pedestrian wearing characteristics from video data
CN113536917A (en) Dressing identification method, dressing identification system, electronic device and storage medium
CN109829418A (en) A kind of punch card method based on figure viewed from behind feature, device and system
CN108875654A (en) A kind of face characteristic acquisition method and device
CN107704882A (en) A kind of kinds of laundry recognition methods and system based on digital image processing techniques
KR102328072B1 (en) Server for providing virtual fitting service and method using the same
CN115830712B (en) Gait recognition method, device, equipment and storage medium
Zhang et al. Multi-modal image fusion with KNN matting
CN111696172A (en) Image labeling method, device, equipment and storage medium
CN112528855A (en) Electric power operation dressing standard identification method and device
CN111126179A (en) Information acquisition method and device, storage medium and electronic device
CN113158729A (en) Pull-up counting method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination