
Wearing object identification method, device, equipment and storage medium

Info

Publication number
CN112347824A
CN112347824A
Authority
CN
China
Prior art keywords
wearing object
wearing
identification
portrait
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910733545.6A
Other languages
Chinese (zh)
Inventor
符殷铭
陈信宇
吕嘉鹏
黄彩云
陈涛
张毅
雷苗
朱青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2019-08-09
Filing date: 2019-08-09
Publication date: 2021-02-09
Application filed by China Mobile Communications Group Co Ltd and China Mobile Information Technology Co Ltd
Priority to CN201910733545.6A
Publication of CN112347824A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a wearing object identification method, device, equipment and storage medium. The method comprises the following steps: acquiring an image frame; recognizing portrait contour position features from the image frame based on a pre-trained wearing object recognition model; determining wearing object position features and wearing object classification features from the portrait contour position features based on the pre-trained wearing object recognition model; and combining and outputting the wearing object position features and the corresponding wearing object classification features as the recognition result. According to the embodiments of the invention, wearing objects can be monitored automatically through image-based identification, improving identification accuracy and efficiency.

Description

Wearing object identification method, device, equipment and storage medium
Technical Field
The invention belongs to the field of image identification and safety detection, and particularly relates to a wearing object identification method, device, equipment and storage medium.
Background
Wearing objects serve as a basic means of identifying personnel in office areas and are widely used in office-area security management. However, office staff often forget to wear items such as work cards, making it impossible to distinguish a target person from a stranger and lowering the security of the office area. To avoid such situations, it is important to detect whether a target person in an office wears a work card or another required wearing object and to raise an alarm accordingly.
At present, identifying whether a target portrait wears an item such as a work card relies entirely on visual inspection: whether by watching surveillance video or patrolling the office area, dedicated staff must be assigned to monitor. This approach is time-consuming and labor-intensive, and monitoring personnel may miss detections due to fatigue, so the safety of the office area cannot be guaranteed.
In terms of object identification, traditional methods based on feature extraction algorithms perform poorly in accuracy and robustness and are prone to false identification or missed detection.
Disclosure of Invention
The embodiments of the invention provide a wearing object identification method, device, equipment and computer storage medium, which monitor wearing objects automatically through image-based identification and reduce identification errors caused by visual fatigue from repetitive monitoring tasks.
In a first aspect, an embodiment of the present invention provides a wearing object identification method, including: acquiring an image frame; recognizing portrait contour position features from the image frame based on a pre-trained wearing object recognition model; determining wearing object position features and wearing object classification features from the portrait contour position features based on the pre-trained wearing object recognition model; and combining and outputting the wearing object position features and the corresponding wearing object classification features as the recognition result.
In one possible implementation, the wearing object includes: certificates, cards and safety helmets.
In one possible implementation, producing the recognition result further includes: if the wearing object classification feature indicates that a wearing object is recognized, outputting confirmation identification information and its confidence value; and if the wearing object classification feature indicates that no wearing object is recognized, outputting denial identification information and its confidence value.
In one possible implementation, the method further includes: if the denial identification information is received, sending first alarm information.
In one possible implementation, the method further includes: if the confidence value of received confirmation identification information is smaller than the preset confirmation confidence threshold, sending out second alarm information; or, if the confidence value of received denial identification information is larger than the preset denial confidence threshold, sending out second alarm information.
In one possible implementation, the method further includes training the wearing object recognition model: acquiring picture frames containing a portrait and a wearing object, and picture frames containing a portrait but no wearing object, as training samples; training a basic recognition model according to the training samples and their pre-marked labels; calculating a loss function value for the wearing object classification feature recognition result from the labels output by the basic recognition model and the pre-marked labels, and adjusting the model parameters according to the loss function value; and repeating the parameter adjustment to determine the optimal parameters of the basic recognition model and obtain the wearing object recognition model.
In one possible implementation: for a training sample that is a picture frame containing both a portrait and a wearing object, the pre-marked labels are a portrait contour position feature label, a wearing object position feature label and a wearing object classification feature label; for a training sample that is a picture frame containing a portrait but no wearing object, the pre-marked label is a portrait contour position feature label.
In one possible implementation, the method further includes: if no portrait contour position feature is identified in the image frame, identification ends. This filters out a large number of useless segments containing no person and improves efficiency.
In a second aspect, an embodiment of the present invention provides a processing apparatus, where the apparatus includes:
the image acquisition module, used for acquiring image frames; the portrait recognition module, used for recognizing portrait contour position features from the image frames based on a pre-trained wearing object recognition model; the wearing object identification module, used for determining wearing object position features and wearing object classification features from the portrait contour position features based on the pre-trained wearing object recognition model; and the result output module, used for combining and outputting the wearing object classification features and the corresponding wearing object position features as the recognition result.
In a third aspect, an embodiment of the present invention provides a computing device, including: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the wearing object identification method provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium on which computer program instructions are stored; when executed by a processor, the computer program instructions implement the wearing object identification method provided by the embodiments of the present invention.
According to the wearing object identification method, device, equipment and computer storage medium of the embodiments of the invention, image-based identification reduces errors caused by visual fatigue from repetitive monitoring tasks. Recognition accuracy and efficiency are high, personnel management is facilitated, and the security of the office area is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a wearing article identification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a work card identification process provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a process for training a work card recognition model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an exemplary hardware architecture provided by an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In order to improve the wearing article detection efficiency and accuracy, an embodiment of the present invention provides a wearing article identification method, and first, the wearing article identification method provided in the embodiment of the present invention is described in detail below.
Fig. 1 is a schematic flow chart of a wearing object identification method according to an embodiment of the present invention. As shown in fig. 1, the execution subject of the method is a server, and the method may include S101-S104, as follows:
S101, an image frame is acquired.
In one embodiment, the image frame includes: surveillance video frames acquired at regular intervals, images shot at regular intervals, and screenshots captured from the surveillance video.
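As a minimal sketch of this acquisition step, the following Python snippet uses OpenCV (which a later embodiment names for image processing) to yield one frame from a surveillance source at a regular interval; the source URL and the interval are placeholder assumptions, not values from the patent.

```python
import cv2

def grab_frames(source="rtsp://camera.example/stream", interval_s=5.0):
    """Yield one frame from the surveillance source every interval_s seconds."""
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back when FPS is unreported
    step = max(1, int(fps * interval_s))      # frames to skip between captures
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                            # stream ended or connection dropped
            break
        if idx % step == 0:
            yield frame                       # BGR image frame for identification
        idx += 1
    cap.release()
```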
S102, recognizing portrait contour position features from the image frame based on the pre-trained wearing object recognition model.
In one embodiment, the image frame is input into a pre-trained wearing object recognition model, and whether the portrait exists in the image frame is detected.
That is, the possible position of a portrait in the image frame is detected and the portrait contour position features are identified; these features are then judged to determine whether a portrait exists in the image frame.
In one embodiment, the method further comprises: if no portrait contour position feature is identified in the image frame, identification ends. That is, if no portrait appears in the image frame, no further detection is needed; a large number of useless segments with no person are filtered out, improving efficiency.
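A sketch of this control flow, assuming a hypothetical `model` object whose `detect_portraits` and `detect_wearing` methods stand in for the two stages of the wearing object recognition model:

```python
def identify(frame, model):
    """Run the two-stage identification with the no-portrait early exit."""
    portraits = model.detect_portraits(frame)      # portrait contour position features
    if not portraits:
        return None                                # no portrait: identification ends
    results = []
    for person_box in portraits:
        # search for the wearing object only within the portrait contour range
        wear_box, wear_label, confidence = model.detect_wearing(frame, person_box)
        results.append({
            "person_box": person_box,
            "wear_box": wear_box,
            "label": wear_label,                   # e.g. "WearID" or "UnWearID"
            "confidence": confidence,
        })
    return results
```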
S103, determining the wearing object position features and wearing object classification features from the portrait contour position features based on the pre-trained wearing object recognition model.
In one embodiment, after a portrait is detected in the picture frame, the position contour information of the wearing object within the portrait contour range is detected based on the portrait contour position features, and the wearing object position features are determined.
In one embodiment, after the wearing object position features are determined, whether wearing object information exists within the wearing object position contour is judged.
In another embodiment, the wearing object includes: certificates, cards and safety helmets.
In one embodiment, the method further comprises training the wearing object recognition model.
The wearing object recognition model training includes: acquiring picture frames containing a portrait and a wearing object, and picture frames containing a portrait but no wearing object, as training samples; training a basic recognition model according to the training samples and their pre-marked labels; calculating a loss function value for the wearing object classification feature recognition result from the labels output by the basic recognition model and the pre-marked labels, and adjusting the model parameters according to the loss function value; and repeating the parameter adjustment to determine the optimal parameters of the basic recognition model and obtain the wearing object recognition model.
In one embodiment, the training samples include a first group and a second group of picture frames, each group containing a plurality of frames. Each frame in the first group contains a portrait and a wearing object; each frame in the second group contains only a portrait and no wearing object.
In one embodiment, pre-marking the training samples includes: for a picture frame containing both a portrait and a wearing object, labeling the portrait position and the wearing object position and adding confirmation identification information; for a picture frame containing only a portrait and no wearing object, labeling the portrait position and adding denial identification information.
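For illustration, pre-marked labels for the two sample groups might look like the records below; the boxes follow the later description of a rectangle defined by two diagonal coordinate points, while the field names and values are assumptions.

```python
# Group 1: picture frame containing a portrait and a wearing object
sample_with_wear = {
    "image": "frame_0001.jpg",
    "person_box": [120, 40, 380, 620],    # portrait position label (x1, y1, x2, y2)
    "wear_box":   [210, 260, 290, 360],   # wearing object position label
    "label": "WearID",                    # confirmation identification information
}

# Group 2: picture frame containing only a portrait, no wearing object
sample_without_wear = {
    "image": "frame_0002.jpg",
    "person_box": [100, 30, 360, 600],    # portrait position label only
    "label": "UnWearID",                  # denial identification information
}
```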
In one embodiment, position feature detection and classification feature judgment are performed on the image frame input to the basic recognition model, and the position feature detection result and the classification feature judgment result are output. Multilayer object features can be extracted from the convolutional layers; by combining object detection and recognition, the wearing object recognition model improves identification accuracy and robustness.
In one embodiment, a training sample is input into a basic recognition model, the basic recognition model outputs a label corresponding to the sample, a loss function value of the basic recognition model can be calculated based on the label output by the basic recognition model and a pre-marked label of the training sample, parameters of the basic recognition model are adjusted according to the loss function value, and the parameters of the basic recognition model are continuously updated so as to minimize the loss function value.
In one embodiment, the set of parameters that minimizes the loss function value constitutes the optimal parameters of the wearing object recognition model; once the optimal parameters are determined, training of the wearing object recognition model is complete.
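A minimal PyTorch-style sketch of this loop, under the assumption that the model follows the torchvision detection convention of returning a dictionary of losses in training mode; the data loader and hyperparameters are placeholders.

```python
import copy
import torch

def train_recognition_model(model, loader, epochs=10, lr=1e-4):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    best_loss, best_state = float("inf"), None
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            loss_dict = model(images, targets)   # losses vs. pre-marked labels
            loss = sum(loss_dict.values())       # loss function value
            optimizer.zero_grad()
            loss.backward()                      # adjust model parameters from the loss
            optimizer.step()
            if loss.item() < best_loss:          # track the optimal parameters
                best_loss = loss.item()
                best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)            # wearing object recognition model
    return model
```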
In one embodiment: when the training sample is a picture frame containing both a portrait and a wearing object, its pre-marked labels are a portrait contour position feature label, a wearing object position feature label and a wearing object classification feature label; when the training sample is a picture frame containing a portrait but no wearing object, its pre-marked label is a portrait contour position feature label.
S104, combining and outputting the wearing object classification features and the corresponding wearing object position features as the recognition result.
In one embodiment, producing the recognition result further comprises: if the wearing object classification feature indicates that a wearing object is recognized, outputting confirmation identification information and its confidence value; and if the wearing object classification feature indicates that no wearing object is recognized, outputting denial identification information and its confidence value.
In one embodiment, the method further comprises: if the denial identification information is received, sending first alarm information.
In one embodiment, the first alarm information indicates a person who is not wearing the target wearing object. When staff hear or see the first alarm information, they can quickly locate that person. This reduces recognition errors caused by visual fatigue from a single repetitive task.
In another embodiment, the method further comprises: if the confidence value of received confirmation identification information is smaller than the preset confirmation confidence threshold, sending out second alarm information; or, if the confidence value of received denial identification information is larger than the preset denial confidence threshold, sending out second alarm information.
In another embodiment, the second alarm information indicates that the wearing object identification result is uncertain. When staff hear or see the second alarm information, they can confirm by eye whether the portrait in the picture frame wears the target wearing object. Having staff reconfirm results whose confidence does not meet the standard prevents misjudgments that the wearing object recognition model might make and improves the precision of the recognition results.
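The alarm rules above can be condensed into a small decision function; the threshold values and the `send_alarm` channel are assumptions, and the comparisons mirror the scheme as stated.

```python
CONFIRM_THRESHOLD = 0.8   # preset confidence threshold of confirmation information
DENY_THRESHOLD = 0.8      # preset confidence threshold of denial information

def send_alarm(message):
    print(message)        # stand-in for the real alarm channel

def check_alarms(label, confidence):
    if label == "UnWearID":
        send_alarm("first alarm: target wearing object not worn")
        if confidence > DENY_THRESHOLD:           # per the scheme described above
            send_alarm("second alarm: please reconfirm the result by eye")
    elif label == "WearID" and confidence < CONFIRM_THRESHOLD:
        send_alarm("second alarm: please reconfirm the result by eye")
```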
Fig. 2 is a schematic diagram illustrating a work card identification process according to an embodiment of the present invention. As shown in fig. 2, the details are as follows:
When the wearing object is an employee work card, a video file or a video frame from the surveillance video is obtained, and the result is produced by a feed-forward pass of the wearing object recognition model.
First, a picture or video frame is acquired.
Next, whether an employee exists in the input picture frame is detected; the identified employee contour position information is given as a rectangle defined by the diagonal connecting two coordinate points.
And then, if the employee exists, whether the employee wears the work card is continuously detected.
If the employee is detected to wear the work card, the position information and the WearID are returned to serve as the identification, and the subsequent number is the confidence level that the employee wears the work card.
If the employee does not wear the work card, the position information and the UnWearID are returned as the identification, and the number behind the UnWearID is the confidence that the employee does not wear the work card.
If no employee is detected in the picture frame, detection ends.
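Put concretely, the returned identifications might be serialized as below; the exact field names are assumptions, but the content (two diagonal coordinate points, the WearID/UnWearID tag, and the trailing confidence) follows the description above.

```python
result_worn = {
    "person_box": [(132, 48), (371, 615)],  # two diagonal coordinate points
    "id": "WearID",
    "confidence": 0.93,   # confidence that the employee wears the work card
}

result_not_worn = {
    "person_box": [(90, 35), (344, 598)],
    "id": "UnWearID",
    "confidence": 0.88,   # confidence that the employee does not wear it
}
```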
In one embodiment, when the wearing article is a safety helmet, video frames in a video file or a monitoring video are obtained, and then a result is obtained through a feed-forward process of a safety helmet identification model.
The identification process detects whether a portrait exists in the input picture frame; the identified portrait contour position information is given as a rectangle defined by the diagonal connecting two coordinate points.
And if the portrait exists, continuously detecting whether the portrait wears the safety helmet or not.
And if the person is detected to Wear the safety helmet, returning the position information and the 'Wear' as the identification, wherein the number behind the 'Wear' is the confidence level of the person wearing the safety helmet.
If the portrait is detected not wearing the safety helmet, the position information and the UnWear are returned as the identification, and the number behind the UnWear is the confidence level of the portrait not wearing the safety helmet.
And if no portrait exists in the picture frame, finishing the detection.
In one embodiment, the data features of the object to be identified are decomposed into classification features and position features, and confidence values are calculated for the classification features, further determining the exact location of the object.
In one embodiment, the deep learning model is VGG16, a deep learning model based on a convolutional neural network, trained with a large amount of sample data.
Fig. 3 is a schematic diagram illustrating a process of training a work card recognition model according to an embodiment of the present invention. As shown in fig. 3, the following is detailed:
firstly, a training picture is collected, and a label is added to the training picture.
In one embodiment, a picture frame of a target portrait without a work card and a picture frame with the work card are obtained; adding a target portrait position identification label to a picture frame of a target portrait without wearing a work card; and adding a target portrait position identification label and a work card position identification label to a picture frame of the target portrait wearing the work card.
In one embodiment, a target portrait picture frame wearing a work card is used as a training sample and input into the work card recognition model, which outputs a label for the sample; the parameters of the work card recognition model are updated based on the output label and the pre-marked labels of the training sample, namely the target portrait position identification label and the wearing object position identification label. The wearing object position identification label is obtained by labeling the position of the target wearing object in the picture frame.
In one embodiment, the target portrait picture frame without a work card is output with its target portrait position identification label, and the target portrait picture frame wearing the work card is output with its target portrait position identification label and work card position identification label.
Then, basic parameters are obtained from the transfer learning model, and work card identification training is performed on the basic recognition model to obtain the work card recognition model.
In one embodiment, a target portrait picture frame without a work card is used as a training sample and input into a basic recognition model, and the basic recognition model outputs a label corresponding to the sample; and updating the parameters of the basic recognition model based on the labels output by the basic recognition model and the labeled labels of the training samples to obtain the work card recognition model.
In one embodiment, ImageNet training samples are input into the basic recognition model, which outputs labels for the samples; the basic recognition model parameters are updated based on the output labels and the marked labels of the ImageNet samples, and serve as supplementary parameters for the transfer learning model.
In one embodiment, the training process is mainly used to establish the work card recognition model; the work card identification algorithm is implemented by a deep convolutional neural network, and the network model is the Single Shot MultiBox Detector (SSD) target detection algorithm.
In one embodiment, the training data are a picture of an employee wearing a work card and a picture of the same employee without one; the employee positions and work card positions are labeled manually.
In one embodiment, the manual labels must mark the employee's position in the picture, the position of the work card, and so on.
In one embodiment, the training mode is transfer learning, the transfer model is a model generated by taking an image recognition database ImageNet as training data and VGG16 as a training network.
In one embodiment, to ensure that the learned features are not lost during the transfer learning process, the initial learning rate is adjusted to be relatively small.
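A sketch of that setup, assuming PyTorch/torchvision: the VGG16 backbone keeps its ImageNet-learned weights, early layers are frozen, and the remaining parameters train at a small initial learning rate. The layer split and the rate are assumptions for illustration.

```python
import torch
import torchvision

# VGG16 feature extractor pretrained on ImageNet (the transfer model)
backbone = torchvision.models.vgg16(weights="IMAGENET1K_V1").features

# freeze the early convolutional layers so their learned features are kept
for param in list(backbone.parameters())[:10]:
    param.requires_grad = False

# small initial learning rate so fine-tuning does not destroy learned features
optimizer = torch.optim.SGD(
    (p for p in backbone.parameters() if p.requires_grad),
    lr=1e-4,
    momentum=0.9,
)
```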
In one embodiment, image processing is performed using the Open Source Computer Vision Library (OpenCV), mainly for reading input images, adding identification tags, and outputting results.
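A minimal OpenCV sketch of those three uses; the file names, coordinates and tag text are placeholders.

```python
import cv2

img = cv2.imread("frame_0001.jpg")                          # reading of input image
cv2.rectangle(img, (132, 48), (371, 615), (0, 255, 0), 2)   # portrait bounding box
cv2.putText(img, "WearID 0.93", (132, 40),                  # adding identification tag
            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
cv2.imwrite("frame_0001_out.jpg", img)                      # result output
```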
In one embodiment, the convolutional neural network in the above scheme is a feedforward neural network whose artificial neurons respond to surrounding units within their receptive field, which makes it suitable for processing large-scale images; a deep learning model based on a convolutional neural network therefore processes images more accurately.
In one embodiment, an SSD (which learns locations and classes) based on VGG16 (which provides the backbone parameters) is trained as the network model, and the optimal parameters are obtained to determine the wearing object recognition model.
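torchvision happens to ship this exact pairing as `ssd300_vgg16`, so instantiating the network model could look as follows; the class count for the wearing-object categories is an assumption.

```python
import torchvision

# SSD detector with a VGG16 backbone supplying the backbone parameters
model = torchvision.models.detection.ssd300_vgg16(
    weights_backbone="IMAGENET1K_FEATURES",  # ImageNet-pretrained VGG16 features
    num_classes=3,  # assumed: background + portrait + wearing object
)
```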
Fig. 4 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus 400 includes modules 410 to 440, as follows:
an image acquisition module 410 for acquiring image frames.
And the portrait recognition module 420 is used for recognizing the portrait outline position characteristics from the image frames based on the pre-trained wearing object recognition model.
And the wearing object identification module 430 is used for determining the wearing object position characteristics and the wearing object classification characteristics according to the portrait contour position characteristics based on a pre-trained wearing object identification model.
The result output module 440 is used for combining and outputting the wearing object classification features and the corresponding wearing object position features as the recognition result.
In one embodiment, the portrait recognition module 420 is specifically configured to end identification if no portrait contour position feature is identified in the image frame, filtering out a large number of useless segments with no person and improving efficiency.
In another embodiment, the wearing object identification module 430 supports wearing objects including: certificates, cards and safety helmets.
In one embodiment, the wearing object identification module 430 is further used for wearing object recognition model training, which includes: acquiring picture frames containing a portrait and a wearing object, and picture frames containing a portrait but no wearing object, as training samples; training a basic recognition model according to the training samples and their pre-marked labels; calculating a loss function value for the wearing object classification feature recognition result from the labels output by the basic recognition model and the pre-marked labels, and adjusting the parameters of the basic recognition model according to the loss function value; and repeating the parameter adjustment to determine the optimal parameters of the basic recognition model and obtain the wearing object recognition model. Multilayer object features can be extracted from the convolutional layers; by combining object detection and recognition, the wearing object recognition model improves identification accuracy and robustness.
In one embodiment, the wearing object identification module 430 is further configured so that, for a training sample that is a picture frame containing both a portrait and a wearing object, the pre-marked labels are a portrait contour position feature label, a wearing object position feature label and a wearing object classification feature label; and for a training sample that is a picture frame containing a portrait but no wearing object, the pre-marked label is a portrait contour position feature label.
In one embodiment, the result output module 440 is further configured to output confirmation identification information and its confidence value if the wearing object classification feature indicates that a wearing object is recognized, and to output denial identification information and its confidence value if the wearing object classification feature indicates that no wearing object is recognized.
In one embodiment, the result output module 440 is further configured to send first alarm information if the denial identification information is received.
In another embodiment, the result output module 440 is further configured to send out second alarm information if the confidence value of received confirmation identification information is smaller than the preset confirmation confidence threshold, or if the confidence value of received denial identification information is larger than the preset denial confidence threshold.
Each unit of the device can implement the method shown in fig. 1 and achieve the corresponding technical effect, and for brevity, the description is omitted here.
Fig. 5 is a schematic diagram illustrating the hardware structure of a device implementing the wearing object identification method according to an embodiment of the present invention.
The processing device may include a processor 501 and a memory 502 storing computer program instructions.
The processor 501 may include a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 502 may include mass storage for data or instructions. By way of example, and not limitation, memory 502 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 502 may include removable or non-removable (or fixed) media, where appropriate. The memory 502 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 502 is non-volatile solid-state memory. In a particular embodiment, the memory 502 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 501 reads and executes the computer program instructions stored in the memory 502 to implement any one of the processing methods in the embodiments shown in fig. 1 to 3.
In one example, the processing device may also include a communication interface 503 and a bus 510. As shown in fig. 5, the processor 501, the memory 502, and the communication interface 503 are connected via a bus 510 to complete communication therebetween.
The communication interface 503 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present invention.
Bus 510 includes hardware, software, or both to couple the components of the wearing object identification device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 510 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated.
The processing device may execute the wearing article identification method in the embodiment of the present invention, thereby implementing the wearing article identification method and apparatus described in conjunction with fig. 1 and 4.
In addition, in combination with the wearing object identification method in the above embodiments, the embodiments of the present invention may be implemented by providing a computer storage medium. The computer storage medium has computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the wearing object identification methods in the above embodiments.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams can be implemented in software, and the elements of the present invention are programs or code segments used to perform desired tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (11)

1. A wearing article identification method, characterized by comprising:
acquiring an image frame;
recognizing portrait contour position features from the image frames based on a pre-trained wearing object recognition model;
determining wearing object position characteristics and wearing object classification characteristics according to the portrait contour position characteristics based on a pre-trained wearing object recognition model;
and combining and outputting the wearing object position characteristics and the corresponding wearing object classification characteristics as recognition results.
2. The method of claim 1, wherein the wearing object comprises: certificates, cards and safety helmets.
3. The method of claim 1, wherein the recognition result further comprises:
if the wearing object classification characteristic is that a wearing object is recognized, outputting confirmation identification information and a confidence value of the confirmation identification information;
and if the wearing object classification characteristic is that the wearing object is not recognized, outputting denial identification information and a confidence value of the denial identification information.
4. The method of claim 3, further comprising: and if the denial identification information is received, sending first alarm information.
5. The method of claim 3, further comprising:
if the received confidence value of the confirmation identification information is smaller than the confidence threshold value of the preset confirmation identification information, sending out second alarm information; or,
and if the confidence value of the received denial identification information is larger than the confidence threshold value of the preset denial identification information, sending out second alarm information.
6. The method of claim 1, wherein the wearing object recognition model training comprises:
acquiring picture frames containing a portrait and a wearing object, and picture frames containing a portrait but no wearing object, as training samples;
training a basic recognition model according to the training samples and the labels of the pre-marked training samples;
calculating a loss function value of a wearing object classification feature recognition result based on the label corresponding to the training sample output by the basic recognition model and the label of the pre-marked training sample, and adjusting a model parameter according to the loss function value;
and repeating the model parameter adjustment operation to determine the optimal parameters of the basic recognition model, obtaining the wearing object recognition model.
7. The method of claim 6, further comprising:
the training samples are the picture frames containing the portrait and the wearing object, and the labels of the pre-marked training samples are portrait contour position feature labels, wearing object position feature labels and wearing object classification feature labels;
the training sample is the picture frame which contains the portrait and does not contain the wearing object, and the label of the pre-marked training sample is a portrait contour position feature label.
8. The method of claim 1, further comprising: and if the image frame does not identify the portrait contour position feature, the identification is finished.
9. A wearing article identification device, comprising:
the image acquisition module is used for acquiring image frames;
the portrait recognition module is used for recognizing the portrait contour position features from the image frames based on a pre-trained wearing object recognition model;
the wearing object identification module is used for determining wearing object position characteristics and wearing object classification characteristics according to the portrait contour position characteristics based on a pre-trained wearing object identification model;
and the result output module is used for combining and outputting the wearing object classification features and the corresponding wearing object position features as recognition results.
10. A computing device, the device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the wearing object identification method of any one of claims 1-8.
11. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the wearing object identification method of any one of claims 1-8.
CN201910733545.6A 2019-08-09 2019-08-09 Wearing object identification method, device, equipment and storage medium Pending CN112347824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910733545.6A CN112347824A (en) 2019-08-09 2019-08-09 Wearing object identification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910733545.6A CN112347824A (en) 2019-08-09 2019-08-09 Wearing object identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112347824A true CN112347824A (en) 2021-02-09

Family

ID=74367635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910733545.6A Pending CN112347824A (en) 2019-08-09 2019-08-09 Wearing object identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112347824A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239739A (en) * 2021-04-19 2021-08-10 深圳市安思疆科技有限公司 Method and device for identifying wearing article

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127117A (en) * 2016-06-16 2016-11-16 哈尔滨工程大学 Based on binocular vision quick high robust identification, location automatically follow luggage case
CN108319926A (en) * 2018-02-12 2018-07-24 安徽金禾软件股份有限公司 A kind of the safety cap wearing detecting system and detection method of building-site
CN109977977A (en) * 2017-12-28 2019-07-05 中移信息技术有限公司 A kind of method and corresponding intrument identifying potential user
CN110046574A (en) * 2019-04-15 2019-07-23 北京易达图灵科技有限公司 Safety cap based on deep learning wears recognition methods and equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127117A (en) * 2016-06-16 2016-11-16 哈尔滨工程大学 Based on binocular vision quick high robust identification, location automatically follow luggage case
CN109977977A (en) * 2017-12-28 2019-07-05 中移信息技术有限公司 A kind of method and corresponding intrument identifying potential user
CN108319926A (en) * 2018-02-12 2018-07-24 安徽金禾软件股份有限公司 A kind of the safety cap wearing detecting system and detection method of building-site
CN110046574A (en) * 2019-04-15 2019-07-23 北京易达图灵科技有限公司 Safety cap based on deep learning wears recognition methods and equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239739A (en) * 2021-04-19 2021-08-10 深圳市安思疆科技有限公司 Method and device for identifying wearing article

Similar Documents

Publication Publication Date Title
CN109508688B (en) Skeleton-based behavior detection method, terminal equipment and computer storage medium
CN112396658B (en) Indoor personnel positioning method and system based on video
CN110419048A (en) System for identifying defined object
CN110866515B (en) Method and device for identifying behaviors of objects in factory building and electronic equipment
CN111814775B (en) Target object abnormal behavior identification method, device, terminal and storage medium
CN108197575A (en) A kind of abnormal behaviour recognition methods detected based on target detection and bone point and device
CN111062303A (en) Image processing method, system and computer storage medium
CN109724993A (en) Detection method, device and the storage medium of the degree of image recognition apparatus
CN110619324A (en) Pedestrian and safety helmet detection method, device and system
CN113269142A (en) Method for identifying sleeping behaviors of person on duty in field of inspection
CN112163497B (en) Construction site accident prediction method and device based on image recognition
CN111652046A (en) Safe wearing detection method, equipment and system based on deep learning
US20230186634A1 (en) Vision-based monitoring of site safety compliance based on worker re-identification and personal protective equipment classification
CN110737201A (en) monitoring method, device, storage medium and air conditioner
CN112528860A (en) Safety tool management method and system based on image recognition
CN112183219A (en) Public safety video monitoring method and system based on face recognition
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN114155492A (en) High-altitude operation safety belt hanging rope high-hanging low-hanging use identification method and device and electronic equipment
CN112347824A (en) Wearing object identification method, device, equipment and storage medium
CN112949785B (en) Object detection method, device, equipment and computer storage medium
CN113485277A (en) Intelligent power plant video identification monitoring management system and method
CN116002480A (en) Automatic detection method and system for accidental falling of passengers in elevator car
CN113778091A (en) Method for inspecting equipment of wind power plant booster station
CN113486743A (en) Fatigue driving identification method and device
CN111489123A (en) Violation sorting tracing method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination