CN111461017B - High-precision recognition method for restaurant back-kitchen work clothes at urban scale - Google Patents

High-precision recognition method for restaurant back-kitchen work clothes at urban scale

Info

Publication number
CN111461017B
CN111461017B
Authority
CN
China
Prior art keywords
human body
rectangular frame
type
color
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010250479.XA
Other languages
Chinese (zh)
Other versions
CN111461017A (en)
Inventor
吴晓晖
王书平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shizai Technology Co ltd
Original Assignee
Hangzhou Shizai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shizai Technology Co ltd filed Critical Hangzhou Shizai Technology Co ltd
Priority to CN202010250479.XA priority Critical patent/CN111461017B/en
Publication of CN111461017A publication Critical patent/CN111461017A/en
Application granted granted Critical
Publication of CN111461017B publication Critical patent/CN111461017B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/12 Hotels or restaurants
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Abstract

A high-precision recognition method for restaurant back-kitchen work clothes at urban scale belongs to the technical field of image recognition and comprises the following steps: step one, establishing a dedicated data set; step two, training the human body key point extraction network OpenPose and extracting the human body key points in the image; step three, calculating the images of the human body, upper body and lower body circumscribed rectangular frames according to the key points obtained in step two; step four, inputting the image of the human body circumscribed rectangular frame region obtained in step three into a style classification network for type recognition, and comparing the result with the corresponding user database; step five, feeding the comparison result back to the corresponding store. The invention greatly improves the accuracy and robustness of the algorithm and handles the difficulties of the kitchen scene well; the cascaded network structure ensures a high running speed and relatively small resource cost, and can meet the requirements of on-site deployment.

Description

High-precision recognition method for restaurant back-kitchen work clothes at urban scale
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a high-precision recognition method for restaurant back-kitchen work clothes at urban scale.
Background
Sanitation is a fundamental condition for a restaurant's survival. A restaurant that neglects sanitation can harm the health of its customers and, potentially, of society as a whole; no restaurant operator can afford to overlook its severity and importance. A sanitary environment is the primary condition customers require when dining out, and providing food and beverages that meet sanitary standards is an important responsibility of every restaurant. For the restaurant itself, quality is its life, and hygiene is the most basic condition of quality. The national food and health supervision departments impose very strict requirements on the supervision of restaurant food sanitation; the food supervision administration implements programs such as "bright kitchen, transparent stove", which require restaurants to install monitoring cameras in their back kitchens and connect them to the administration's monitoring system, so that kitchen workflows, kitchen personnel behavior, dress, the kitchen environment and the like can be monitored uniformly. However, manually monitoring and screening the behavior and dress of kitchen personnel is very slow and time-consuming, and the need to identify the dress of kitchen personnel automatically by computer vision is becoming more and more urgent.
Many computer vision methods have been applied to work clothes detection, but an urban food safety supervision platform must use algorithms to automatically analyze tens of thousands of monitoring camera videos across an entire city (covering all kinds of back kitchens and school canteens) and find illegal operations and hidden food safety dangers. Compared with other environments, the kitchen scene presents the following difficulties: 1. the colors and styles of kitchen work clothes are particularly varied, and they differ from one catering business to another; 2. the scene is complex: illumination in a kitchen changes frequently, and kitchen operations can seriously degrade the illumination conditions and thus the image quality; 3. occlusion in a kitchen is very severe: the space is narrow, so people are frequently blocked by objects, equipment and other people. The complex and uncontrollable kitchen scenes of a whole city place high demands on the robustness of the algorithm, and existing algorithms have obvious shortcomings: 1. methods that judge the color and type of work clothes directly with a single network are simple and crude, suitable only for a single close-range scene, and cause a large number of false and missed judgments in an urban-scale environment because of the three problems above; 2. methods that recognize work clothes with an image segmentation algorithm are extremely sensitive to illumination and cannot cope with environments with complex lighting, and their extremely high resource consumption and low running speed impose a great cost on actual deployment.
Disclosure of Invention
The invention aims to provide a high-precision recognition method for restaurant back-kitchen work clothes at urban scale, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A high-precision recognition method for restaurant back-kitchen work clothes at urban scale comprises the following steps:
step one, establishing a dedicated data set;
step two, training the human body key point extraction network OpenPose and extracting the human body key points in the image;
step three, calculating the images of the human body circumscribed rectangular frame, the upper body circumscribed rectangular frame and the lower body circumscribed rectangular frame according to the key points obtained in step two;
step four, inputting the image of the human body circumscribed rectangular frame region obtained in step three into a style classification network for type recognition to obtain the work clothes type, inputting the images of the upper body and lower body circumscribed rectangular frames into a color classification network for recognition if the clothing type is the coat-and-trousers style, and finally comparing the work clothes type and color with the corresponding user database;
step five, feeding back the comparison result to the corresponding store.
Preferably, the dedicated data set in step one contains more than one million human targets in total.
Preferably, the specific content of the first step is as follows:
(1) The work clothes color data set DataSet_color comprises 2 major classes, coat color and lower garment color, where each class covers white, red, blue, black, yellow, gray, green, purple, and non-solid colors of any pattern; the work clothes style data set DataSet_style comprises clothes styles and apron styles, where the apron styles comprise half-body, full-bag, shoulder-hanging, neck-hanging and no apron, and the clothes styles comprise coat-and-trousers, overcoat, one-piece dress, coat-and-skirt and coat-and-shorts;
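For concreteness, the two label spaces just described can be written down as plain data structures. A minimal Python sketch follows; the identifier names and label strings are illustrative assumptions, since the patent does not disclose the literal annotations used in DataSet_color and DataSet_style:

```python
# Label spaces implied by DataSet_color and DataSet_style.
# The string labels are illustrative, not the patent's actual annotations.
COLORS = ["white", "red", "blue", "black", "yellow",
          "gray", "green", "purple", "non_solid"]  # shared by coat and lower garment

CLOTHES_STYLES = ["coat_trousers", "overcoat", "one_piece_dress",
                  "coat_skirt", "coat_shorts"]

APRON_STYLES = ["half_body", "full_bag", "shoulder_hanging",
                "neck_hanging", "no_apron"]
```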
(2) The two data sets are used as the network training set to train the recognition networks for work clothes recognition. The work clothes style data set DataSet_style is used to train the work clothes style classification network Net_style, which selects resnet50 as the feature extraction network and is then trained with a transfer learning method under the darknet framework; specifically, starting from an initial learning rate l_r, the learning rate is updated with the steps policy, the training data are augmented with random cropping, exposure, rotation, mirroring and similar methods, and testing shows that the accuracy of the network model is highest at iteration iters = 260000. Similarly, the work clothes color data set DataSet_color is used to train the work clothes color classification network Net_color, which selects resnet18 as the feature extraction network and is trained with a transfer learning method under the darknet framework; the initial learning rate is set to l_r, the training data are augmented with random cropping, exposure, rotation, mirroring and similar methods, and testing shows that the accuracy of the network model is highest at iteration iters = 180000.
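The patent trains both classifiers under the darknet framework, whose configuration files are not reproduced here. The following is a functionally analogous PyTorch sketch of the transfer-learning setup for Net_style; the learning rate, step milestones, augmentation strengths and output dimension are placeholders, since the patent elides the concrete values:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Transfer learning: start from ImageNet weights and replace the classifier head.
# NUM_STYLES = 10 assumes 5 clothes styles + 5 apron styles (not stated in the patent).
NUM_STYLES = 10
net_style = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
net_style.fc = nn.Linear(net_style.fc.in_features, NUM_STYLES)

# Augmentations analogous to the random cropping / exposure / rotation / mirroring
# enhancement described above.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),        # random cropping
    transforms.ColorJitter(brightness=0.4),   # "exposure"
    transforms.RandomRotation(15),            # rotation
    transforms.RandomHorizontalFlip(),        # mirroring
    transforms.ToTensor(),
])

# "steps" learning-rate policy: drop the rate at fixed iteration marks.
optimizer = torch.optim.SGD(net_style.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150000, 220000], gamma=0.1)
```

Net_color would be set up the same way, with resnet18 as the backbone and the nine color classes as outputs.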
Preferably, the specific content of the second step is as follows:
S1: first, kitchen samples are added to retrain the human body key point extraction network OpenPose; the network is then used to extract the human body key point coordinate matrix:
keypoints_n = [p_n0, p_n1, ..., p_n24]
where n is the human body ID, 25 key points per human body are extracted in the invention, p_n0 is the first key point of the nth human body, and p_n24 is the key point with index 24 of the nth human body, with p_n24 = [x y], where x and y are the abscissa and ordinate of the coordinate point.
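A minimal sketch of extracting this key point matrix with the OpenPose Python bindings follows; the model folder and image path are placeholders, and the exact binding calls vary between OpenPose builds:

```python
import cv2
import pyopenpose as op  # OpenPose Python bindings; import path depends on the build

op_wrapper = op.WrapperPython()
op_wrapper.configure({"model_folder": "openpose/models/"})  # placeholder path
op_wrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("kitchen_frame.jpg")          # placeholder image
op_wrapper.emplaceAndPop(op.VectorDatum([datum]))            # older builds accept a plain list

# poseKeypoints has shape (num_people, 25, 3): x, y and a confidence per key point.
# Row n holds p_n0 .. p_n24 for human body ID n.
keypoints = datum.poseKeypoints
```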
Preferably, the human body circumscribed rectangular frame is calculated according to the key points:
rect_n = [x_n y_n w_n h_n]
where n is the human body ID, x_n is the abscissa of the upper-left point of the nth human body's circumscribed rectangular frame, y_n is the ordinate of that upper-left point, w_n is the width of the frame, and h_n is its height; the upper body circumscribed rectangular frame and the lower body circumscribed rectangular frame take the same form.
The human body circumscribed rectangular frame is computed by traversing keypoints_n to find the minimum abscissa x_min, the maximum abscissa x_max, the minimum ordinate y_min and the maximum ordinate y_max over all key points, from which the coordinates of the upper-left and lower-right points of the frame are obtained:
x_leftup = x_min, y_leftup = y_min
x_rightbottom = x_max, y_rightbottom = y_max
and then:
x_n = x_leftup
y_n = y_leftup
w_n = x_rightbottom - x_leftup
h_n = y_rightbottom - y_leftup
Similarly, the upper body circumscribed rectangular frame is computed by traversing keypoints_n to find the coordinates x_neck, y_neck of the neck key point and the coordinates x_midhip, y_midhip of the mid-hip key point, and then:
x_n = x_leftup
y_n = y_neck
w_n = x_rightbottom - x_leftup
h_n = y_midhip - y_neck
The lower body circumscribed rectangular frame is computed as:
x_n = x_leftup, y_n = y_midhip, w_n = x_rightbottom - x_leftup, h_n = y_rightbottom - y_midhip
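The three frames follow directly from the key point matrix. A minimal NumPy sketch, assuming OpenPose's BODY_25 ordering in which index 1 is the neck and index 8 is the mid-hip (the patent does not state its index assignment):

```python
import numpy as np

NECK, MIDHIP = 1, 8  # BODY_25 indices; an assumption about the key point ordering

def body_boxes(person: np.ndarray):
    """person: (25, 2) array of (x, y) key points for one human body ID.
    Returns (x, y, w, h) for the human body, upper body and lower body frames."""
    x_min, y_min = person.min(axis=0)   # x_leftup, y_leftup
    x_max, y_max = person.max(axis=0)   # x_rightbottom, y_rightbottom
    y_neck = person[NECK, 1]
    y_midhip = person[MIDHIP, 1]

    full = (x_min, y_min, x_max - x_min, y_max - y_min)
    upper = (x_min, y_neck, x_max - x_min, y_midhip - y_neck)
    lower = (x_min, y_midhip, x_max - x_min, y_max - y_midhip)
    return full, upper, lower
```

In practice, key points OpenPose failed to detect (reported as (0, 0) with zero confidence) must be excluded before taking the minima, otherwise the frames collapse toward the origin.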
preferably, the specific steps of the fourth step are as follows:
SS1: if the obtained clothing type t_n is the coat-and-trousers style, the images of the upper body and lower body circumscribed rectangular frame regions are respectively cut out of the input image and input into the color classification network for color recognition, giving a matrix of coat and trouser colors:
c_n = [c_upn c_downn]
where c_upn is the coat color and c_downn is the trouser color.
SS2: the coat color, trouser color, clothes style and apron style obtained from the style classification network and the color classification network are summarized into a work clothes information matrix [t_n c_n], where t_n is the clothing type and c_n is the clothing color; the information matrix is compared against the store database to determine whether the rules are violated.
SS3: if the obtained clothing type t_n is an overcoat, one-piece dress or apron style, the garment type is directly compared against the store database without color recognition.
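Putting SS1-SS3 together, a minimal sketch of the cascade decision; net_style, net_color, crop and store_rules are placeholders (assumptions) for the two trained classifiers, the region cropping and the per-store dress rules:

```python
def crop(frame, box):
    """Cut an (x, y, w, h) rectangle out of an image array."""
    x, y, w, h = box
    return frame[int(y):int(y + h), int(x):int(x + w)]

def check_workwear(frame, full_box, upper_box, lower_box,
                   net_style, net_color, store_rules):
    """Return True when the detected work clothes satisfy the store's rules.
    All callables and the rule set are placeholders, not the patent's API."""
    t_n = net_style(crop(frame, full_box))           # clothing / apron style

    if t_n == "coat_trousers":                       # SS1: color check needed
        c_up = net_color(crop(frame, upper_box))     # coat color
        c_down = net_color(crop(frame, lower_box))   # trouser color
        info = (t_n, c_up, c_down)                   # SS2: info matrix [t_n, c_n]
    else:                                            # SS3: style alone suffices
        info = (t_n, None, None)

    return info in store_rules                       # compare with store database
```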
Compared with the prior art, the invention has the following beneficial effects: the invention establishes a huge dedicated data set, which guarantees the accuracy of the network models; judging by cascading the human body key point network with two classification networks greatly improves the accuracy and robustness of the algorithm and handles the difficulties of the kitchen scene well; the human body key point network solves the severe occlusion problem of the kitchen scene and accurately segments the upper body and lower body boundaries, which lowers the difficulty for the subsequent classification networks; the two classification networks are responsible for recognizing the dress type and the dress color respectively, and combining color and type in the decision improves accuracy; the cascaded network structure has a high running speed and relatively small resource cost, and can meet the requirements of on-site deployment.
Drawings
FIG. 1 is a schematic diagram of an architecture of the present invention;
Detailed Description
The embodiments of the present invention are described below with reference to the accompanying drawings; evidently, the described embodiments are only some, not all, of the embodiments of the present invention.
The high-precision recognition method for restaurant back-kitchen work clothes at urban scale shown in fig. 1 comprises a human body key point network, a first classification network and a second classification network; the specific recognition steps are as follows:
step one: creating a huge private data set comprising a data set of the coverall color data set DataSet color And a working garment style DataSet style Data set DataSet color And DataSet style The total number of human targets exceeds 100 ten thousand, and the accuracy of the working clothes recognition model and the detection model is improved.
(1) The work clothes color data set DataSet_color comprises 2 major classes, coat color and lower garment color, where each class covers white, red, blue, black, yellow, gray, green, purple, and non-solid colors of any pattern; the work clothes style data set DataSet_style comprises clothes styles and apron styles, where the apron styles comprise half-body, full-bag, shoulder-hanging, neck-hanging and no apron, and the clothes styles comprise coat-and-trousers, overcoat, one-piece dress, coat-and-skirt and coat-and-shorts; it is worth mentioning that if the work clothes style is an overcoat, a one-piece dress or an apron, no color recognition is needed.
(2) The two data sets are used as the network training set to train the recognition networks for work clothes recognition. The work clothes style data set DataSet_style is used to train the work clothes style classification network Net_style, which selects resnet50 as the feature extraction network and is then trained with a transfer learning method under the darknet framework; specifically, starting from an initial learning rate l_r, the learning rate is updated with the steps policy, the training data are augmented with random cropping, exposure, rotation, mirroring and similar methods, and testing shows that the accuracy of the network model is highest at iteration iters = 260000. Similarly, the work clothes color data set DataSet_color is used to train the work clothes color classification network Net_color, which selects resnet18 as the feature extraction network and is trained with a transfer learning method under the darknet framework; the initial learning rate is set to l_r, the training data are augmented with random cropping, exposure, rotation, mirroring and similar methods, and testing shows that the accuracy of the network model is highest at iteration iters = 180000.
Step two: retraining the human body key point extraction network OpenPose and extracting the human body key points in the image. The specific contents are as follows:
S1: first, kitchen samples are added to retrain the human body key point extraction network OpenPose; the network is then used to extract the human body key point coordinate matrix:
keypoints_n = [p_n0, p_n1, ..., p_n24]
where n is the human body ID, 25 key points per human body are extracted in the invention, p_n0 is the first key point of the nth human body, and p_n24 is the key point with index 24 of the nth human body, with p_n24 = [x y], where x and y are the abscissa and ordinate of the coordinate point.
S2: the human body circumscribed rectangular frame and the upper body and lower body circumscribed rectangular frames are calculated according to the key points. The human body circumscribed rectangular frame is calculated as:
rect_n = [x_n y_n w_n h_n]
where n is the human body ID, x_n is the abscissa of the upper-left point of the nth human body's circumscribed rectangular frame, y_n is the ordinate of that upper-left point, w_n is the width of the frame, and h_n is its height; the upper body and lower body circumscribed rectangular frames take the same form.
The human body circumscribed rectangular frame is computed by traversing keypoints_n to find the minimum abscissa x_min, the maximum abscissa x_max, the minimum ordinate y_min and the maximum ordinate y_max over all key points, from which the upper-left and lower-right points of the frame follow: x_leftup = x_min, y_leftup = y_min; x_rightbottom = x_max, y_rightbottom = y_max; and further x_n = x_leftup, y_n = y_leftup, w_n = x_rightbottom - x_leftup, h_n = y_rightbottom - y_leftup. Similarly, the upper body circumscribed rectangular frame is computed by traversing keypoints_n to find the coordinates x_neck, y_neck of the neck key point and x_midhip, y_midhip of the mid-hip key point, and further x_n = x_leftup, y_n = y_neck, w_n = x_rightbottom - x_leftup, h_n = y_midhip - y_neck; the lower body circumscribed rectangular frame is computed as x_n = x_leftup, y_n = y_midhip, w_n = x_rightbottom - x_leftup, h_n = y_rightbottom - y_midhip.
Step three: the image of the human body circumscribed rectangular frame region cut out in step two is input into the style classification network for type recognition, giving the work clothes type matrix [t_n], where t_n is the garment type.
SS1: if the obtained clothing type t_n is the coat-and-trousers style, the images of the upper body and lower body circumscribed rectangular frame regions are respectively cut out of the input image and input into the color classification network for color recognition, giving a matrix of coat and trouser colors:
c_n = [c_upn c_downn]
where c_upn is the coat color and c_downn is the trouser color.
SS2: the coat color, trouser color, clothes style and apron style obtained from the style classification network and the color classification network are summarized into a work clothes information matrix [t_n c_n], where t_n is the clothing type and c_n is the clothing color; the information matrix is compared against the store database to determine whether the rules are violated.
SS3: if the obtained clothing type t_n is an overcoat, one-piece dress or apron style, the garment type is directly compared against the store database without color recognition.
Step four: the comparison result is fed back to the corresponding store.
In the description of the present invention, it should be understood that the terms "center," "lateral," "upper," "lower," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, unless otherwise indicated, the meaning of "a number" is two or more. In addition, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion.
The invention has been described in terms of embodiments, and the device can be modified and improved without departing from the principles of the invention. It should be noted that all technical solutions obtained by equivalent substitution or equivalent transformation fall within the protection scope of the present invention.

Claims (5)

1. A high-precision recognition method for restaurant back-kitchen work clothes at urban scale, characterized by comprising the following steps: step one, establishing a dedicated data set; step two, training a human body key point extraction network and extracting the human body key points in the image; step three, calculating the images of the human body, upper body and lower body circumscribed rectangular frames according to the key points obtained in step two; step four, inputting the image of the human body circumscribed rectangular frame region obtained in step three into a style classification network for type recognition to obtain the work clothes type, inputting the images of the upper body and lower body circumscribed rectangular frames into a color classification network for recognition if the clothing type is the coat-and-trousers style, and finally comparing the work clothes type and color with the corresponding user database; step five, feeding back the comparison result to the corresponding store; wherein the human body circumscribed rectangular frame rect_n = [x_n y_n w_n h_n] is calculated according to the key points keypoints_n, where n is the human body ID, x_n is the abscissa of the upper-left point of the nth human body's circumscribed rectangular frame, y_n is the ordinate of that upper-left point, w_n is the width of the frame, and h_n is its height; the upper body circumscribed rectangular frame and the lower body circumscribed rectangular frame take the same form; the human body circumscribed rectangular frame is computed by traversing keypoints_n to find the minimum abscissa x_min, the maximum abscissa x_max, the minimum ordinate y_min and the maximum ordinate y_max, from which the coordinates of the upper-left and lower-right points of the frame are obtained: x_leftup = x_min, y_leftup = y_min; x_rightbottom = x_max, y_rightbottom = y_max; and further x_n = x_leftup, y_n = y_leftup, w_n = x_rightbottom - x_leftup, h_n = y_rightbottom - y_leftup; similarly, the upper body circumscribed rectangular frame is computed by traversing keypoints_n to find the abscissa x_neck and ordinate y_neck of the neck key point and the abscissa x_midhip and ordinate y_midhip of the mid-hip key point, and further x_n = x_leftup, y_n = y_neck, w_n = x_rightbottom - x_leftup, h_n = y_midhip - y_neck; the lower body circumscribed rectangular frame is computed as x_n = x_leftup, y_n = y_midhip, w_n = x_rightbottom - x_leftup, h_n = y_rightbottom - y_midhip.
2. The high-precision recognition method for restaurant back-kitchen work clothes at urban scale according to claim 1, characterized in that: the dedicated data set in step one contains more than one million human targets in total.
3. The high-precision recognition method for restaurant back-kitchen work clothes at urban scale according to claim 1, characterized in that the specific content of step one is as follows: (1) the work clothes color data set DataSet_color comprises 2 major classes, coat color and lower garment color, where each class covers white, red, blue, black, yellow, gray, green, purple, and non-solid colors of any pattern; the work clothes style data set DataSet_style comprises clothes styles and apron styles, where the apron styles comprise half-body, full-bag, shoulder-hanging, neck-hanging and no apron, and the clothes styles comprise coat-and-trousers, overcoat, one-piece dress, coat-and-skirt and coat-and-shorts; (2) the two data sets are used as the network training set to train the recognition networks for work clothes recognition; the work clothes style data set DataSet_style is used to train the work clothes style classification network Net_style, which selects resnet50 as the feature extraction network and is then trained with a transfer learning method under the darknet framework; specifically, starting from an initial learning rate l_r, the learning rate is updated with the steps policy, the training data are augmented with random cropping, exposure, rotation, mirroring and similar methods, and the accuracy of the network model is highest at iteration iters = 260000; similarly, the work clothes color data set DataSet_color is used to train the work clothes color classification network Net_color, which selects resnet18 as the feature extraction network and is trained with a transfer learning method under the darknet framework, with the initial learning rate set to l_r, the training data augmented with random cropping, exposure, rotation, mirroring and similar methods, and the accuracy of the network model highest at iteration iters = 180000.
4. The high-precision recognition method for restaurant back-kitchen work clothes at urban scale according to claim 1, characterized in that the specific content of step two is as follows: S1: first, kitchen samples are added to retrain the human body key point extraction network OpenPose; the network is then used to extract the human body key point coordinate matrix keypoints_n = [p_n0, p_n1, ..., p_n24], where n is the human body ID, 25 key points per human body are extracted, p_n0 is the first key point of the nth human body, and p_n24 is the key point with index 24 of the nth human body, with p_n24 = [x y], where x and y are the abscissa and ordinate of the coordinate point.
5. The high-precision recognition method for restaurant back-kitchen work clothes at urban scale according to claim 1, characterized in that the specific steps of step four are as follows: SS1: if the obtained clothing type t_n is the coat-and-trousers style, the images of the upper body and lower body circumscribed rectangular frame regions are respectively cut out of the input image and input into the color classification network for color recognition, giving a matrix of coat and trouser colors c_n = [c_upn c_downn], where c_upn is the coat color and c_downn is the trouser color; SS2: the coat color, trouser color, clothes style and apron style obtained from the style classification network and the color classification network are summarized into a work clothes information matrix [t_n c_n], where t_n is the clothing type and c_n is the clothing color, and the information matrix is compared against the store database to determine whether the rules are violated; SS3: if the obtained clothing type t_n is an overcoat, one-piece dress or apron style, the garment type is directly compared against the store database without color recognition.
CN202010250479.XA 2020-04-01 2020-04-01 High-precision recognition method for restaurant back-kitchen work clothes at urban scale Active CN111461017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010250479.XA CN111461017B (en) High-precision recognition method for restaurant back-kitchen work clothes at urban scale

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010250479.XA CN111461017B (en) High-precision recognition method for restaurant back-kitchen work clothes at urban scale

Publications (2)

Publication Number Publication Date
CN111461017A CN111461017A (en) 2020-07-28
CN111461017B (en) 2024-01-19

Family

ID=71685783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010250479.XA Active CN111461017B (en) High-precision recognition method for restaurant back-kitchen work clothes at urban scale

Country Status (1)

Country Link
CN (1) CN111461017B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860471B (en) * 2020-09-21 2021-01-15 之江实验室 Work clothes wearing identification method and system based on feature retrieval
CN112183472A (en) * 2020-10-28 2021-01-05 西安交通大学 Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632368A (en) * 2013-11-29 2014-03-12 苏州有色金属研究院有限公司 Metal plate strip surface image defect merging method
KR20140114741A (en) * 2013-03-19 2014-09-29 삼성전자주식회사 Apparatus and method for human pose estimation
CN108229288A (en) * 2017-06-23 2018-06-29 北京市商汤科技开发有限公司 Neural metwork training and clothes method for detecting color, device, storage medium, electronic equipment
CN108805135A (en) * 2018-06-14 2018-11-13 深圳码隆科技有限公司 A kind of garment dimension data identification method, device and user terminal
WO2018220824A1 (en) * 2017-06-02 2018-12-06 三菱電機株式会社 Image discrimination device
CN109766868A (en) * 2019-01-23 2019-05-17 哈尔滨工业大学 A kind of real scene based on body critical point detection blocks pedestrian detection network and its detection method
US10321728B1 (en) * 2018-04-20 2019-06-18 Bodygram, Inc. Systems and methods for full body measurements extraction
CN110263605A (en) * 2018-07-18 2019-09-20 桂林远望智能通信科技有限公司 Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation
CN110334675A (en) * 2019-07-11 2019-10-15 山东大学 A kind of pedestrian's recognition methods again based on skeleton key point segmentation and column convolution
CN110493512A (en) * 2019-07-31 2019-11-22 幻想动力(上海)文化传播有限公司 Photography composition method, apparatus, photographic equipment, electronic device and storage medium
CN110555393A (en) * 2019-08-16 2019-12-10 北京慧辰资道资讯股份有限公司 method and device for analyzing pedestrian wearing characteristics from video data
CN110610499A (en) * 2019-08-29 2019-12-24 杭州光云科技股份有限公司 Method for automatically cutting local detail picture in image
WO2020052169A1 (en) * 2018-09-12 2020-03-19 深圳云天励飞技术有限公司 Clothing attribute recognition detection method and apparatus
WO2020051959A1 (en) * 2018-09-10 2020-03-19 深圳码隆科技有限公司 Image-based costume size measurement method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469087B (en) * 2015-07-13 2017-04-19 百度在线网络技术(北京)有限公司 Method for identifying clothes image, and labeling method and device of clothes image
US9811762B2 (en) * 2015-09-22 2017-11-07 Swati Shah Clothing matching system and method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140114741A (en) * 2013-03-19 2014-09-29 삼성전자주식회사 Apparatus and method for human pose estimation
CN103632368A (en) * 2013-11-29 2014-03-12 苏州有色金属研究院有限公司 Metal plate strip surface image defect merging method
WO2018220824A1 (en) * 2017-06-02 2018-12-06 三菱電機株式会社 Image discrimination device
CN108229288A (en) * 2017-06-23 2018-06-29 北京市商汤科技开发有限公司 Neural metwork training and clothes method for detecting color, device, storage medium, electronic equipment
US10321728B1 (en) * 2018-04-20 2019-06-18 Bodygram, Inc. Systems and methods for full body measurements extraction
CN108805135A (en) * 2018-06-14 2018-11-13 深圳码隆科技有限公司 A kind of garment dimension data identification method, device and user terminal
CN110263605A (en) * 2018-07-18 2019-09-20 桂林远望智能通信科技有限公司 Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation
WO2020051959A1 (en) * 2018-09-10 2020-03-19 深圳码隆科技有限公司 Image-based costume size measurement method and device
WO2020052169A1 (en) * 2018-09-12 2020-03-19 深圳云天励飞技术有限公司 Clothing attribute recognition detection method and apparatus
CN109766868A (en) * 2019-01-23 2019-05-17 哈尔滨工业大学 A kind of real scene based on body critical point detection blocks pedestrian detection network and its detection method
CN110334675A (en) * 2019-07-11 2019-10-15 山东大学 A kind of pedestrian's recognition methods again based on skeleton key point segmentation and column convolution
CN110493512A (en) * 2019-07-31 2019-11-22 幻想动力(上海)文化传播有限公司 Photography composition method, apparatus, photographic equipment, electronic device and storage medium
CN110555393A (en) * 2019-08-16 2019-12-10 北京慧辰资道资讯股份有限公司 method and device for analyzing pedestrian wearing characteristics from video data
CN110610499A (en) * 2019-08-29 2019-12-24 杭州光云科技股份有限公司 Method for automatically cutting local detail picture in image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Design of a fruit-picking robot system based on stereo vision; Zhou Jun et al.; Transactions of the Chinese Society for Agricultural Machinery; Vol. 41, No. 06; pp. 158-162 *
Application of pose detection networks to clothing key point detection; Ji Chenying; Zhao Mingbo; Li Chao; China Sciencepaper; Vol. 15, No. 03; pp. 255-259 *
Fine-grained classification of clothing images based on style feature descriptors; Wu Miaomiao et al.; Journal of Computer-Aided Design & Computer Graphics; Vol. 31, No. 05; pp. 780-791 *
Application of soft-margin maximization to automatic inspection of personnel evacuation in open-pit mine blasting areas; Liang Yongchun et al.; Journal of North China Institute of Science and Technology; Vol. 17, No. 01; pp. 45-51 *

Also Published As

Publication number Publication date
CN111461017A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111461017B (en) High-precision recognition method for restaurant back-kitchen work clothes at urban scale
CN108898047A (en) The pedestrian detection method and system of perception are blocked based on piecemeal
CN106228547B (en) A kind of profile and border detection algorithm of view-based access control model color theory and homogeneity inhibition
CN106934386B (en) A kind of natural scene character detecting method and system based on from heuristic strategies
CN109377703A (en) A kind of forest fireproofing early warning system and its method based on machine vision
CN107066972B (en) Natural scene Method for text detection based on multichannel extremal region
CN103632158B (en) Forest fire prevention monitor method and forest fire prevention monitor system
CN110321815A (en) A kind of crack on road recognition methods based on deep learning
CN102073841B (en) Poor video detection method and device
CN106250845A (en) Flame detecting method based on convolutional neural networks and device
CN109670591A (en) A kind of training method and image matching method, device of neural network
CN101702197A (en) Method for detecting road traffic signs
CN109241871A (en) A kind of public domain stream of people's tracking based on video data
CN108961675A (en) Fall detection method based on convolutional neural networks
CN110263609A (en) A kind of automatic identifying method of safety cap wear condition
CN105067638A (en) Tire fetal-membrane surface character defect detection method based on machine vision
CN108460407A (en) A kind of pedestrian's attribute fining recognition methods based on deep learning
CN107067412A (en) A kind of video flame smog detection method of Multi-information acquisition
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN109657612A (en) A kind of quality-ordered system and its application method based on facial image feature
CN105893962A (en) Method for counting passenger flow at airport security check counter
CN108319908A (en) A kind of untethered environment method for detecting human face based on Pixel-level Differential Characteristics
Gao et al. A method for accurately segmenting images of medicinal plant leaves with complex backgrounds
CN110033040A (en) A kind of flame identification method, system, medium and equipment
CN102609728B (en) Method for detecting special pornographic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant