CN112395999A - Wearing standard judging method based on image recognition and related equipment - Google Patents


Info

Publication number
CN112395999A
CN112395999A (application CN202011307480.8A)
Authority
CN
China
Prior art keywords
wearing
feature
feature map
image
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011307480.8A
Other languages
Chinese (zh)
Inventor
贾梦晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN202011307480.8A
Publication of CN112395999A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application belong to the technical field of artificial intelligence and are applied in the field of smart enterprises. They relate to a wearing specification judgment method based on image recognition, which comprises the steps of: receiving a wearing image uploaded by financial industry personnel through a wearing acquisition device; segmenting the wearing image according to human body structure features to obtain an upper body feature map, a lower body feature map and a foot feature map; identifying whether a preset feature region exists in the upper body feature map to obtain a region identification result; performing feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result; and inputting the feature extraction result into a trained wearing analysis model to obtain a wearing specification judgment result. In addition, the application relates to blockchain technology: the wearing image may also be stored in a blockchain. The method greatly reduces the computational load on the server.

Description

Wearing standard judging method based on image recognition and related equipment
Technical Field
The present application relates to the field of image recognition technologies, and in particular to a wearing specification determination method and apparatus based on image recognition, a computer device, and a storage medium.
Background
The financial industry imposes specifications and standards on how personnel dress. Employees of financial companies, and bank employees in particular, are required to wear business casual clothing such as collared shirts, sleeved shirts or lapel T-shirts, suit-style casual trousers and casual leather shoes, or to wear uniforms as required. Jeans, sportswear, homewear, vests, collarless shirts, shorts, camisoles, fancy dress, clothing that is revealing, see-through, short or torn, and all kinds of perforated shoes and slippers are not permitted at work. At present, financial companies check wearing specifications manually, which is costly in labor and hard to keep effective. In a traditional wearing identification method, an image to be identified is generally extracted from captured video, the image is input into a model for key-point detection, and the output is compared against a preset threshold. Trained models detect human body key points, identify the different outfits of employees, select different intelligent detection models for different outfits, and input the target outfit into the corresponding classifier to obtain an output indicating whether it meets the standard.
This kind of wearing identification requires separate model training for each wearing type, and the data processing load and cost are huge.
Disclosure of Invention
Based on this, the present application provides a wearing specification determination method and apparatus based on image recognition, a computer device, and a storage medium, so as to solve the technical problems of huge data processing load and cost in the prior art.
A wearing specification determination method based on image recognition, the method comprising:
receiving a wearing image uploaded by financial industry personnel through a wearing acquisition device;
segmenting the wearing image according to human body structure features to obtain an upper body feature map, a lower body feature map and a foot feature map;
identifying whether a preset feature region exists in the upper body feature map to obtain a region identification result, wherein the preset feature region is a distribution region, on the upper body feature map, of specified pixel values corresponding to the wearing specification;
performing feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result;
and inputting the feature extraction result into a trained wearing analysis model to obtain a wearing specification judgment result.
A wearing specification determination device based on image recognition, the device comprising:
the receiving module, used for receiving the wearing image uploaded by financial industry personnel through the wearing acquisition device;
the segmentation module, used for segmenting the wearing image according to human body structure features to obtain an upper body feature map, a lower body feature map and a foot feature map;
the identification module, used for identifying whether a preset feature region exists in the upper body feature map to obtain a region identification result;
the extraction module, used for performing feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result;
and the judgment module, used for inputting the feature extraction result into the trained wearing analysis model to obtain a wearing specification judgment result.
A computer device comprising a memory and a processor, and computer readable instructions stored in the memory and executable on the processor, the processor implementing the steps of the image recognition based dressing specification determination method when executing the computer readable instructions.
A computer readable storage medium storing computer readable instructions which, when executed by a processor, implement the steps of the above-described method for determining a wear specification based on image recognition.
According to the wearing specification determination method and apparatus, computer device and storage medium based on image recognition, the wearing image is segmented into an upper body feature map, a lower body feature map and a foot feature map; the distribution of the specified pixel values corresponding to the wearing specification is detected on the upper body feature map to determine whether the preset feature region exists; and the features in the three feature maps, extracted according to the identification result, are input into the model for recognition. The technical scheme targets wearing specification detection in a specific scene: features are extracted according to the preset feature region, and the extracted features are input into the wearing recognition model to obtain the wearing recognition result. In this simple way, whether financial enterprise personnel are dressed in accordance with the formal dress standard can be identified quickly, no separate model training is needed for different wearing types, and the data processing load and cost are greatly reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application environment of a wearing specification determination method based on image recognition;
FIG. 2 is a schematic flow chart of a method for determining a wear specification based on image recognition;
FIG. 3 is a schematic diagram of the scale feature of a human body;
FIG. 4 is a schematic diagram of a binarized image;
FIG. 5 is a diagram of a first predetermined pixel region;
FIG. 6 is a diagram illustrating a binarized image according to another embodiment;
FIG. 7 is a schematic diagram of a sample data set;
fig. 8 is a schematic view of a dressing specification determining apparatus based on image recognition;
FIG. 9 is a diagram of a computer device in one embodiment.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The wearing specification determination method based on image recognition provided by the embodiments of the invention can be applied in the application environment shown in fig. 1. The application environment may include a terminal 102, a server 104, and a network serving as a communication link medium between the terminal 102 and the server 104; the network may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal 102 to interact with the server 104 over a network to receive or send messages, etc. The terminal 102 may have installed thereon various communication client applications, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal 102 may be any of various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (MPEG Audio Layer III), MP4 players (MPEG Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 104 may be a server that provides various services, such as a background server that provides support for pages displayed on the terminal 102.
It should be noted that the wearing specification determining method based on image recognition provided in the embodiments of the present application is generally executed by a server/terminal, and accordingly, a wearing specification determining apparatus based on image recognition is generally disposed in a server/terminal device.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The method and device can be applied in the technical fields of smart enterprises and smart cities to promote the construction of smart cities, for example for judging the wearing specifications of financial enterprises or for checking dress standards at certain formal city meetings.
It should be understood that the number of terminals, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The terminal 102 communicates with the server 104 through the network. The server receives the wearing image uploaded by the terminal 102, segments it, identifies the preset feature region in the segmented upper body feature map, extracts features according to the region identification result, and inputs the extracted features into the trained wearing analysis model to obtain the wearing specification judgment result. The terminal 102 and the server 104 are connected through a network, which may be wired or wireless; the terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer or portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a method for determining a wearing specification based on image recognition is provided, which is described by taking the method as an example applied to a server in fig. 1, and includes the following steps:
and 202, receiving the wearing image uploaded by the financial industry personnel through the wearing acquisition device.
The technical scheme of the application can be applied to scenes with special dress requirements; for example, the financial industry has specifications and standards for how personnel dress. Employees of financial companies, and bank practitioners in particular, must wear business casual clothing such as collared shirts, sleeved shirts or lapel T-shirts, suit-style casual trousers and casual leather shoes, or must wear uniforms as required. Jeans, sportswear, homewear, vests, collarless shirts, shorts and camisoles, fancy dress, clothing that is revealing, see-through, short or torn, and all kinds of perforated shoes and slippers are not permitted at work.
In some embodiments, the method is applied, for example, to a purpose-built dressing mirror fitted with a camera for capturing images of the person in front of it and a display screen that shows the staff member real-time wearing information. Specifically, the staff member stands in front of the mirror as required, and the camera collects the person's wearing image and sends it to the server. The wearing image is generally a whole-body image of the person from head to foot.
The server side can be a wearing identification server.
Step 204: segmenting the wearing image according to human body structure features to obtain an upper body feature map, a lower body feature map and a foot feature map.
After receiving the wearing image, the server side can segment the wearing image according to human body structure features. In some embodiments, the human body structure feature is an actual scale feature of the person. Specifically, the method comprises the following steps:
identifying the actual proportion features of the person in the wearing image, and segmenting the wearing image according to those actual proportion features to obtain the upper body feature map, the lower body feature map and the foot feature map. The actual proportion features are the body proportion features of the person in the wearing image, including the leg-to-body ratio, head-to-body ratio, waist-to-hip ratio, shoulder-to-hip ratio, height-hip index, height-leg index and so on; the proportions mainly detected are those from shoulder to waist, from waist to ankle, and from ankle to sole. Obtaining the actual proportion features requires combining existing preset proportion features with image recognition techniques such as binarization and contour detection. The preset proportion features are the human scale features in the human body structure diagram of fig. 3.
In some embodiments, the preset proportion features are one or more of body scale features and body garment features. The body scale features are the shoulder-to-waist, waist-to-ankle and ankle-to-sole scales from existing data, as shown in fig. 3. A standing adult is generally between seven and seven and a half head-lengths tall, and when standing with arms hanging, the fingertips reach about halfway down the thigh; the third head-length from the top is the position of the navel, and the third head-length measured down from the shoulders reaches the root of the thigh. The numbers ① to ⑨ in fig. 3 mark positions on the human body structure, and each "circle" in fig. 3 represents the size of one head.
The human body proportion features in fig. 3 can be applied directly to the person in the wearing image, and the wearing image is cut proportionally to obtain the upper body feature map, the lower body feature map and the foot feature map corresponding to the three sections from shoulder to waist, waist to ankle, and ankle to sole; the upper body feature map should as far as possible contain the image of the region from the employee's shoulders to the waist, the lower body feature map the image from the waist to the ankles, and the foot feature map the image of the employee's feet.
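For illustration, the following minimal sketch (Python with OpenCV) crops a wearing image into the three feature maps by fixed body proportions; the shoulder, waist and ankle ratios and the input path are assumptions for illustration only, since fig. 3 fixes the proportions only qualitatively:

```python
# A minimal sketch of the proportion-based segmentation described above.
# The ratio values are illustrative assumptions, not figures from the patent.
import cv2

def split_by_proportion(wearing_image,
                        shoulder_ratio=0.18,   # assumed: shoulders at ~18% of image height
                        waist_ratio=0.45,      # assumed: waist at ~45%
                        ankle_ratio=0.92):     # assumed: ankles at ~92%
    """Cut a full-body image into upper-body, lower-body and foot feature maps."""
    h = wearing_image.shape[0]
    shoulder_y = int(h * shoulder_ratio)
    waist_y = int(h * waist_ratio)
    ankle_y = int(h * ankle_ratio)
    upper_body = wearing_image[shoulder_y:waist_y]   # shoulder -> waist
    lower_body = wearing_image[waist_y:ankle_y]      # waist -> ankle
    feet = wearing_image[ankle_y:]                   # ankle -> sole
    return upper_body, lower_body, feet

image = cv2.imread("wearing_image.jpg")              # hypothetical input path
upper, lower, feet = split_by_proportion(image)
```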
In reality, however, the collected wearing images are distorted and deformed to some extent by different standing postures and camera heights, and the body proportion features among the preset proportion features are relatively fixed values while the body proportions of men and women differ slightly, so they cannot match every real body proportion; the resulting segmented images may therefore be of low accuracy, which affects subsequent wearing recognition. To obtain a more accurate segmentation feature map, on the basis of the preset proportion features, the method may further comprise the following steps:
binarizing the wearing image, and performing edge detection on the binarized wearing image to determine the human body edge in the wearing image; determining a first preset pixel region in the wearing image based on the preset proportion features, and traversing the pixel points of the human body edge within the first preset pixel region to determine the human body size coordinates of the person in the wearing image; and determining the actual proportion features from the size coordinate points.
As shown in fig. 4, binarizing the wearing image sets the gray value of each pixel to 0 or 255, giving the whole image an obvious black-and-white visual effect. The binarized image can then be edge-detected with the Canny algorithm to find the human body edges. The Canny edge detection algorithm is a multi-stage algorithm that identifies the actual edges of objects in an image as completely as possible, with minimal probability of missing real edges or falsely detecting non-edges, so its edge detection accuracy is high. The first preset pixel region is a rough pixel region based on the preset proportion features and on the coordinate positions of key body parts learned empirically from many sample images. Specifically, the first preset pixel region comprises several key coordinate ranges. In this proposal, the position of the person's first preset pixel region in the binarized image is determined from the preset proportion features together with the vertex and bottom coordinates of the human body edge: for example, a preset proportion line is generated on the human body edge from the preset proportion features, that line is taken as the zero axis, and the bands within m rows above and below it are taken as the key coordinate ranges. The pixel points on the human body edge within each key coordinate range are then traversed; in fig. 5, s1, s2 and s3 are three different key coordinate ranges in the first preset pixel region.
When the background pixel value of the binarized image is 0, if a pixel point H(xi, yi) with value 255 is detected on the human body edge within the key coordinate range, and, with xi unchanged, the pixel points Hi(xi, yi ± m) within the key coordinate range have value 0, then H(xi, yi) is taken as the size coordinate point within the current key coordinate range; this indicates that the coordinate lies in a turning region of the human body edge, where contour included angles generally occur, such as points a, b and c in fig. 5. However, this detection method is quite limited: it generally suits wearing images of people standing upright in regular clothing, and in special standing or wearing scenes the contour included angles may yield several size coordinate points, none of which is the most suitable segmentation point. As shown in fig. 6, in a scene with a cluttered background and many clothing folds, detecting size coordinate points this way is computationally involved, and simply preferring the pixel point H(xi, yi) with the smallest contour included angle as the size coordinate point is rather crude. Therefore, the mean x0 of the xi coordinates of the detected pixel points H(xi, yi) can be computed, and H(x0, yi) taken as the size pixel point. The actual proportion features are then determined using xi of the pixel point H(xi, yi) obtained in each key pixel region as the dividing line, or using x0 of the pixel point H(x0, yi) as the dividing line.
That is, the wearing image is segmented using the row of pixels at xi of the obtained pixel point H(xi, yi) as the dividing line, yielding the upper body feature map, the lower body feature map and the foot feature map. In this way, the person in the wearing image can be segmented more accurately, producing an accurate segmentation result and improving subsequent recognition efficiency.
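The following sketch illustrates, under assumptions, the refinement just described: the image is binarized, Canny edges are extracted, and the edge pixels inside a key coordinate range of ±m rows around a preset proportion line are averaged to give the dividing row x0. The threshold values and the half-width m are illustrative, not taken from the embodiment:

```python
# A hedged sketch of the edge-based size-coordinate detection described above.
import cv2
import numpy as np

def find_cut_row(wearing_image, ratio_row, m=20):
    """Refine a preset-proportion dividing line using edge pixels near it."""
    gray = cv2.cvtColor(wearing_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # pixels -> 0 or 255
    edges = cv2.Canny(binary, 50, 150)                            # human body edge
    top = max(ratio_row - m, 0)
    band = edges[top:ratio_row + m]      # key coordinate range around the proportion line
    rows, cols = np.nonzero(band)        # edge pixels H(xi, yi) inside the band
    if rows.size == 0:
        return ratio_row                 # no turning point found: keep the preset line
    return top + int(rows.mean())        # x0: mean coordinate used as the dividing line
```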
Further, besides the actual proportion features, segmentation can also be performed according to what the person is wearing, using human body clothing features:
identifying the human body clothing features of the person in the wearing image; and segmenting the wearing image according to the human body clothing features to obtain the upper body feature map, the lower body feature map and the foot feature map.
The segmentation method of this embodiment is substantially the same as the one above; the main difference is that a fitted curve feature is detected in a second preset pixel region, which is a preset pixel region defined according to the wearing characteristics of human clothing. The human body contour is then recognized in the second preset pixel region to obtain the fitted curve feature, the human body clothing features are determined from the fitted curve feature, and finally the image is cut according to the human body clothing features. Specifically, the method comprises: determining a turning region to be determined in the sampled image based on the second preset pixel region; binarizing the wearing image in the turning region to be determined and performing edge detection to determine the feature edge of that region; fitting the feature edge with a machine vision algorithm to obtain the fitted curve feature; inputting the fitted curve feature into a trained region division model to obtain a region identification result, and obtaining the turning region from that result; and obtaining the human body clothing features from the turning region.
Optionally, the feature edge may be fitted by an OpenCV platform to obtain a fitted curve feature.
Optionally, the color boundaries of the clothing in the wearing image may be detected with an existing deep-learning-based target detection model, the image divided into several regions by a region division algorithm, and the boundaries between the different regions used as dividing lines for segmenting the wearing image. This approach is simple, fast to recognize, and can effectively improve the efficiency of wearing specification detection.
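As an illustration of the optional color-boundary idea (not of the trained target detection model itself), the sketch below quantizes the garment colors with k-means and treats rows where the dominant color cluster changes as candidate dividing lines; the cluster count k is an assumption:

```python
# An illustrative sketch of colour-boundary segmentation under assumptions.
import cv2
import numpy as np

def color_region_rows(wearing_image, k=4):
    """Return candidate dividing rows where the dominant colour cluster changes."""
    pixels = wearing_image.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3,
                              cv2.KMEANS_RANDOM_CENTERS)
    label_map = labels.reshape(wearing_image.shape[:2])
    # Per-row dominant colour cluster; a change between consecutive rows
    # approximates a clothing colour boundary usable as a dividing line.
    dominant = np.array([np.bincount(row).argmax() for row in label_map])
    return [i for i in range(1, len(dominant)) if dominant[i] != dominant[i - 1]]
```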
Step 206: identifying whether a preset feature region exists in the upper body feature map to obtain a region identification result, wherein the preset feature region is a distribution region, on the upper body feature map, of specified pixel values corresponding to the wearing specification.
In some embodiments, the pixel value of the tie or bow-tie area on the chest of a suit wearer is generally set to 0 or 255, or to another pixel value obtained by identification, and the distribution of these specified pixel values shows a certain regularity or fixity, for example obvious geometric characteristics such as a triangle, which is relatively easy to obtain by contour detection and region identification. Since such dress is common in formal settings, identifying the preset feature region in the wearing image before feature extraction can greatly reduce the subsequent data processing load and improve the efficiency of wearing specification detection. Specifically, the method comprises the following steps:
performing contour recognition on the wearing image to obtain a human body contour image; detecting whether a specific contour exists in the human body contour image; if the specific contour exists, calculating the coordinate position of the specific contour in the human body contour image according to preset contour coordinates; and taking the region corresponding to the specific contour within the preset coordinate range as the preset feature region to obtain the region identification result.
Further, if the specific contour does not exist in the upper body feature map, the region identification result is that the preset feature region does not exist. The wearing image is binarized to obtain a binarized image, and the contour edges of the binarized image are then detected with a contour detection algorithm to obtain the contour image shown in fig. 4.
the specific contour in the contour image can be detected through a Canny edge detection algorithm, then the coordinate relationship between the specific contour and the overall contour of the person is calculated, and if the center coordinate point of the specific contour meets the chest position in the overall contour of the person, namely the positions from (0, 3/5H) to (0, 4/5H), the region corresponding to the specific contour is considered to be the preset feature region. Wherein H is the overall height of the person in the wearing image.
As shown in fig. 6, there are different types of suits and different ways of wearing them: some people wear a suit in the standard manner with a tie or bow tie, while others dress casually and wear the suit jacket directly, so the detected specific contours differ, as do the center coordinate points of their circumscribed circles.
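A hedged sketch of this check follows: it searches the contour image for a triangle-like specific contour and accepts it as the preset feature region only when the center of its circumscribed circle falls within an assumed chest band. The band limits restate the 3/5H-4/5H condition under an assumed image coordinate convention (y measured from the top of the person):

```python
# A sketch, under assumptions, of the preset-feature-region check of step 206.
import cv2

def has_preset_feature_region(binary_person, person_height):
    """binary_person: 8-bit binarized image of the person; returns True if a
    tie/bow-tie-like contour sits in the chest band."""
    edges = cv2.Canny(binary_person, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3:                       # triangle-like: candidate tie contour
            (x, y), _ = cv2.minEnclosingCircle(c)  # centre of the circumscribed circle
            # Assumed chest band: 1/5H to 2/5H from the top corresponds to
            # 3/5H to 4/5H measured from the feet.
            if 0.2 * person_height <= y <= 0.4 * person_height:
                return True
    return False
```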
Step 208: performing feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result.
If the region identification result is that the preset feature region is identified, the person can be considered to be wearing either a formal suit or a casual suit; and since suits rarely have holes, the method can, based on the region identification result, focus on the color categories appearing in the worn image and on the human skin color range to determine whether the clothing is revealing or gaudy. Specifically, the method comprises the following steps:
if the region identification result indicates that the preset feature region exists, respectively extracting a first color feature and a second color feature in the upper body feature map and the lower body feature map, and extracting a third color feature in the lower body feature map as a feature extraction result; and if the area identification result indicates that the preset feature area does not exist, respectively extracting a first color feature, a second color feature and a third color feature in the upper body feature map, the lower body feature map and the foot feature map as the feature extraction result.
In this embodiment, if the preset feature region exists, the color-category and color-style features in the upper body feature map and the foot feature map are extracted, together with the skin-color-range feature in the lower body feature map. If the preset feature region does not exist, the color categories, color styles and human skin color ranges of the upper body, lower body and foot feature maps are all extracted, and the areas of the regions of different colors are further extracted. For example, if the extracted green area accounts for 10% of the whole upper body feature map, the person is considered to be dressed too gaudily.
Specifically, the first color feature, the second color feature and the third color feature can be obtained through the OpenCV platform by identifying the hues in the feature maps, computing the connected regions, and determining the hue type of each connected region.
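For illustration, the sketch below derives the three kinds of color features with OpenCV as just described: hue categories, connected regions of the dominant hue, and the proportion of skin-colored pixels. The hue bucket width and the HSV skin range are assumptions, not values given by the embodiment:

```python
# A minimal sketch of the colour-feature extraction, under stated assumptions.
import cv2
import numpy as np

def extract_color_features(feature_map):
    """Return (number of colour types, connected regions of the dominant hue,
    fraction of skin-coloured pixels) for one feature map."""
    hsv = cv2.cvtColor(feature_map, cv2.COLOR_BGR2HSV)
    hues = hsv[..., 0] // 30                         # coarse hue buckets (assumed width)
    color_types = len(np.unique(hues))               # first feature: colour categories
    # Second feature: connected regions of the dominant hue bucket.
    dominant = np.uint8(hues == np.bincount(hues.ravel()).argmax()) * 255
    num_labels, _ = cv2.connectedComponents(dominant)
    # Third feature: fraction of skin-coloured pixels (assumed HSV skin range).
    skin = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    skin_ratio = cv2.countNonZero(skin) / skin.size
    return color_types, num_labels - 1, skin_ratio   # minus 1: drop the background label
```

With features like these, the 10% green-area example above reduces to comparing a per-hue area ratio against a threshold.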
More concretely, the coat style is identified, such as shirts, T-shirts, sleeveless tops, vests, suits, etc.; several monitoring points are established on the human body, distributed over the collar, chest, waist and cuffs of the garment, to identify whether the coat has many colors and is excessively gaudy; for the trousers, information on color and style also needs to be collected, together with whether the leg area is exposed (i.e. the trouser length).
This approach of first identifying the clothing features reduces subsequent analysis processing and can greatly reduce the data processing load.
Step 210: inputting the feature extraction result into the trained wearing analysis model to obtain the wearing specification judgment result.
Specifically, the first color feature is the color category, the second color feature is the color style, and the third color feature is the human skin color range.
The first color feature, the second color feature and the third color feature are input into the trained wearing analysis model to obtain the wearing recognition result. Through extensive training, the wearing analysis model recognizes clothing in sample images that meets the standard, such as shirts and suits, or clothing that does not, and can judge directly; it can also judge from several detection points, including whether the clothing is sleeveless or gaudy and whether there are holes, to conclude whether the person meets the dress standard. For example, whether sleeveless clothing is worn is judged by placing monitoring points at the cuff positions, i.e. at the person's shoulders, and checking whether these detection points fall within the skin color range (a color region delimited by skin color values), similar to the facial-feature detection points in face recognition; gaudy dress is judged by detecting color points in the clothing image and checking whether it contains many colors.
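The monitoring-point check can be sketched as follows; the point positions and the HSV skin bounds are assumptions used only to illustrate the idea of testing whether detection points fall within the skin color range:

```python
# An illustrative sketch of the cuff/shoulder monitoring-point check.
import cv2

def sleeveless_check(upper_body_map):
    """True if assumed shoulder monitoring points look like bare skin."""
    hsv = cv2.cvtColor(upper_body_map, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    # Assumed monitoring points near the left and right shoulders.
    points = [(int(0.1 * h), int(0.1 * w)), (int(0.1 * h), int(0.9 * w))]

    def is_skin(px):  # assumed HSV skin colour bounds
        hue, sat, val = int(px[0]), int(px[1]), int(px[2])
        return 0 <= hue <= 20 and 30 <= sat <= 150 and 60 <= val <= 255

    return all(is_skin(hsv[y, x]) for y, x in points)
```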
The wearing analysis model can be trained by using a TensorFlow platform, and firstly, a sample data set is obtained, as shown in FIG. 7:
the sample data set is an entrance level data image library for image recognition, and comprises a training set, a verifier and labeled labels. Data processing is performed next: the data must be preprocessed before the network is trained. The pixel value of the current data is between 0 and 255, dimensions of different data are unified, and analysis and calculation of the data are facilitated, namely normalization processing is performed on the data. The current data can be observed explicitly before processing. Then, building a model: the basic components of a neural network are layers that extract features from data input to them and process them accordingly. Most neural networks simply connect layers together, such as Keras (Keras is a high-level neural network API written in Python). And (3) evaluating the model: after the model training is finished, the embodiment also needs to judge whether the model is accurate, so that the picture can be predicted, and the result is compared and continuously optimized; the prediction result is an array of 10 numbers. They represent the degree of "confidence" of the model relative to the ten tags. It can be seen which label has the highest confidence value, and it is obvious that the 'confidence' degree of the 9 th digit is the highest (0-9), and finally, the performance of the training model in the whole test data set is obtained, the approximate precision in the test training set can be up to more than 88%, and in practical application, not only the wearing specification judgment can be obtained, but also the clothing type can be judged. The training of the wearing analysis model is generally performed in the existing manner, and details are not repeated in this embodiment.
It is emphasized that, in order to further protect the privacy and security of the person, the wearing image may also be stored in a node of a blockchain.
In the above wearing specification determination method based on image recognition, an upper body feature map, a lower body feature map and a foot feature map are obtained by segmenting the wearing image; the distribution of the specified pixel values corresponding to the wearing specification is detected on the upper body feature map to determine whether the preset feature region exists; and the features in the three feature maps, extracted according to the identification result, are input into the model for recognition. The technical scheme targets wearing specification detection in a specific scene: features are extracted according to the preset feature region, and the extracted features are input into the wearing recognition model to obtain the wearing recognition result. In this simple way, whether financial enterprise personnel are dressed in accordance with the formal dress standard can be identified quickly.
It should be understood that, although the steps in the flowchart of fig. 2 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided an image recognition-based dressing specification determination device, which corresponds one-to-one to the image recognition-based dressing specification determination method in the above-described embodiment. This dress norm decision device based on image recognition includes:
the receiving module 802 is used for receiving the wearing image uploaded by the financial industry personnel through the wearing acquisition device;
a segmentation module 804, configured to segment the wearing image according to the human body structure features to obtain an upper body feature map, a lower body feature map and a foot feature map;
an identifying module 806, configured to identify whether a preset feature region exists in the upper body feature map, to obtain a region identification result;
an extraction module 808, configured to perform feature extraction on the upper body feature map, the lower body feature map, and the foot feature map based on the region identification result to obtain a feature extraction result;
and the judging module 810 is used for inputting the feature extraction result into the trained wearing analysis model to obtain a wearing standard judging result.
Further, the segmentation module 804 includes:
the first identification submodule is used for identifying the actual proportion features of the person in the wearing image;
and the first segmentation submodule is used for segmenting the wearing image according to the actual proportion features to obtain the upper body feature map, the lower body feature map and the foot feature map.
Further, the first identification submodule includes:
the contour unit is used for binarizing the wearing image, carrying out edge detection on the binarized wearing image and determining the human body edge in the wearing image;
the coordinate unit is used for determining a first preset pixel region in the wearing image based on the preset proportion features, and traversing the pixel points of the human body edge within the first preset pixel region to determine the human body size coordinates of the person in the wearing image;
and the feature unit is used for determining the actual proportion features from the size coordinate points.
Further, the segmentation module 804 further includes:
the second identification submodule is used for identifying the human body clothing features of the person in the wearing image;
and the second segmentation submodule is used for segmenting the wearing image according to the human body clothing features to obtain the upper body feature map, the lower body feature map and the foot feature map.
Further, the identifying module 806 includes:
the contour identification submodule is used for carrying out contour identification on the wearing image to obtain a human body contour image;
the detection submodule is used for detecting whether a specific contour exists in the human body contour image;
the coordinate calculation submodule is used for calculating the coordinate position of the specific contour in the human body contour image according to preset contour coordinates if the specific contour exists; and
the result identification submodule is used for taking the region corresponding to the specific contour within the preset coordinate range as the preset feature region to obtain the region identification result.
Further, the extraction module 808 includes:
a first extraction module, configured to, if the region identification result indicates that the preset feature region exists, respectively extract a first color feature and a second color feature in the upper body feature map and the lower body feature map, and extract a third color feature in the lower body feature map as a feature extraction result;
and the second extraction module is used for respectively extracting the first color feature, the second color feature and the third color feature in the upper body feature map, the lower body feature map and the foot feature map as the feature extraction result if the region identification result shows that the preset feature region does not exist.
It should be emphasized that, in order to further ensure the privacy and security of the personnel information, the wearing image may also be stored in a node of a blockchain.
The above wearing specification determination device based on image recognition obtains an upper body feature map, a lower body feature map and a foot feature map by segmenting the wearing image; detects the distribution of the specified pixel values corresponding to the wearing specification on the upper body feature map to determine whether the preset feature region exists; and extracts the features in the three feature maps according to the identification result and inputs them into the model for recognition. The technical scheme targets wearing specification detection in a specific scene: features are extracted according to the preset feature region, and the extracted features are input into the wearing recognition model to obtain the wearing recognition result. In this simple way, whether financial enterprise personnel are dressed in accordance with the formal dress standard can be identified quickly.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, computer readable instructions and a database. The internal memory provides an environment for running the operating system and the computer readable instructions in the nonvolatile storage medium. The database of the computer device is used for storing the wearing image. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer readable instructions, when executed by a processor, implement a wearing specification determination method based on image recognition: the wearing image is segmented into an upper body feature map, a lower body feature map and a foot feature map; the distribution of specified pixel values corresponding to the wearing specification is detected on the upper body feature map to determine whether the preset feature region exists; and features extracted from the three maps according to the identification result are input into the model for recognition. The technical scheme targets wearing specification detection in a specific scene: features are extracted according to the preset feature region, and the extracted features are input into the wearing recognition model to obtain the wearing recognition result. In this simple way, whether financial enterprise personnel are dressed in accordance with the formal dress standard can be identified quickly.
As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, Application Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer-readable storage medium is provided, on which computer-readable instructions are stored, and when executed by a processor, implement the steps of the wearing specification determination method based on image recognition in the above-described embodiment, such as step 202 to step 210 shown in fig. 2, or when executed by a processor, implement the functions of the modules/units of the wearing specification determination device based on image recognition in the above-described embodiment, such as the functions of modules 802 to 810 shown in fig. 8.
In the method, an upper body feature map, a lower body feature map and a foot feature map are obtained by segmenting the wearing image; the distribution of the specified pixel values corresponding to the wearing specification is detected on the upper body feature map to determine whether the preset feature region exists; and the features in the three feature maps, extracted according to the identification result, are input into the model for recognition. The technical scheme targets wearing specification detection in a specific scene: features are extracted according to the preset feature region, and the extracted features are input into the wearing recognition model to obtain the wearing recognition result. In this simple way, whether financial enterprise personnel are dressed in accordance with the formal dress standard can be identified quickly.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through computer readable instructions, which can be stored in a non-volatile computer readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with one another using cryptographic methods, each data block containing the information of a batch of network transactions and used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and so on.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several changes, modifications and equivalent substitutions of some technical features without departing from the spirit and scope of the technical solutions of the embodiments of the present invention, and such changes or substitutions do not take the essence of the corresponding technical solutions outside that spirit and scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A wearing specification judging method based on image recognition is characterized by comprising the following steps:
receiving a wearing image uploaded by financial industry personnel through a wearing acquisition device;
segmenting the wearing image according to a human body structural feature to obtain an upper body feature map, a lower body feature map and a foot feature map;
identifying whether a preset feature region exists in the upper body feature map to obtain a region identification result, wherein the preset feature region is a distribution region of specified pixel values that correspond to a wearing specification on the upper body feature map;
performing feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result;
and inputting the feature extraction result into a trained wearing analysis model to obtain a wearing standard judgment result.
2. The method according to claim 1, wherein the human body structural feature is an actual proportion feature of the person, and segmenting the wearing image according to the human body structural feature to obtain an upper body feature map, a lower body feature map and a foot feature map comprises:
identifying the actual proportion feature of the person in the wearing image;
and segmenting the wearing image according to the actual proportion feature to obtain the upper body feature map, the lower body feature map and the foot feature map.
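A minimal sketch of the proportion-based split of claim 2, assuming the person's top and bottom rows have already been measured, and assuming illustrative torso and foot ratios (the claims fix neither value):

    def segment_by_proportion(image, body_top, body_bottom,
                              torso_ratio=0.5, foot_ratio=0.1):
        """Split a wearing image using the person's measured vertical extent.

        body_top / body_bottom are the rows bounding the detected body;
        torso_ratio and foot_ratio are assumed, not claimed, values."""
        height = body_bottom - body_top
        waist = body_top + int(torso_ratio * height)    # upper/lower boundary
        ankle = body_bottom - int(foot_ratio * height)  # lower/foot boundary
        return (image[body_top:waist],      # upper body feature map
                image[waist:ankle],         # lower body feature map
                image[ankle:body_bottom])   # foot feature map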
3. The method according to claim 2, wherein identifying the actual proportion feature of the person in the wearing image comprises:
binarizing the wearing image, and carrying out edge detection on the binarized wearing image to determine the human body edge in the wearing image;
determining a first preset pixel area in the wearing image based on a preset proportion feature, and traversing pixel points of the human body edge in the first preset pixel area to determine human body size coordinates of the person in the wearing image;
and determining the actual proportion feature according to the human body size coordinates.
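One conventional way to obtain the human body size coordinates named in claim 3, sketched with OpenCV; the Otsu threshold, the Canny parameters and the full-height search band are assumptions, not values from the disclosure:

    import cv2
    import numpy as np

    def body_size_coordinates(image, band=(0.0, 1.0)):
        """Binarize, edge-detect, then traverse a preset pixel band for the
        body's extreme edge pixels (all parameter choices are illustrative)."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        edges = cv2.Canny(binary, 50, 150)
        # The "first preset pixel area" is modeled here as a horizontal band
        # given as fractions of the image height.
        top = int(band[0] * edges.shape[0])
        bottom = int(band[1] * edges.shape[0])
        ys, xs = np.nonzero(edges[top:bottom])
        if ys.size == 0:
            return None
        # Extreme edge pixels yield the size coordinates from which the
        # actual proportion feature can then be derived.
        p_min = (int(xs.min()), top + int(ys.min()))
        p_max = (int(xs.max()), top + int(ys.max()))
        return p_min, p_max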
4. The method according to claim 1, wherein the human body structural feature is a human body clothing feature, and segmenting the wearing image according to the human body structural feature to obtain an upper body feature map, a lower body feature map and a foot feature map comprises:
identifying the human body clothing feature of the person in the wearing image;
and segmenting the wearing image according to the human body clothing feature to obtain the upper body feature map, the lower body feature map and the foot feature map.
5. The method according to claim 1, wherein identifying whether the preset feature region exists in the upper body feature map to obtain the region identification result comprises:
carrying out contour recognition on the wearing image to obtain a human body contour image;
detecting whether a specific contour exists in the human body contour image;
if the specific contour exists, calculating the coordinate position of the specific contour in the human body contour image according to preset contour coordinates;
and taking the region corresponding to the specific contour within the preset coordinate range as the preset feature region to obtain the region identification result.
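A possible realization of claim 5 with OpenCV 4.x (the binarization step, the minimum contour area and the preset coordinate window are illustrative assumptions; the claim itself does not fix how the specific contour is detected):

    import cv2

    def preset_region_exists(image, window=(0.2, 0.8, 0.05, 0.45), min_area=500.0):
        """True if a sufficiently large contour lies inside the preset
        coordinate range (x0, x1, y0, y1 given as width/height fractions)."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        h, w = gray.shape
        x0, x1 = window[0] * w, window[1] * w
        y0, y1 = window[2] * h, window[3] * h
        for contour in contours:
            if cv2.contourArea(contour) < min_area:
                continue
            x, y, cw, ch = cv2.boundingRect(contour)
            # A contour counts as the "specific contour" when its bounding
            # box falls inside the preset coordinate range.
            if x >= x0 and x + cw <= x1 and y >= y0 and y + ch <= y1:
                return True
        return False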
6. The method according to claim 1, wherein performing feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result comprises:
if the region identification result indicates that the preset feature region exists, respectively extracting a first color feature and a second color feature from the upper body feature map and the lower body feature map, and extracting a third color feature from the lower body feature map, as the feature extraction result;
and if the region identification result indicates that the preset feature region does not exist, respectively extracting the first color feature, the second color feature and the third color feature from the upper body feature map, the lower body feature map and the foot feature map as the feature extraction result.
7. The method according to claim 6, wherein the first color feature is a color type, the second color feature is a color class, and the third color feature is a human skin color range.
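The three color features of claims 6-7 admit a conventional realization: a dominant-hue histogram for the first and second features and a skin-color mask for the third. The YCrCb skin bounds below are a widely used heuristic, not values from the disclosure, and the mapping of hue bins to the claimed color type and color class is left open here:

    import cv2
    import numpy as np

    def dominant_hue(part):
        """First/second color feature: the part's dominant hue bin (0-179)."""
        hsv = cv2.cvtColor(part, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
        return int(np.argmax(hist))

    def skin_ratio(part):
        """Third color feature: share of pixels inside a typical YCrCb
        skin-color range (the bounds are a common heuristic)."""
        ycrcb = cv2.cvtColor(part, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, np.array((0, 133, 77), np.uint8),
                           np.array((255, 173, 127), np.uint8))
        return float(mask.mean()) / 255.0

    def extract_features(upper, lower, foot, region_found):
        # Per claim 6: with the preset region present, the third feature is
        # taken from the lower-body map, otherwise from the foot map.
        third_src = lower if region_found else foot
        return [dominant_hue(upper), dominant_hue(lower), skin_ratio(third_src)]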
8. A wearing specification judging device based on image recognition, characterized by comprising:
a receiving module, configured to receive a wearing image uploaded by financial industry personnel through a wearing acquisition device;
a segmentation module, configured to segment the wearing image according to a human body structural feature to obtain an upper body feature map, a lower body feature map and a foot feature map;
an identification module, configured to identify whether a preset feature region exists in the upper body feature map to obtain a region identification result, wherein the preset feature region is a distribution region of specified pixel values that correspond to a wearing specification on the upper body feature map;
an extraction module, configured to perform feature extraction on the upper body feature map, the lower body feature map and the foot feature map based on the region identification result to obtain a feature extraction result;
and a judging module, configured to input the feature extraction result into a trained wearing analysis model to obtain a wearing standard judgment result.
9. A computer device comprising a memory and a processor, the memory storing computer readable instructions, wherein the processor when executing the computer readable instructions implements the steps of the method of any one of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon, which when executed by a processor implement the steps of the method of any one of claims 1 to 7.
CN202011307480.8A 2020-11-19 2020-11-19 Wearing standard judging method based on image recognition and related equipment Pending CN112395999A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011307480.8A CN112395999A (en) 2020-11-19 2020-11-19 Wearing standard judging method based on image recognition and related equipment

Publications (1)

Publication Number Publication Date
CN112395999A (en) 2021-02-23

Family

ID=74606753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011307480.8A Pending CN112395999A (en) 2020-11-19 2020-11-19 Wearing standard judging method based on image recognition and related equipment

Country Status (1)

Country Link
CN (1) CN112395999A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240695A (en) * 2021-06-02 2021-08-10 四川轻化工大学 Electric power operation personnel wearing identification method based on posture perception

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214293A (en) * 2018-08-07 2019-01-15 电子科技大学 A kind of oil field operation region personnel wearing behavioral value method and system
CN109241847A (en) * 2018-08-07 2019-01-18 电子科技大学 The Oilfield Operation District safety monitoring system of view-based access control model image
CN110188701A (en) * 2019-05-31 2019-08-30 上海媒智科技有限公司 Dress ornament recognition methods, system and terminal based on the prediction of human body key node
WO2019237721A1 (en) * 2018-06-14 2019-12-19 深圳码隆科技有限公司 Garment dimension data identification method and device, and user terminal
CN110705520A (en) * 2019-10-22 2020-01-17 上海眼控科技股份有限公司 Object detection method, device, computer equipment and computer readable storage medium
CN110751125A (en) * 2019-10-29 2020-02-04 秒针信息技术有限公司 Wearing detection method and device
CN111401301A (en) * 2020-04-07 2020-07-10 上海东普信息科技有限公司 Personnel dressing monitoring method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109657631B (en) Human body posture recognition method and device
US20210287091A1 (en) Neural network training method and image matching method and apparatus
US8983142B1 (en) Programmatic silhouette attribute determination
CN107679448B (en) Eyeball action-analysing method, device and storage medium
CN110188701A (en) Dress ornament recognition methods, system and terminal based on the prediction of human body key node
CN109614925A (en) Dress ornament attribute recognition approach and device, electronic equipment, storage medium
CN108629319B (en) Image detection method and system
WO2021174941A1 (en) Physical attribute recognition method, system, computer device, and storage medium
CN109215091B (en) Clothing fashion color automatic extraction method based on graph representation
JP2010262425A (en) Computer execution method for recognizing and classifying clothes
CN112905889A (en) Clothing searching method and device, electronic equipment and medium
CN112395999A (en) Wearing standard judging method based on image recognition and related equipment
CN108764232B (en) Label position obtaining method and device
CN116129473B (en) Identity-guide-based combined learning clothing changing pedestrian re-identification method and system
CN108416298A (en) A kind of scene judgment method and terminal
CN112528855B (en) Electric power operation dressing standard identification method and device
CN116189311A (en) Protective clothing wears standardized flow monitoring system
KR20200095632A (en) Method for Providing Complex Typed Style Coordination
CN115082669A (en) Garment fabric recommendation method and device, electronic equipment and storage medium
CN114359997A (en) Service guiding method and system
CN112925941A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN113392741A (en) Video clip extraction method and device, electronic equipment and storage medium
CN111126179A (en) Information acquisition method and device, storage medium and electronic device
CN112353033A (en) Human body data batch measurement system based on deep learning
CN113538074A (en) Method, device and equipment for recommending clothes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination