CN111242016A - Clothes management method, control device, wardrobe and computer-readable storage medium


Info

Publication number: CN111242016A
Application number: CN202010026862.7A
Authority: CN (China)
Prior art keywords: clothes, image, information, target object, worn
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 马啸 (Ma Xiao)
Current assignee: Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original assignee: Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010026862.7A
Publication of CN111242016A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Abstract

The invention discloses a clothes management method comprising the following steps: acquiring a first image of a target object; extracting, from the first image, first image information of the different clothes worn by the target object; determining first characteristic information of each piece of clothing worn by the target object according to the first image information; determining first evaluation information for the clothes collocation of the target object according to the first characteristic information; and outputting the first evaluation information. The invention also discloses a control device, a wardrobe and a computer-readable storage medium. The invention aims to let a user know the matching effect of his or her clothes, so as to ensure that the clothes are used correctly and to improve the user's image.

Description

Clothes management method, control device, wardrobe and computer-readable storage medium
Technical Field
The present invention relates to the field of clothes management technologies, and in particular, to a clothes management method, a control device, a wardrobe, and a computer-readable storage medium.
Background
With the improvement of living standards, people purchase more and more clothes. At present, when selecting clothes to wear, a user who owns too many clothes and lacks aesthetic knowledge often does not know how to match them correctly, can only match them at random according to personal feeling, and even after matching does not know whether the outfit is presentable. This is not conducive to the correct use of the user's clothes, and improper matching easily harms the user's image.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The main purpose of the invention is to provide a clothes management method that lets a user know the matching effect of his or her own clothes, so as to ensure the correct use of the clothes and improve the user's image.
To achieve the above object, the present invention provides a clothes management method, comprising the steps of:
acquiring a first image of a target object;
extracting first image information including different clothes worn by the target object from the first image;
determining first characteristic information of each piece of clothes worn by the target object according to the first image information;
determining first evaluation information of the clothes collocation of the target object according to the first characteristic information;
and outputting the first evaluation information.
Optionally, the step of determining first characteristic information of each piece of clothes worn by the target object according to the first image information comprises:
analyzing the first image information based on a clothes database, wherein the clothes database comprises second image information of a plurality of kinds of clothes and corresponding second characteristic information;
determining second image information matched with the first image information in the clothing database as target image information;
and taking the second characteristic information corresponding to the target image information as the first characteristic information of each piece of clothes worn by the target object.
Optionally, after the step of determining the first feature information of each piece of clothing worn by the target object according to the first image information, the method further includes:
selecting at least one piece of clothing worn by the target object as first clothing;
acquiring a plurality of clothes combinations based on first characteristic information corresponding to the first clothes; the clothes combination comprises the first clothes and a second clothes matched with the first clothes;
determining second evaluation information corresponding to each clothes combination;
and outputting collocation suggestion information according to the second evaluation information and the corresponding clothes combination.
Optionally, the step of determining second evaluation information corresponding to each combination of clothes includes:
acquiring reference evaluation information of each clothes combination, and acquiring channel characteristic information acquired by each clothes combination; the channel characteristic information comprises an acquisition source, an access amount and/or aging information;
generating the weight of each clothes combination correspondingly according to the channel characteristic information;
and determining second evaluation information corresponding to each clothes combination according to the reference evaluation information and the corresponding weight.
Optionally, the step of extracting first image information including different clothes worn by the target object from the first image comprises:
analyzing a first image area in which a plurality of characteristic parts of the target object are located in the first image;
and extracting image information corresponding to each first image area as image information corresponding to the clothing worn by each characteristic part to obtain the first image information.
Optionally, the step of analyzing a first image region in which a number of feature portions of the target object are located in the first image includes:
performing a first, human-body-based target detection algorithm on the first image;
determining a first human body image area where the target object is located according to the detection result of the first target detection algorithm;
and dividing the first human body image area according to a proportional relation to obtain each first image area.
Optionally, before the step of dividing the first human body image region according to a proportional relationship to obtain each first image region, the method further includes:
acquiring the ratio of the length to the width of the first human body image area;
when the ratio is within a threshold range, executing the step of dividing the first human body image area according to the proportional relation to obtain each first image area;
and when the ratio is beyond the threshold range, sending first prompt information to prompt the target object to adjust the posture.
Optionally, the step of extracting first image information including different clothes worn by the target object from the first image comprises:
identifying skeletal feature points of a human body in the first image;
determining a second human body image area where the target object is located according to the bone feature points;
dividing the second human body image region based on the bone feature points to obtain second image areas where a plurality of characteristic parts of the target object are located;
and extracting image information corresponding to each second image region as image information corresponding to the clothing worn by each feature part to obtain the first image information.
Optionally, before the step of determining the second human image region where the target object is located according to the bone feature points, the method further includes:
judging whether the bone feature points comprise bone key points or not;
if so, executing the step of determining a second human body image area where the target object is located according to the bone feature points;
if not, sending second prompt information to prompt the target object to adjust the posture or the position.
Optionally, when the bone feature points include bone key points, before the step of determining the second human image region where the target object is located according to the bone feature points, the method further includes:
acquiring image position information corresponding to each bone feature point;
judging whether the relative position of each bone feature point meets the posture requirement or not according to the image position information;
if yes, executing the step of determining a second human body image area where the target object is located according to the bone feature points;
if not, sending out third prompt information to prompt the target object to adjust the posture.
Optionally, the step of extracting first image information including different clothes worn by the target object from the first image comprises:
analyzing a third image area in the first image, where various clothes worn by the target object are located;
and extracting image information corresponding to each third image area as the first image information.
Optionally, the step of analyzing a third image region in the first image in which various clothes worn by the target object are located includes:
performing a second, clothing-based target detection algorithm on the first image;
and determining each third image area according to the detection result of the second target detection algorithm.
Optionally, the step of analyzing a third image region in the first image in which various clothes worn by the target object are located includes:
analyzing image contours of various clothes worn by the target object in the first image;
and taking the image area formed by each image contour as the third image area.
Optionally, the step of analyzing image contours of various clothes worn by the target object in the first image comprises:
performing pixel-level segmentation on the first image to obtain each image contour; or, alternatively,
identifying contour feature points corresponding to various clothes worn by the target object in the first image;
and determining each image contour according to the contour feature points.
Optionally, after the step of determining the first feature information of each piece of clothing worn by the target object according to the first image information, the method further includes:
inquiring a clothes storage database according to the first characteristic information, and judging whether the corresponding clothes have storage positions in the wardrobe; the clothes storage database comprises characteristic information of a plurality of clothes and the storage positions associated with that characteristic information in the wardrobe;
if not, determining the clothes which do not have the storage positions as third clothes;
outputting storage prompt information corresponding to the third clothes;
acquiring storage position information returned based on the storage prompt information;
and storing the storage position information in association with the first characteristic information in the clothes storage database.
Optionally, after the step of associating and storing the information of the storage location and the first characteristic information to the clothing storage database, the method further includes:
when a position query instruction including first characteristic information is received, querying the clothes storage database based on the first characteristic information;
determining storage position information associated with the first characteristic information as first target position information in the clothes storage database;
outputting the first target position information; and/or,
after the step of querying a clothes storage database according to the first characteristic information and judging whether the corresponding clothes have storage positions in the wardrobe, the method further comprises the following steps of:
determining the clothes having the storage positions as fourth clothes when all the clothes worn by the target object have the storage positions in the wardrobe or when part of the clothes worn by the target object have the storage positions in the wardrobe;
determining existing position information associated with the first characteristic information of the fourth clothing as second target position information in the clothing storage database;
and outputting the second target position information.
Further, in order to achieve the above object, the present application also proposes a control device including: a memory, a processor, and a clothes management program stored on the memory and executable on the processor, the clothes management program, when executed by the processor, implementing the steps of the clothes management method as defined in any of the above.
In addition, in order to achieve the above object, the present application also proposes a wardrobe including:
a camera and/or mirror; and
the control device as described above.
Furthermore, in order to achieve the above object, the present application also proposes a computer-readable storage medium having a clothes management program stored thereon, which, when executed by a processor, implements the steps of the clothes management method as recited in any one of the above.
The invention provides a clothes management method. A first image of a target object is obtained and analyzed, first image information corresponding to the different clothes worn by the target object is extracted, first characteristic information of each piece of clothing is determined according to the first image information, first evaluation information of the clothes collocation of the target object is determined according to the first characteristic information, and the first evaluation information is output. In this way, when trying on clothes, the user learns from the output first evaluation information how well the current outfit matches, and can therefore match clothes correctly, ensuring the correct use of the clothes and improving the user's image.
Drawings
FIG. 1 is a schematic structural view of an embodiment of a wardrobe according to the present invention;
FIG. 2 is a schematic diagram of the hardware involved in the operation of an embodiment of the control device of the present invention;
FIG. 3 is a schematic flow chart of a first embodiment of the clothes management method of the present invention;
FIG. 4 is a schematic flow chart of a second embodiment of the clothes management method of the present invention;
FIG. 5 is a schematic flow chart of a third embodiment of the clothes management method of the present invention;
FIG. 6 is a schematic flow chart of a fourth embodiment of the clothes management method of the present invention;
FIG. 7 is a schematic flow chart of a fifth embodiment of the clothes management method of the present invention;
FIG. 8 is a schematic flow chart of a sixth embodiment of the clothes management method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: acquiring a first image of a target object; extracting first image information including different clothes worn by the target object from the first image; determining first characteristic information of each piece of clothes worn by the target object according to the first image information; determining first evaluation information of the clothes collocation of the target object according to the first characteristic information; and outputting the first evaluation information.
In the prior art, when selecting clothes to wear, a user who owns too many clothes and lacks aesthetic knowledge often does not know how to match them correctly, can only match them at random according to personal feeling, and even after matching does not know whether the outfit is presentable. This is not conducive to the correct use of the user's clothes, and improper matching easily harms the user's image.
The invention provides the solution, and aims to realize that a user knows the matching effect of clothes of the user, so as to ensure the correct utilization of the clothes of the user and improve the image of the user.
The invention provides a wardrobe which is used for storing clothes.
In the present embodiment, referring to fig. 1, the wardrobe specifically includes a camera 1, a mirror 2, and a control device 3. In other embodiments, the wardrobe may comprise the camera 1 and the control device 3 without the mirror 2; or the wardrobe may comprise the mirror 2 and the control device 3, in which case the camera 1 does not belong to the wardrobe but is an image acquisition device on other equipment. In addition, the wardrobe may also include a display device 4 that may be used to output prompt information.
In the present embodiment, the camera 1 is specifically arranged above the mirror 2. The camera 1 is specifically configured to acquire a first image of the target object, while the mirror 2 may be used by the target object to check its appearance while dressing. Whether the camera 1 is part of the wardrobe or an image acquisition module on other equipment (such as a mobile phone, a computer or monitoring equipment), the camera 1 is communicatively connected to the control device 3.
Specifically, the control device 3 may be disposed independently of the wardrobe body used for storing the clothes, or may be mounted inside it. In an embodiment of the present invention, referring to fig. 2, the control device includes a processor 3001 (such as a CPU) and a memory 3002. The memory 3002 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 3002 may alternatively be a storage device separate from the processor 3001.
The processor 3001 is connected to the memory 3002. In addition, the processor 3001 is also in communication connection with the camera 1, and is configured to obtain image data collected by the camera 1. The processor 3001 may also be connected to the display device 4 to control the display device 4 to output prompt information.
Those skilled in the art will appreciate that the configuration of the device shown in fig. 2 is not intended to be limiting of the device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 2, the memory 3002, which is a type of readable storage medium, may store a clothes management program. In the apparatus shown in fig. 2, the processor 3001 may be configured to call the clothes management program stored in the memory 3002 and perform the steps of the clothes management method in the following embodiments.
The invention also provides a clothes management method.
With reference to fig. 3, a first embodiment of a laundry management method of the present invention is proposed, comprising:
step S10, acquiring a first image of the target object;
in this embodiment, the target object is specifically a human, and in other embodiments, the target object may also be an animal, which may be set according to actual needs. The target object may be specifically a preset specific object, and an object arbitrarily appearing in the shooting range of the camera may also be the target object. The number of target objects may be one or more than one.
The first image is specifically image data which is collected by a camera and contains a target object image. Specifically, human body recognition can be continuously performed on image data acquired by the camera, and when human body information corresponding to a human body or a specific user appears in the image data, the image data is used as a first image.
A step S20 of extracting first image information including different clothes worn by the target object from the first image;
the different clothes worn by the target object specifically refer to clothes or articles worn by the target object at different positions, specifically refer to all clothes worn by the target object, and also refer to at least two clothes specified in the clothes worn by the target object. The clothing may include a jacket, a shirt, a hat, shoes, a watch, gloves, etc.
Each piece of clothes can correspondingly form first image information. Specifically, human body recognition can be performed on the first image, and based on the result of the human body recognition, first image information corresponding to different clothes worn by the target object in the first image is further extracted; the first image may also be directly subjected to clothing recognition, and first image information corresponding to different clothing worn by the target object in the first image may be further extracted based on a result of the clothing recognition, and so on. In this embodiment, a deep neural network is generated mainly by collecting a large amount of sample data, the first image is input to the deep neural network for analysis, human body recognition or clothing recognition is performed on the first image through the deep neural network, and the first image information is obtained based on the recognition result.
The first image information may be specifically an image including clothes, or may be a feature vector extracted from an image including clothes, or the like.
Step S30 of determining first characteristic information of each piece of clothing worn by the target object based on the first image information;
the first image information corresponding to each piece of clothes is analyzed, and each piece of first image information can be correspondingly analyzed to obtain first characteristic information of the corresponding piece of clothes. The first feature information specifically indicates attribute information of clothing worn by the target object corresponding to the first image information. The first characteristic information may specifically include the color (e.g., red, yellow, etc.) of the clothing, the material (e.g., wool, cotton, etc.), the brand, the type (e.g., jacket, dress, shorts, pants, etc.), and the like.
Specifically, a preset algorithm may be adopted to directly extract image features in the first image information, and the first feature information may be determined based on the extracted image features. In addition, the first image information can be analyzed based on a local or cloud clothes database, and the clothes attribute matched with the first image information in the clothes database is used as the first characteristic information corresponding to the first image information.
Step S40, determining first evaluation information of the target object clothes matching according to the first characteristic information;
the first evaluation information may be specifically an evaluation score, an evaluation grade, or the like. In the present embodiment, the first evaluation information is specifically an evaluation score. Specifically, some basic matching rules can be set in advance based on the first characteristic information of different clothes, and the matching rules include combinations of different first characteristic information and corresponding scores thereof. Based on the collocation rule, the evaluation score corresponding to the first characteristic information can be obtained at present. The evaluation score can be used as first evaluation information of the current collocation effect representation of different clothes worn by the target object.
Step S50, outputting the first evaluation information.
Specifically, the first evaluation information may be output by a display, voice, or the like.
According to the clothes management method provided by the embodiment of the invention, the first image of the target object is obtained and analyzed, the first image information corresponding to the different clothes worn by the target object is extracted, the first characteristic information of each piece of clothing is determined according to the first image information, the first evaluation information of the clothes collocation is determined according to the first characteristic information, and the first evaluation information is output. Thus, when trying on clothes, the user can learn from the output first evaluation information how well the current outfit matches, and can therefore match clothes correctly, ensuring the correct use of the clothes and improving the user's image.
Specifically, in the first embodiment, the step S30 includes:
step S31, analyzing the first image information based on a clothes database, wherein the clothes database comprises second image information of a plurality of kinds of clothes and corresponding second characteristic information;
the clothing database may be a local database, or may be a cloud database. Wherein, for the follow-up first characteristic information of confirming more accurate, the clothing database adopts the high in the clouds database. The clothing database comprises second image information corresponding to a plurality of types (different brands, styles, materials, colors and the like) of clothing, and in addition, different second image information corresponds to different second characteristic information in the clothing database. The first feature information specifically refers to attribute information of the clothing corresponding to the second image information. The second characteristic information may specifically include the color (e.g., red, yellow, etc.) of the clothing, the material (e.g., wool, cotton, etc.), the brand, the type (e.g., jacket, dress, shorts, pants, etc.), and the like.
The second image information may be specifically an image including clothes, or may be a feature vector extracted from an image including clothes, or the like.
A step S32 of determining second image information matched with the first image information in the clothing database as target image information;
the similarity between the first image information and the second image information is greater than or equal to a preset threshold, and it can be determined that the first image information matches, so that the second image information having a similarity greater than or equal to the preset threshold with the first image information can be used as the target image information.
Alternatively, the first image information may be a feature vector extracted from the corresponding clothes image, and likewise the second image information. The feature vector can be obtained by extracting features of the image with a scale-invariant feature transform (SIFT) algorithm or a convolutional neural network (for example, a vector formed from the VGG-16 convolutional layers in an SSD network). The distance (for example, Euclidean distance or cosine distance) between the two feature vectors corresponding to the first image information and the second image information can then be calculated, and the second image information with the shortest distance from the first image information is taken as the target image information. Alternatively, a distance threshold may be set in advance, and after the distances are calculated, second image information whose distance from the first image information is less than or equal to the distance threshold may be used as the target image information.
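For illustration, a minimal sketch of the nearest-neighbour matching described above, assuming the feature vectors already exist (for example, SIFT- or CNN-derived embeddings); the threshold value is an assumption:

```python
import numpy as np

def match_clothing(first_vec: np.ndarray,
                   database_vecs: np.ndarray,
                   max_cosine_distance: float = 0.3) -> int | None:
    """Return the index of the closest database entry, or None.

    first_vec: feature vector of the worn garment, shape (d,).
    database_vecs: second-image-information vectors, shape (n, d).
    """
    # Cosine distance = 1 - cosine similarity.
    a = first_vec / np.linalg.norm(first_vec)
    b = database_vecs / np.linalg.norm(database_vecs, axis=1, keepdims=True)
    distances = 1.0 - b @ a
    best = int(np.argmin(distances))
    # Treat the nearest entry as a match only if it is close enough.
    return best if distances[best] <= max_cosine_distance else None
```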
Step S33 is to use the second feature information corresponding to the target image information as the first feature information for each piece of clothing worn by the target object.
In this embodiment, the first image is analyzed based on the clothing database, and based on the analysis result, the first characteristic information of each piece of clothing worn by the target object can be accurately analyzed, so that the accuracy of matching evaluation of the target object is improved.
Further, based on the first embodiment, a second embodiment of the clothes management method is provided. In the second embodiment, referring to fig. 4, after the step S30, the method further includes:
step S60, selecting at least one garment worn by the target object as a first garment;
among the clothes worn by the target object, one or more clothes may be selected as the first clothes. Specifically, clothes with a score higher than or equal to a preset score in clothes worn by the target object can be selected as the first clothes. For example, when the target object wears a jacket, a skirt, a hat, and shoes, one of the jacket, the skirt, the hat, and the shoes may be selected as the first clothing. Or when the grade of the matching of the hat and the shoes is higher than or equal to the preset grade, the hat and the shoes are used as the first clothes.
Step S70, acquiring a plurality of clothes combinations based on first characteristic information corresponding to the first clothes; the clothes combination comprises the first clothes and a second clothes matched with the first clothes;
specifically, according to a preset matching rule, second feature information matched with the first feature information can be determined, and the first clothes corresponding to the first feature information and the second clothes corresponding to the second feature information form the obtained clothes combination. In addition, a plurality of preset clothes combinations with preset configuration can be obtained, each preset clothes combination comprises at least two preset clothes and corresponding preset characteristic information, the preset characteristic information matched with the first characteristic information in the preset clothes combinations is determined, and the preset clothes combination corresponding to the preset characteristic information is used as the obtained clothes combination. Wherein, the obtained clothes combination can have one or more than one according to the actual matching condition.
Step S80 of determining second evaluation information corresponding to each clothing combination;
the second evaluation information can be determined in various manners, and can be determined according to the acquisition manner of the clothes combination, and different acquisition manners adopt different manners to determine the second evaluation information. Specifically, when the clothing combination is obtained based on the brand party self-recommendation mode, the score of the brand party for the clothing combination evaluation can be obtained and used as second evaluation information corresponding to the clothing combination; if the clothes combination is given by the system based on the preset collocation rule, the obtained clothes combination can be pushed to experts (such as fashion experts, designers, models and the like), the evaluation score returned by the experts is obtained, and the score is used as second evaluation information corresponding to the clothes combination.
And step S90, outputting collocation suggestion information according to the second evaluation information and the corresponding clothes combination.
Specifically, the obtained clothes combination and the corresponding second evaluation information thereof can be used as collocation suggestion information, so that the user can know the advantages and disadvantages of different clothes collocation effects. In addition, one or more clothes combinations with the highest evaluation can be determined based on the second evaluation information, and the one or more clothes combinations with the highest evaluation are directly used as collocation suggestion information, or the one or more clothes combinations with the highest evaluation are combined with the corresponding second evaluation information to be used as the collocation suggestion information. Specifically, the collocation advice information may be output in a manner of image display, voice prompt, or the like.
Further, the second clothes may be clothes the user already owns, or clothes the user does not own. Specifically, the clothes storage database may be queried based on the characteristic information corresponding to the second clothes to judge whether the second clothes has a storage position in the wardrobe; if so, the second clothes is considered to be owned by the user, and if not, not owned. When the second clothes is not owned by the user, purchasing channel information (such as a shopping link) corresponding to the second clothes is output along with the collocation suggestion information, so that the user can purchase it.
In this embodiment, the collocation suggestion information is output in the above manner, so that the user can acquire the collocation condition of the currently worn clothes and other clothes from the collocation suggestion information, and know how to make the collocation effect of the currently worn clothes better. When the clothes combination with higher evaluation is output from the collocation suggestion information, the user can intuitively know how to collocate the clothes.
Specifically, in the second embodiment, step S80 includes:
step S81, acquiring the reference evaluation information of each clothes combination, and acquiring the channel characteristic information acquired by each clothes combination; the channel characteristic information comprises an acquisition source, an access amount and/or aging information;
specifically, the clothes combination can be scored based on a preset collocation rule to obtain the reference evaluation information of the clothes combination; or directly acquiring the grade of a brand party providing the clothes combination as the reference evaluation information of the clothes combination; or the evaluation score returned by the expert about the clothes combination is acquired as the reference evaluation information of the clothes combination.
Step S82, generating the weight of each clothing combination according to the channel characteristic information;
Different channel characteristic information may correspond to different weights. Specifically, an acquisition source that is a certified website carries a greater weight than a non-certified one; the larger the access amount of a clothes combination, the larger its weight; and the shorter the time between the creation of the clothes combination and the present, the larger its weight. The weight of each clothes combination can thus be calculated from its acquisition source, access amount and/or aging information.
Step S83 is to determine second evaluation information corresponding to each clothing combination based on the reference evaluation information and the weight corresponding thereto.
Specifically, the reference evaluation information may be a reference evaluation score. The product of the reference evaluation score of the clothing combination and the corresponding weight can be used as the second evaluation information corresponding to the clothing combination.
In this embodiment, the second evaluation information is obtained in the above manner, so that the accuracy of the obtained second evaluation information of the clothes combination can be improved, and more accurate and effective collocation recommendation information is provided for the user.
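A hedged sketch of steps S81 to S83; the weighting factors below are illustrative assumptions, since the patent does not fix how the acquisition source, access amount and aging information map to a weight:

```python
# Illustrative weighting; the factor values are assumed, not specified
# by the patent.

def combination_weight(source_certified: bool, visits: int,
                       age_days: float) -> float:
    """Derive a weight from the channel characteristic information."""
    w = 1.2 if source_certified else 0.8   # certified sources count more
    w *= min(1.0 + visits / 10_000, 2.0)   # popular combinations count more
    w *= max(0.5, 1.0 - age_days / 365)    # recent combinations count more
    return w

def second_evaluation(reference_score: float, weight: float) -> float:
    # Step S83: second evaluation info = reference score * weight.
    return reference_score * weight
```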
Further, based on any of the above embodiments, a third embodiment of the clothes management method is provided. In the third embodiment, referring to fig. 5, the step S20 includes:
step S21, analyzing a first image area in which a plurality of characteristic parts of the target object are located in the first image;
and analyzing the first image through a deep neural network to obtain the first image areas where a plurality of characteristic parts of the target object are located. A characteristic part here specifically refers to a part of the target object on which clothing can be worn, and may include the head, upper torso, thighs, calves, feet, forearms, upper arms, and the like. The characteristic parts can be divided according to actual requirements; in other embodiments they may instead include the head, upper body, lower body, feet, etc.
The human body image area where the target object is located in the first image can be identified, and the first image area corresponding to each characteristic part is identified in the human body image area. Specifically, step S21 may include:
step S211, executing a first target detection algorithm based on human body on the first image;
for example, a first target detection algorithm such as YOLO, SSD, or Faster R-CNN may be performed on the first image to detect the human body image in the first image.
Step S212, determining a first human body image area where the target object is located according to the detection result of the first target detection algorithm;
the first target detection algorithm detects a circumscribed rectangle frame in the first image to represent a first human body image area where the target object is located.
Step S213, dividing the first human body image region according to a proportional relationship to obtain each first image region.
The first human body image area is divided according to preset proportional relations of the set different characteristic parts in the human body image area, for example, the first human body image area is divided according to the preset proportional relations among the head, the upper body, the lower body and the feet to obtain four first image areas.
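For illustration, steps S212 and S213 might be sketched as follows; the head/upper-body/lower-body/feet ratios are assumptions, since the patent leaves the proportional relation configurable:

```python
# Illustrative proportional division of the detected person box.
BODY_RATIOS = {"head": 0.15, "upper_body": 0.35,
               "lower_body": 0.40, "feet": 0.10}  # assumed ratios

def split_body_box(x: int, y: int, w: int, h: int) -> dict[str, tuple]:
    """x, y, w, h: circumscribed rectangle from the person detector
    (e.g. YOLO / SSD / Faster R-CNN). Returns one sub-box per part."""
    regions, top = {}, y
    for part, ratio in BODY_RATIOS.items():
        part_h = round(h * ratio)
        regions[part] = (x, top, w, part_h)
        top += part_h
    return regions
```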
In step S22, image information corresponding to each of the first image regions is extracted as image information corresponding to clothing worn by each of the characteristic portions, so as to obtain the first image information.
Specifically, the image in each first image region can be directly used as the corresponding image information; or a feature vector obtained by extracting features of the image in each first image region may be used as the corresponding image information. Since the characteristic portion is a portion of the garment worn by the target object, the image information in the first image region corresponding to the characteristic portion necessarily includes the first image information corresponding to the garment worn at the portion, and therefore the first image information of the corresponding garment can be represented by the image information corresponding to the first image region.
In this embodiment, by identifying the first image corresponding to the characteristic portion of the target object in the first image and representing the first image information of the clothing worn by the characteristic portion by using the image information corresponding to the first image of the characteristic portion, the effective and accurate identification of the first image information corresponding to different clothing worn by the target object in the first image is realized, so as to ensure the accurate evaluation of clothing matching of the target object.
The first human body image area is a rectangular area, namely the circumscribed rectangular frame obtained from the detection result of the first target detection algorithm. Before step S213, the method further includes: acquiring the ratio of the length to the width of the first human body image area; when the ratio is within a threshold range, executing the step of dividing the first human body image area according to the proportional relation to obtain each first image area; and when the ratio is outside the threshold range, sending first prompt information to prompt the target object to adjust its posture. Specifically, when the ratio is within the threshold range, the target object is in a standing posture, and the first human body image area is divided; if the ratio is not within the threshold range, the target object is not standing, so the first human body image area is not divided, first prompt information is sent, and after the target object adjusts its posture the method returns to step S10 to re-acquire and re-identify the image. Further, when the ratio is within the threshold range, before executing step S213, the method may also judge whether there is exactly one first human body image area whose ratio falls within the range; if so, step S213 is executed; if not, prompt information is sent to prompt an adjustment of the number of persons in view, and the method returns to step S10. Limiting the length-to-width ratio and/or the number of first human body image areas further improves the accuracy of the first image areas extracted for the characteristic parts of the target object, and thus the accuracy of the clothes collocation evaluation.
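A small sketch of the gating checks just described; the threshold range is an assumed stand-in for the patent's unspecified values:

```python
# Illustrative gating before Step S213.
RATIO_RANGE = (2.0, 4.0)  # assumed height/width range for a standing pose

def ready_to_divide(person_boxes: list[tuple[int, int, int, int]]) -> bool:
    """person_boxes: (x, y, w, h) rectangles from the first detector.
    Division proceeds only if exactly one box has a standing ratio."""
    standing = [b for b in person_boxes
                if RATIO_RANGE[0] <= b[3] / b[2] <= RATIO_RANGE[1]]
    return len(standing) == 1
```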
Further, based on any of the above embodiments, a fourth embodiment of the clothes management method is provided. In the fourth embodiment, referring to fig. 6, the step S20 includes:
step S201, identifying skeleton characteristic points of a human body in the first image;
the bone feature points are image points representing the positions of different bones of a human body. Specifically, a human bone point detection algorithm can be used to detect the bone feature points of the human body in the first image; such algorithms include OpenPose, AlphaPose, and the like.
Step S202, determining a second human body image area where the target object is located according to the bone feature points;
specifically, the circumscribed rectangle of all the skeletal feature points is computed. This rectangle may be used directly as the second human body image area where the target object is located in the first image, or it may be enlarged by a preset distance about its center, with the image area corresponding to the enlarged rectangle in the first image taken as the second human body image area.
Step S203, dividing the second human body image area based on the bone characteristic points to obtain second image areas where a plurality of characteristic parts of the target object are located;
specifically, the key positions corresponding to different parts of the human body in the second human body image region can be determined based on the bone feature points, and the second human body image region is divided based on the key positions to obtain the second image region where different feature parts of the target object are located. For example, the position of the shoulders, the position of the pelvis, the position of the ankle, and the like of the human body in the second human body image region are determined based on the skeletal feature points, and the second human body image region is divided into four second image regions in which the head, the upper body, the lower body, and the feet are located based on these several key positions.
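For illustration, steps S202 and S203 might be sketched as follows under the assumption of COCO-style keypoint names (as produced by OpenPose or AlphaPose); the margin value is an assumption:

```python
import numpy as np

def split_by_keypoints(kps: dict[str, tuple[float, float]],
                       margin: float = 20.0) -> dict[str, tuple]:
    """kps maps COCO-style names ('nose', 'left_shoulder', 'left_hip',
    'left_ankle', ...) to (x, y) image coordinates."""
    pts = np.array(list(kps.values()))
    x0, y0 = pts.min(axis=0) - margin  # enlarged circumscribed rectangle
    x1, y1 = pts.max(axis=0) + margin  # (second human body image area)
    shoulder_y = (kps["left_shoulder"][1] + kps["right_shoulder"][1]) / 2
    hip_y = (kps["left_hip"][1] + kps["right_hip"][1]) / 2
    ankle_y = (kps["left_ankle"][1] + kps["right_ankle"][1]) / 2
    # Split the body area at the shoulders, hips and ankles into the
    # second image areas for head, upper body, lower body and feet.
    bounds = [y0, shoulder_y, hip_y, ankle_y, y1]
    parts = ["head", "upper_body", "lower_body", "feet"]
    return {p: (x0, bounds[i], x1, bounds[i + 1])
            for i, p in enumerate(parts)}
```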
In step S204, image information corresponding to each of the second image regions is extracted as image information corresponding to clothing worn by each of the feature portions, so as to obtain the first image information.
Specifically, the image in each second image region can be directly used as the corresponding image information; or a feature vector obtained by extracting features of the images in the second image regions may be used as the corresponding image information. Since the characteristic portion is a portion of the garment worn by the target object, the image information in the second image region corresponding to the characteristic portion necessarily includes the first image information corresponding to the garment worn at the portion, and therefore the first image information of the corresponding garment can be represented by the image information corresponding to the second image region.
In this embodiment, the human body image area where the target object is located is identified and divided through the bone feature points to obtain a second image area where the feature part of the target object is located, and the first image information of the clothing worn by the part is represented by the image information corresponding to the second image area, so that the effective and accurate identification of the first image information corresponding to different clothing worn by the target object in the first image is realized, and the accurate evaluation of clothing matching of the target object is ensured.
Further, in the fourth embodiment, step S202 is preceded by: judging whether the bone feature points include the bone key points; if so, executing step S202; if not, sending second prompt information to prompt the target object to adjust its posture or position. The number of bone feature points is greater than or equal to the number of bone key points. The bone feature points are image points representing the positions of different bones of a human body; the bone key points are image points representing the extremities of the whole human skeleton. Specifically, the connecting point of the neck and the body can be used as a first bone key point, the top of the head as a second bone key point, the sole of either the left or right foot as a third bone key point, the shoulders as further bone key points, and so on. If the bone feature points include all the bone key points, the first image contains a whole-body image of the target object; the second human body image area can then be determined based on the bone feature points, and the collocation evaluation information for the clothes worn by the target object obtained from it. If any of the above bone key points is missing from the bone feature points, the first image does not contain a whole-body image of the target object; the first image is then not processed further, and second prompt information is sent to prompt the user to adjust posture or position, after which the method returns to step S10 to re-acquire and re-identify the first image of the target object.
Still further, when the bone feature points include the bone key points, before performing step S202, the method further includes: acquiring image position information corresponding to each bone feature point, judging whether the relative positions of the bone feature points meet a posture requirement according to the image position information, executing step S202 if they do, and sending third prompt information to prompt the target object to adjust its posture if they do not. The relative positions of the bone feature points when the human body is in a standard standing posture are taken as the posture requirement. For example, the image position information of the hand bone feature points, the shoulder-neck connection bone feature point, the skull bone key point and so on is acquired, and it is judged from the image position information whether the hand bone feature points are higher than the shoulder bone feature points, whether the angle formed at the shoulder-neck connection point by the two shoulder bone feature points differs from 180 degrees by less than a preset threshold, whether the two shoulder bone feature points lie on the same horizontal line, and whether the skull bone feature point is higher than the shoulder-neck connection point. When the hand bone feature points are lower than the shoulder bone feature points, the difference between the shoulder-neck angle and 180 degrees is smaller than the preset threshold, the two shoulder bone feature points lie on the same horizontal line, and the skull bone feature point is higher than the shoulder-neck connection point, the relative positions of the bone feature points are judged to meet the posture requirement; otherwise they are judged not to meet it.
In the above manner, by judging whether the bone feature points include bone key points and/or judging whether the relative positions of the bone feature points meet posture conditions, the second human body image area is further determined and analyzed based on the bone feature points when the bone feature points include the bone key points and meet the posture conditions, otherwise, the image is obtained again for recognition after the target object is adjusted, so that the accuracy of the obtained first image information of the target object wearing clothes can be further improved, and the accuracy of the matching evaluation of the clothes of the target object is further improved.
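A hedged sketch of the posture check described above; the keypoint names and tolerance are assumptions:

```python
# Illustrative posture check against a standard standing pose.
def posture_ok(kps: dict[str, tuple[float, float]],
               level_tol: float = 15.0) -> bool:
    """Image y grows downward, so 'higher' means a smaller y value."""
    ls, rs = kps["left_shoulder"], kps["right_shoulder"]
    hands_down = (kps["left_wrist"][1] > ls[1]
                  and kps["right_wrist"][1] > rs[1])
    shoulders_level = abs(ls[1] - rs[1]) <= level_tol
    head_above = kps["nose"][1] < min(ls[1], rs[1])
    return hands_down and shoulders_level and head_above
```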
Further, based on any of the above embodiments, a fifth embodiment of the clothes management method of the present application is provided. In the fifth embodiment, referring to fig. 7, the step S20 further includes:
a step S20a of analyzing a third image area in which various clothes worn by the target object are located in the first image;
and directly carrying out clothes identification on the first image, and determining a third image area where various clothes worn by the target object are located according to the identification result. Specifically, a second target detection algorithm based on clothes may be performed on the first image to identify clothes in the first image, and each of the third image regions may be determined according to a detection result of the second target detection algorithm. The second target detection algorithm may be embodied as an SSD algorithm. Specifically, the second target detection algorithm detects a circumscribed rectangle in the first image to represent a third image region where clothing worn by the target object is located.
Further, the image contours of the various clothes worn by the target object in the first image may be analyzed, and the image area enclosed by each image contour is taken as a third image area. Specifically, the first image may be subjected to pixel-level segmentation based on a deep neural network to obtain each image contour; alternatively, contour feature points corresponding to the various clothes worn by the target object may be identified based on the deep neural network, and each image contour determined from those contour feature points. In step S20b, image information corresponding to each third image area is extracted as the first image information. Specifically, the image in each third image area can be used directly as the corresponding image information, or a feature vector extracted from the image in each third image area may be used as the corresponding image information.
In the embodiment, by analyzing the third image area where the clothes worn by the target object in the first image are located, the corresponding first image information is extracted based on the third image area, so that the accurate evaluation of the current clothes collocation of the target object is realized based on the first image information.
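For illustration, the contour branch might be sketched with OpenCV as follows, assuming a segmentation network has already produced one binary mask per garment:

```python
import cv2
import numpy as np

def third_image_areas(first_image: np.ndarray,
                      garment_masks: list[np.ndarray]) -> list[np.ndarray]:
    """Crop one third image area per garment from its segmentation mask."""
    crops = []
    for mask in garment_masks:  # one uint8 {0, 255} mask per garment
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        # The area enclosed by the largest contour is the garment region.
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        crops.append(first_image[y:y + h, x:x + w])
    return crops
```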
Based on the third to fifth embodiments described above, in executing step S20, one of the third to fifth embodiments may be selected for extracting first image information of different clothes worn by the target object in the first image.
Further, based on the third to fifth embodiments above, when executing step S20, the image information of the different clothes worn by the target object may also be extracted according to the schemes of at least two of the third to fifth embodiments, yielding at least two sets of extraction results. Within these sets, the image information corresponding to the same garment can be fitted together, and the fitting result used as the first image information for that garment. For example, third image information corresponding to the second image areas where the characteristic parts of the target object are located is extracted by the skeleton-point recognition method of the fourth embodiment, and fourth image information corresponding to the third image areas where the various worn clothes are located is extracted by the clothes recognition method of the fifth embodiment; the third and fourth image information representing the same garment can then be determined from the association between a characteristic part and the garment worn there, and the third image information whose matching degree with the fourth image information is greater than or equal to a preset threshold is taken as the first image information. Extracting the image information in multiple ways and fitting the results effectively improves the accuracy of the obtained first image information, and hence the accuracy of the clothes collocation evaluation.
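A minimal sketch of fitting two sets of extraction results; intersection-over-union is used here as an assumed stand-in for the patent's unspecified "matching degree":

```python
def iou(a: tuple, b: tuple) -> float:
    """Boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(skeleton_regions: list, detector_regions: list,
         threshold: float = 0.5) -> list:
    """Keep skeleton-derived regions confirmed by the clothing detector."""
    return [s for s in skeleton_regions
            if any(iou(s, d) >= threshold for d in detector_regions)]
```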
Based on any of the above embodiments, a sixth embodiment of the clothes management method of the present application is provided. In the sixth embodiment, referring to fig. 8, after step S30, the method further includes:
Step S01, querying a clothes storage database according to the first characteristic information, and judging whether the corresponding clothes have storage positions in the wardrobe; the clothes storage database comprises characteristic information of a plurality of clothes and the storage positions of the plurality of clothes in the wardrobe; if not, step S02 is executed.
The clothes storage database is a pre-configured information base about the position information and the characteristic information of the clothes stored in the wardrobe.
Step S02, determining the clothes that do not have storage positions as third clothes;
That is, among all the clothes corresponding to the first characteristic information, the clothes that do not have storage positions in the wardrobe are determined to be the third clothes.
Step S03, outputting storage prompt information corresponding to the third clothes;
After receiving the storage prompt information, the user may decide whether to store the third clothes. If the user decides to store the third clothes, information about their storage location may be input.
Step S04, obtaining the storage position information returned based on the storage prompt information;
When the storage position of the third clothes is included in the information returned based on the storage prompt information, the corresponding content is extracted as the storage position information.
Step S05, storing the storage position information and the first characteristic information in the clothes storage database in association with each other.
In this embodiment, when clothes worn by the target object have no corresponding storage position in the wardrobe, the user is prompted to store them; the storage position information set by the user is acquired and associated with the first characteristic information of the clothes, and the clothes storage database is updated accordingly, which makes it convenient for the user to manage both new and old clothes. By consulting the clothes storage database, the user can find out where each piece of clothing is kept. A minimal sketch of this flow is given below.
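The following sketch walks through steps S01 to S05 with an in-memory SQLite table standing in for the clothes storage database. The schema, the string-valued characteristic information, and the prompt_user callback are assumptions made for illustration; the method does not specify how the database or the prompt is implemented.

# Sketch of steps S01-S05: query the clothes storage database by first
# characteristic information, prompt for any third clothes (those with no
# stored location), and store the returned location in association with the
# characteristic information. Schema and prompt mechanism are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wardrobe (feature TEXT PRIMARY KEY, location TEXT)")

def manage_storage(first_feature_info, prompt_user):
    for feature in first_feature_info:
        row = db.execute(
            "SELECT location FROM wardrobe WHERE feature = ?", (feature,)
        ).fetchone()
        if row is None:  # S01/S02: no storage position, so this is third clothes
            location = prompt_user(f"Where will '{feature}' be stored?")  # S03/S04
            if location:  # S05: associate and store
                db.execute(
                    "INSERT INTO wardrobe (feature, location) VALUES (?, ?)",
                    (feature, location),
                )
                db.commit()

# Hypothetical usage: the coat has no stored location, so the user is prompted.
manage_storage(["red wool coat"], prompt_user=lambda msg: "drawer 2")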
Further, when all of the clothes worn by the target object already have storage positions in the wardrobe, or when only some of them do, the clothes that already have storage positions are determined to be fourth clothes; the existing position information associated with the first characteristic information of the fourth clothes in the clothes storage database is determined to be second target position information; and the second target position information is output. Outputting the second target position information of the clothes corresponding to the first characteristic information lets the user know where those clothes are placed in the wardrobe, which makes it convenient for the user to put them away.
On this basis, step S05 may be followed by: when a position query instruction including first characteristic information is received, querying the clothes storage database based on the first characteristic information; determining the storage position information associated with the first characteristic information in the clothes storage database as first target position information; and outputting the first target position information. For example, the user can query the position of a piece of clothing by directly inputting its first characteristic information, which forms a position query instruction. When the position query instruction is received, the storage position information associated with the first characteristic information in the clothes storage database is looked up and output as the first target position information, so that when the user forgets where in the wardrobe a piece of clothing is stored, its position can be found through the first characteristic information.
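Continuing the sketch above, the position query reduces to a single lookup against the same assumed table; the function name query_position and the None return convention are illustrative.

# Sketch of the position query, reusing the `db` table from the sketch above:
# look up the first target position information by first characteristic
# information and return it (None when no position is stored).
def query_position(first_feature_info):
    row = db.execute(
        "SELECT location FROM wardrobe WHERE feature = ?", (first_feature_info,)
    ).fetchone()
    return row[0] if row else None  # first target position information

print(query_position("red wool coat"))  # -> drawer 2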
Furthermore, an embodiment of the present invention also provides a readable storage medium in which a clothes management program is stored; when executed by a processor, the clothes management program implements the relevant steps of any embodiment of the clothes management method described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a wardrobe, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (14)

1. A clothes management method, characterized in that the clothes management method comprises the steps of:
acquiring a first image of a target object;
extracting first image information including different clothes worn by the target object from the first image;
determining first characteristic information of each piece of clothes worn by the target object according to the first image information;
determining first evaluation information of the clothes collocation of the target object according to the first characteristic information;
and outputting the first evaluation information.
2. The clothes management method according to claim 1, wherein the step of determining first characteristic information of each piece of clothes worn by the target object according to the first image information comprises:
analyzing the first image information based on a clothes database, wherein the clothes database comprises second image information of a plurality of kinds of clothes and corresponding second characteristic information;
determining second image information matched with the first image information in the clothing database as target image information;
and taking the second characteristic information corresponding to the target image information as the first characteristic information of each piece of clothes worn by the target object.
3. The clothes management method according to claim 1, wherein, after the step of determining first characteristic information of each piece of clothes worn by the target object according to the first image information, the method further comprises:
selecting at least one piece of clothing worn by the target object as first clothing;
acquiring a plurality of clothes combinations based on first characteristic information corresponding to the first clothes; the clothes combination comprises the first clothes and a second clothes matched with the first clothes;
determining second evaluation information corresponding to each clothes combination;
and outputting collocation suggestion information according to the second evaluation information and the corresponding clothes combination.
4. The clothes management method according to claim 3, wherein the step of determining second evaluation information corresponding to each of the clothes combinations comprises:
acquiring reference evaluation information of each clothes combination, and acquiring channel characteristic information of the channel from which each clothes combination is acquired; the channel characteristic information comprises an acquisition source, an access amount and/or aging information;
correspondingly generating a weight for each clothes combination according to the channel characteristic information;
and determining the second evaluation information corresponding to each clothes combination according to the reference evaluation information and the corresponding weight.
5. The clothes management method according to any one of claims 1 to 4, wherein the step of extracting first image information including different clothes worn by the target object from the first image comprises:
analyzing a first image area in which a plurality of characteristic parts of the target object are located in the first image;
and extracting image information corresponding to each first image area as image information corresponding to the clothing worn by each characteristic part to obtain the first image information.
6. The clothes management method according to claim 5, wherein the step of analyzing a first image area in which a plurality of characteristic parts of the target object are located in the first image comprises:
performing a first target detection algorithm based on the human body on the first image;
determining a first human body image area where the target object is located according to the detection result of the first target detection algorithm;
and dividing the first human body image area according to a proportional relation to obtain each first image area.
7. The clothes management method according to any one of claims 1 to 4, wherein the step of extracting first image information including different clothes worn by the target object from the first image comprises:
identifying skeletal feature points of a human body in the first image;
determining a second human body image area where the target object is located according to the skeletal feature points;
dividing the second human body image area based on the skeletal feature points to obtain second image regions where a plurality of characteristic parts of the target object are located;
and extracting image information corresponding to each second image region as the image information corresponding to the clothing worn by each characteristic part, to obtain the first image information.
8. The clothes management method according to any one of claims 1 to 4, wherein the step of extracting first image information including different clothes worn by the target object from the first image comprises:
analyzing a third image area in the first image, where various clothes worn by the target object are located;
and extracting image information corresponding to each third image area as the first image information.
9. The clothes management method according to claim 8, wherein the step of analyzing a third image area in the first image in which various clothes worn by the target object are located comprises:
analyzing image contours of various clothes worn by the target object in the first image;
and taking the image area formed by each image contour as the third image area.
10. The clothes management method according to any one of claims 1 to 4, wherein, after the step of determining first characteristic information of each piece of clothes worn by the target object according to the first image information, the method further comprises:
inquiring a clothes storage database according to the first characteristic information, and judging whether the corresponding clothes have storage positions in the wardrobe or not; the clothes storage database comprises characteristic information of a plurality of clothes and storage positions of the plurality of clothes in the wardrobe;
if not, determining the clothes which do not have the storage positions as third clothes;
outputting storage prompt information corresponding to the third clothes;
acquiring storage position information returned based on the storage prompt information;
and storing the storage position information and the first characteristic information in the clothes storage database in association with each other.
11. The clothes management method according to claim 10, wherein, after the step of storing the storage position information and the first characteristic information in the clothes storage database in association with each other, the method further comprises:
when a position query instruction including first characteristic information is received, querying the clothes storage database based on the first characteristic information;
determining storage position information associated with the first characteristic information as first target position information in the clothes storage database;
outputting the first target position information; and/or,
after the step of querying a clothes storage database according to the first characteristic information and judging whether the corresponding clothes have storage positions in the wardrobe, the method further comprises:
determining the clothes having the storage positions as fourth clothes when all the clothes worn by the target object have the storage positions in the wardrobe or when part of the clothes worn by the target object have the storage positions in the wardrobe;
determining existing position information associated with the first characteristic information of the fourth clothing as second target position information in the clothing storage database;
and outputting the second target position information.
12. A control device, characterized in that the control device comprises: a memory, a processor, and a clothes management program stored on the memory and executable on the processor, the clothes management program, when executed by the processor, implementing the steps of the clothes management method according to any one of claims 1 to 11.
13. A wardrobe, comprising:
a camera and/or mirror; and
the control device of claim 12, the camera being connected to the control device.
14. A computer-readable storage medium, characterized in that a clothes management program is stored on the computer-readable storage medium, and the clothes management program, when executed by a processor, implements the steps of the clothes management method according to any one of claims 1 to 11.
CN202010026862.7A 2020-01-10 2020-01-10 Clothes management method, control device, wardrobe and computer-readable storage medium Pending CN111242016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010026862.7A CN111242016A (en) 2020-01-10 2020-01-10 Clothes management method, control device, wardrobe and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111242016A true CN111242016A (en) 2020-06-05

Family

ID=70868398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010026862.7A Pending CN111242016A (en) 2020-01-10 2020-01-10 Clothes management method, control device, wardrobe and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111242016A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106139564A (en) * 2016-08-01 2016-11-23 纳恩博(北京)科技有限公司 Image processing method and device
CN107038455A (en) * 2017-03-22 2017-08-11 腾讯科技(深圳)有限公司 A kind of image processing method and device
JP2019009752A (en) * 2017-06-20 2019-01-17 一般社団法人 日本画像認識協会 Image processing device
JP2019191696A (en) * 2018-04-19 2019-10-31 株式会社ディースピリット Evaluation apparatus and evaluation system
WO2019217903A1 (en) * 2018-05-11 2019-11-14 Visionairy Health, Inc. Automated screening of medical data
CN110648186A (en) * 2018-06-26 2020-01-03 杭州海康威视数字技术股份有限公司 Data analysis method, device, equipment and computer readable storage medium
JP6596804B1 (en) * 2018-08-24 2019-10-30 独立行政法人日本スポーツ振興センター Position tracking system and position tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Ke, WANG Wei, YIN Baocai: "Research on automatic bone age assessment based on CHN", Journal of Computer Research and Development, no. 07 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591555A (en) * 2021-06-18 2021-11-02 青岛海尔科技有限公司 Clothes management method and device, storage medium and wardrobe

Similar Documents

Publication Publication Date Title
US10747826B2 (en) Interactive clothes searching in online stores
US10789454B2 (en) Image processing device, image processing method, and computer program product
Hara et al. Fashion apparel detection: the role of deep convolutional neural network and pose-dependent priors
US11416905B2 (en) Information processing device, information processing method, and information processing program for associating categories with items using feature points of a reference image
WO2020119311A1 (en) Neural network training method and image matching method and device
CN110678878B (en) Apparent feature description attribute identification method and device
US11475500B2 (en) Device and method for item recommendation based on visual elements
CN113191843A (en) Simulation clothing fitting method and device, electronic equipment and storage medium
WO2016139964A1 (en) Region-of-interest extraction device and region-of-interest extraction method
CN114723860B (en) Method, device and equipment for generating virtual image and storage medium
US11727463B2 (en) Systems and methods of image-based neural network apparel recommendation
CN111242016A (en) Clothes management method, control device, wardrobe and computer-readable storage medium
CN106557489B (en) Clothing searching method based on mobile terminal
CN114201681A (en) Method and device for recommending clothes
Cushen et al. Mobile visual clothing search
CN112766209A (en) Dressing detection method, dressing detection system, and computer-readable storage medium
JP6387290B2 (en) Image search device, image registration device, image feature selection device, method, and program
CN111429210A (en) Method, device and equipment for recommending clothes
KR20160112664A (en) Apparatus and Method for Providing Advertisement Using User's Characteristic Information
JP2011039854A (en) Display terminal unit, and program and display method therefor
CN115830712A (en) Gait recognition method, device, equipment and storage medium
CN113127663B (en) Target image searching method, device, equipment and computer readable storage medium
JP2016218578A (en) Image search device, image search system, image search method and image search program
CN113538074A (en) Method, device and equipment for recommending clothes
KR102307095B1 (en) The Automatic Recommendation System and Method of the Fashion Coordination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination