CN112950245A - Data processing method, device, equipment and medium for analyzing clothes trend - Google Patents


Info

Publication number
CN112950245A
CN112950245A (application number CN201911263185.4A)
Authority
CN
China
Prior art keywords
image
offline
target
sub
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911263185.4A
Other languages
Chinese (zh)
Inventor
陈标龙 (Chen Biaolong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN201911263185.4A priority Critical patent/CN112950245A/en
Publication of CN112950245A publication Critical patent/CN112950245A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data

Landscapes

  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a data processing method for analyzing clothing trends. The method includes: acquiring an offline image and identifying target apparel in the offline image; segmenting, from the offline image, a sub-image containing the target apparel based on the identified position of the target apparel in the offline image; and matching the segmented sub-image against a plurality of online images and generating an analysis report on the offline clothing trend based on the matching result. The present disclosure also provides a data processing apparatus for analyzing clothing trends, a computer device, and a computer-readable storage medium.

Description

Data processing method, device, equipment and medium for analyzing clothes trend
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a data processing method, apparatus, device, and medium for analyzing clothing trends.
Background
Current methods for analyzing clothing trends rely mainly on offline questionnaires, offline store surveys, and similar approaches; for example, an investigator visits an offline store and asks the shopping guides about current clothing trends. These methods are inefficient: survey costs are high, and data collection requires a large amount of manual effort from the investigator. Moreover, the trend data obtained in this way comes from the answers of passers-by and store shopping guides, so the survey results are often biased by each respondent's subjective factors (for example, people differ in aesthetics and in how fashion-conscious they are) and by objective factors (for example, differences in geographic location, or the confidentiality requirements on an offline store's sales data).
Disclosure of Invention
In view of the above, the present disclosure provides a data processing method, apparatus, device, and medium for analyzing clothing trends.
One aspect of the present disclosure provides a data processing method for analyzing clothing trends, including: acquiring an offline image and identifying target apparel in the offline image; segmenting, from the offline image, a sub-image containing the target apparel based on the identified position of the target apparel in the offline image; and matching the segmented sub-image against a plurality of online images and generating an analysis report on the offline clothing trend based on the matching result.
According to an embodiment of the present disclosure, the matching the divided sub-image with the plurality of on-line images includes: for any one of the plurality of on-line images, the any one on-line image is mapped to a second feature vector, and the divided sub-image is mapped to a first feature vector. Then, the distance between the first feature vector and the second feature vector is calculated, and the matching degree between the divided sub-image and the image on any line is determined based on the distance.
According to an embodiment of the present disclosure, generating the analysis report on the offline clothing trend based on the matching result includes: determining whether the degrees of matching between the sub-image containing a target apparel and each of the plurality of online images are all less than a first predetermined threshold, and if so, generating an analysis report about that target apparel.
According to an embodiment of the present disclosure, acquiring the offline image includes obtaining an offline image from a first geographic area. The method further includes pushing the generated analysis report to service terminals of e-commerce merchants located in the first geographic area.
According to an embodiment of the present disclosure, generating the analysis report on the offline clothing trend based on the matching result includes: determining whether the degrees of matching between a sub-image containing a target apparel and those of the plurality of online images that belong to a second geographic area are all less than a second predetermined threshold, and if so, generating an analysis report about that target apparel. The method further includes pushing the generated analysis report to service terminals of e-commerce merchants located in the second geographic area.
According to an embodiment of the present disclosure, the method further includes performing attribute identification on the segmented sub-image using a deep learning algorithm and determining one or more attributes of the target apparel contained in the sub-image. Generating the analysis report about the target apparel then includes generating an analysis report on the one or more attributes of the target apparel.
According to an embodiment of the present disclosure, the one or more attributes include at least one of: shape, color, type, style, length, accessories, and materials.
According to an embodiment of the present disclosure, acquiring the offline image includes acquiring offline images with cameras distributed over at least one geographic area.
Another aspect of the present disclosure provides a data processing apparatus for analyzing clothing trends, including an acquisition module, a target identification module, an image segmentation module, a matching module, and a reporting module. The acquisition module acquires an offline image. The target identification module identifies the target apparel in the offline image. The image segmentation module segments a sub-image containing the target apparel from the offline image based on the identified position of the target apparel in the offline image. The matching module matches the segmented sub-image against a plurality of online images. The reporting module generates an analysis report on the offline clothing trend based on the matching result.
Another aspect of the present disclosure provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, target recognition is performed on the offline image, a sub-image containing the target apparel is segmented from the offline image, and the sub-image is matched against a plurality of online images to obtain the difference between offline and online apparel, from which an analysis report on the offline clothing trend is generated. The process is convenient and efficient, the principle is simple, and the development of offline clothing trends relative to online apparel can be obtained accurately without manually collected survey data, providing a reference for the design and sale of online apparel.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
Fig. 1 schematically illustrates an exemplary system architecture to which data processing methods and apparatus for analyzing clothing trends may be applied, according to an embodiment of the present disclosure;
Fig. 2 schematically shows a flowchart of a data processing method for analyzing clothing trends according to an embodiment of the present disclosure;
Fig. 3A schematically shows a flowchart of a data processing method for analyzing clothing trends according to another embodiment of the present disclosure;
Fig. 3B schematically shows an example of a target apparel identification result according to an embodiment of the present disclosure;
Fig. 3C schematically shows an example architecture diagram of a garment retrieval module according to an embodiment of the present disclosure;
Fig. 4 schematically shows a block diagram of a data processing apparatus for analyzing clothing trends according to an embodiment of the present disclosure; and
Fig. 5 schematically shows a block diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C" is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where a convention analogous to "at least one of A, B or C" is used, such a construction is likewise intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
The embodiments of the present disclosure provide a data processing method and apparatus for analyzing clothing trends. The method may include an acquisition process, a target identification process, an image segmentation process, a matching process, and a reporting process. In the acquisition process, an offline image is acquired. The target identification process then identifies the target apparel in the offline image. Next, the image segmentation process segments a sub-image containing the target apparel from the offline image based on the identified position of the target apparel in the offline image. The matching process then matches the segmented sub-image against a plurality of online images, so that the reporting process can generate an analysis report on the offline clothing trend based on the matching result.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which data processing methods and apparatus for analyzing clothing trends may be applied, according to an embodiment of the disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not imply that the embodiments cannot be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal apparatuses 101, 102, 103 communicate with the server 105 through the network 104 to receive or transmit messages and the like. The terminal devices 101, 102, 103 may have installed thereon client applications having various functions, such as music-like applications, shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices including, but not limited to, smart speakers, smart phones, tablet computers, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server supporting the various client applications on the terminal devices 101, 102, 103. The background management server may receive request messages sent by the terminal devices 101, 102, 103, process them (for example, by analysis), and feed back a response result (for example, a web page, information, or data generated from the request message) to the terminal devices 101, 102, 103, which then output the response result to the user.
The data processing method for analyzing the clothing trend according to the embodiment of the disclosure can be implemented in the terminal devices 101, 102 and 103, and accordingly, the data processing device for analyzing the clothing trend according to the embodiment of the disclosure can be arranged in the terminal devices 101, 102 and 103. Alternatively, the data processing method for analyzing the clothing trend according to the embodiment of the present disclosure may also be implemented in the server 105, and accordingly, the data processing apparatus for analyzing the clothing trend according to the embodiment of the present disclosure may be disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired.
Apparel design is an industry with very short variation cycles: people's pursuit of fashion changes very quickly, sometimes even daily. How to closely follow changes in clothing trends is therefore an important issue for apparel design practitioners, e-commerce merchants selling apparel, and the like. If the current direction of a clothing trend can be obtained effectively, a designer can design on-trend apparel, and an e-commerce merchant can quickly decide to select a potentially trending garment for sale.
According to the embodiments of the present disclosure, a data processing method for analyzing clothing trends is provided and described below by way of illustration. Note that the sequence numbers of the operations in the following method merely identify the operations for description and should not be construed as indicating their execution order; unless explicitly stated, the method need not be performed in the exact order shown.
Fig. 2 schematically shows a flowchart of a data processing method for analyzing clothing trends according to an embodiment of the present disclosure.
As shown in fig. 2, the method may include operations S201 to S205 as follows.
In operation S201, an offline (offline) image is acquired.
The offline image stands in contrast to the online image. An online image is one that is widely disseminated and displayed with the internet as its medium, such as the images an e-commerce merchant displays on the internet. An offline image is any other image, captured from an actual offline scene, that has real-time character and has not been widely disseminated or displayed, for example one or more images captured by a roadside camera.
Then, in operation S202, a target apparel in the offline image is identified.
For example, one or more target apparel categories, such as "hat", "shoe", "bag", "coat", "lower garment", or "dress", may be predefined; this is not limited here. Operation S202 may use a deep-learning-based target detection algorithm to identify the target apparel, such as R-CNN (Region Convolutional Neural Network), Fast R-CNN, or Faster R-CNN, without limitation. The target detection algorithm solves, on the one hand, the recognition problem, i.e., whether any target apparel is present in the offline image, and, on the other hand, the localization problem, i.e., determining the position of the identified target apparel in the offline image.
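The recognition-plus-localization output described above can be sketched as follows. The detection-tuple format, class names, and score threshold here are illustrative assumptions, not taken from the patent; the patent names Faster R-CNN, but any detector emitting labeled, scored boxes fits this post-processing step.

```python
# Hypothetical detector output: a list of (label, score, (x1, y1, x2, y2)).
TARGET_APPAREL = {"hat", "shoe", "bag", "coat", "lower garment", "dress"}

def filter_target_apparel(detections, score_threshold=0.5):
    """Keep detections whose label is a predefined target apparel class
    and whose confidence meets the threshold."""
    return [
        (label, score, box)
        for label, score, box in detections
        if label in TARGET_APPAREL and score >= score_threshold
    ]

detections = [
    ("shoe", 0.92, (10, 200, 60, 240)),
    ("person", 0.99, (0, 0, 120, 260)),   # not a target apparel class, dropped
    ("bag", 0.30, (70, 90, 110, 150)),    # below confidence threshold, dropped
]
kept = filter_target_apparel(detections)
```

Each surviving tuple answers both questions at once: the label is the recognition result and the box is the localization result that operation S203 will use.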
Then, in operation S203, a sub-image containing the identified target apparel is segmented from the offline image based on the position of the target apparel in the offline image.
For example, after operation S202 identifies a target apparel "shoe", it simultaneously outputs four coordinate parameters characterizing the position of the box in which the target apparel lies. Operation S203 then uses an image segmentation technique to cut out, based on these four coordinates, the box in which the target apparel lies, and the cut-out region serves as the sub-image containing the target apparel. Other target apparel is handled in the same way and is not repeated here. Each sub-image contains only one target apparel, and one or more sub-images may be segmented from one offline image.
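As a minimal sketch of this cut-out step (assuming, illustratively, that the four coordinate parameters are pixel indices (x1, y1, x2, y2) and the image is an H x W x C array):

```python
import numpy as np

def crop_sub_image(image, box):
    """Cut out the detector box (x1, y1, x2, y2) from an H x W x C image
    array; the cut-out region is the sub-image containing one target apparel."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

# A dummy 260 x 120 RGB image standing in for the offline image.
image = np.zeros((260, 120, 3), dtype=np.uint8)
sub = crop_sub_image(image, (10, 200, 60, 240))
# sub.shape -> (40, 50, 3): 40 rows tall, 50 columns wide
```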
Next, in operation S204, the segmented sub-images are matched against a plurality of online images.
For example, each sub-image obtained by segmentation is matched against the plurality of online images to measure the difference between the sub-image and the online images, and thereby evaluate the difference between the target apparel contained in the sub-image and the online images.
Next, in operation S205, an analysis report on the offline clothing trend is generated based on the matching result.
Because the matching result captures the difference between the target apparel and the online images, the offline clothing trend is analyzed from this difference, and a corresponding analysis report is generated as a reference for online apparel development.
As those skilled in the art will appreciate, the apparel worn by fashionable people offline often serves as a weather vane for clothing trends. Based on this observation, the method shown in fig. 2 performs target recognition on the offline image, segments the sub-image containing the target apparel from the offline image, and matches the sub-image against a plurality of online images to obtain the difference between offline and online apparel, thereby generating an analysis report on the offline clothing trend. The process is convenient and efficient, the principle is simple, and the development of offline clothing trends relative to online apparel can be obtained accurately without manually collected survey data, providing a reference for the design and sale of online apparel.
According to an embodiment of the present disclosure, matching the segmented sub-images against the plurality of online images includes matching each sub-image against each of the plurality of online images. Illustratively, when matching a sub-image a with an online image x, the sub-image a is mapped to a first feature vector a and the online image x is mapped to a second feature vector x. A distance L(a, x) between the first feature vector a and the second feature vector x is then calculated, and the degree of matching between the segmented sub-image a and the online image x is determined based on L(a, x): the greater the distance between sub-image a and online image x, the lower the matching degree between the two; conversely, the smaller the distance, the higher the matching degree.
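The distance-to-matching-degree conversion can be sketched as follows. The Euclidean distance is the one the later garment-retrieval section uses; the 1/(1 + d) conversion is an illustrative choice of my own, since the patent only requires that the matching degree decrease as the distance grows.

```python
import numpy as np

def matching_degree(first_vec, second_vec):
    """Euclidean distance between the two feature vectors, converted to a
    matching degree in (0, 1]: distance 0 gives degree 1, and the degree
    falls monotonically as the distance grows."""
    d = float(np.linalg.norm(np.asarray(first_vec) - np.asarray(second_vec)))
    return 1.0 / (1.0 + d)

a = np.array([0.1, 0.9, 0.3])        # first feature vector (sub-image a)
x_near = np.array([0.1, 0.8, 0.3])   # a similar online image
x_far = np.array([0.9, 0.1, 0.7])    # a dissimilar online image
assert matching_degree(a, x_near) > matching_degree(a, x_far)
```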
On this basis, when the degrees of matching between a sub-image and all of the online images are small, none of the online images resembles the sub-image, i.e., none of them contains the target apparel contained in the sub-image. This indicates that the target apparel has appeared offline but not yet online. In an embodiment of the present disclosure, generating the analysis report on the offline clothing trend based on the matching result includes determining whether the degrees of matching between the sub-image containing a target apparel and each of the plurality of online images are all less than a first predetermined threshold, and if so, generating an analysis report about that target apparel.
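The threshold test just described reduces to a one-line decision rule; the threshold value below is illustrative, not from the patent:

```python
def should_report(match_degrees, first_threshold=0.5):
    """Report the target apparel as an offline-only trend only when its
    sub-image matches *none* of the online images well, i.e. every
    matching degree falls below the first predetermined threshold."""
    return all(m < first_threshold for m in match_degrees)

assert should_report([0.1, 0.3, 0.2])       # no close online match -> report
assert not should_report([0.1, 0.9, 0.2])   # one close online match -> no report
```

The per-region variant with the second predetermined threshold works the same way, applied only to the matching degrees of online images belonging to that region.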
Further, the generated analysis report about the target apparel may serve as a reference for the e-commerce merchant's apparel sales over a future period. Since different geographic regions may have different clothing trends, the analysis reports on offline clothing trends may be generated and pushed per geographic region. Illustratively, acquiring the offline image includes obtaining an offline image from a first geographic area, and the method may further include, after generating the analysis report, pushing it to the service terminals of e-commerce merchants located in the first geographic area.
In another embodiment of the present disclosure, the method may restrict the analysis to online images belonging to a specific geographic area. Illustratively, generating the analysis report on the offline clothing trend based on the matching result may include determining whether the degrees of matching between a sub-image containing a target apparel and those of the plurality of online images belonging to a second geographic area are all less than a second predetermined threshold. If so, the target apparel contained in the sub-image is a type of apparel that has not yet appeared in the online images of the second geographic area; an analysis report about the target apparel is generated and pushed to the service terminals of e-commerce merchants located in the second geographic area.
Further, the data processing method for analyzing clothing trends according to the embodiments of the present disclosure may also include: after the sub-image containing the target apparel is segmented from the offline image, performing attribute identification on the segmented sub-image using a deep learning algorithm and determining one or more attributes of the target apparel contained in the sub-image. On this basis, generating the analysis report about the target apparel includes generating an analysis report on the one or more attributes of the target apparel; illustratively, the one or more attributes include at least one of shape, color, type, style, length, accessories, and material. In addition, to improve matching efficiency when matching the sub-image containing the target apparel against the plurality of online images, the sub-image may be matched only against online images containing the same kind of target apparel. For example, if the target apparel in the sub-image is a shoe, the sub-image is matched against all online images that contain a shoe.
According to an embodiment of the present disclosure, the process of acquiring an offline image of the present disclosure may include: offline images are acquired with cameras distributed over at least one geographic area.
The above embodiments of the present disclosure are described below by way of example with reference to figs. 3A, 3B, and 3C. Note that figs. 3A, 3B, and 3C only illustrate the principles of the embodiments and do not limit the present disclosure.
Fig. 3A schematically shows a flowchart of a data processing method for analyzing clothing trends according to another embodiment of the present disclosure.
As shown in fig. 3A, the method may include operations S301 to S307 as follows.
In operation S301, an offline user image is acquired.
An offline user image is an offline image whose captured content is an actual user. For example, the offline user image here is a street snapshot of a woman, which may be captured by a camera deployed in a public place.
Then, in operation S302, target apparel identification is performed on the offline user image.
For example, a clothing detection engine is constructed. The offline user image is fed into the clothing detection engine to identify the target apparel it contains; the identification result (see fig. 3B) includes the classification result for each target apparel and its position information, for further algorithmic processing. The clothing detection engine can be built with any of several deep-learning-based target detection algorithms, such as Faster R-CNN.
Fig. 3B schematically shows an example of a target apparel identification result according to an embodiment of the disclosure. As shown in fig. 3B, operation S302 recognizes the following objects in the offline user image shown on the left: "woman", "bag", "shoe", "upper body", "lower body", "jacket", "lower garment", "hat", and so on, marked as boxes in the figure and listed in Table 1. In this example, "bag", "shoe", "jacket", "lower garment", and "hat" may serve as target apparel.
TABLE 1
Serial number    Annotation
1                Woman
2                Bag
3                Shoe
4                Shoe
5                Upper body
6                Lower body
7                Jacket
8                Lower garment
9                Hat
Next, in operation S303, a sub-image including the target apparel is segmented from the offline user image.
Following the example of fig. 3B, operation S303 segments the location area of each target apparel in the recognition result, obtaining, for example: a sub-image containing the target apparel "bag", corresponding to box 2; a sub-image containing the target apparel "shoe", corresponding to box 3 or 4; a sub-image containing the target apparel "jacket", corresponding to box 7; a sub-image containing the target apparel "lower garment", corresponding to box 8; and a sub-image containing the target apparel "hat", corresponding to box 9.
Next, in operation S304, attribute recognition is performed on each sub-image.
For example, a clothing attribute identification module is constructed to identify the attributes of the target apparel contained in each sub-image. The module may identify specific attributes of the apparel, such as its collar type, color, type, style, length, accessories, and materials. The recognition algorithm may use a CNN (Convolutional Neural Network) from deep learning or other machine vision algorithms, which is not limited herein.
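One common way to realize such a module is a CNN with one classification head per attribute; the sketch below shows only the decoding step that turns per-head scores into attribute values (the attribute names, vocabularies, and score format here are illustrative assumptions, not the patent's specification):

```python
def decode_attributes(scores_by_attribute, vocab_by_attribute):
    """Turn per-attribute classifier scores into attribute values.

    Each attribute head yields a list of scores over that attribute's
    vocabulary; we take the arg-max per head.
    """
    decoded = {}
    for attr, scores in scores_by_attribute.items():
        vocab = vocab_by_attribute[attr]
        best = max(range(len(scores)), key=scores.__getitem__)
        decoded[attr] = vocab[best]
    return decoded
```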
Next, in operation S305, match detection is performed on each sub-image.
For example, a clothing retrieval module is constructed which maps each sub-image to a high-dimensional vector, such as a 512-dimensional floating-point vector, referred to as the first feature vector. Each online image (such as an apparel image provided by an e-commerce merchant on an internet platform) is mapped to a high-dimensional vector under the same conversion principle, referred to as the second feature vector. The degree of similarity between the target apparel contained in a sub-image and the apparel contained in an online image is then judged by comparing the Euclidean distance between the first feature vector and the second feature vector. For the system architecture of the clothing retrieval module, refer to fig. 3C; it allows the similarity of an offline target apparel to the full set of online apparel to be evaluated.
Fig. 3C schematically illustrates an example architecture diagram of a clothing retrieval module according to an embodiment of the present disclosure. As shown in fig. 3C, a sub-image of the offline image is input to a convolutional neural network, which outputs the first feature vector of the sub-image. The online image is input to the same convolutional neural network to obtain its second feature vector. The first feature vector is then matched with the second feature vector to determine the degree of similarity between them. For example, when the distance between the first feature vector and the second feature vector is smaller than a threshold d (which may be set as required), the target apparel contained in the sub-image may be considered consistent with the apparel contained in the online image; the larger the distance, the larger the difference between the two. Through operation S305, each piece of apparel worn by an offline user can thus be matched against apparel in a massive set of online images, after which the distribution difference between online and offline apparel can be analyzed.
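The matching step can be sketched as follows, assuming the feature vectors have already been produced by the shared convolutional network (the threshold value and function names are illustrative; a production system would use a vectorized or approximate-nearest-neighbor search over the 512-dimensional vectors rather than a Python loop):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def matching_online_images(first_vec, online_vecs, d=1.0):
    """Indices of online images whose second feature vector lies within
    distance d of the offline sub-image's first feature vector."""
    return [i for i, v in enumerate(online_vecs) if euclidean(first_vec, v) < d]
```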
Next, in operation S306, the matching result is analyzed.
For example, an online-offline apparel similarity analysis module is constructed. As described above, the clothing retrieval module judges the degree of similarity between an offline target apparel and an online apparel from the distance between the first feature vector and the second feature vector, while the clothing attribute identification module identifies the attributes of both the offline target apparel and the online apparel. Combining the outputs of the two modules, the similarity analysis module can compare the attribute distribution of online apparel against the attribute distribution of offline target apparel.
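A hedged sketch of the distribution comparison: given the attribute dictionaries produced by the identification module, we tally the normalized frequency of one attribute's values over a set of garments, so online and offline populations can be compared side by side (the dictionary-based garment representation is an assumption of this sketch):

```python
from collections import Counter

def attribute_distribution(garments, attr):
    """Normalized frequency of one attribute's values over a set of garments.

    `garments` is an iterable of attribute dictionaries, e.g. the outputs
    of the clothing attribute identification module.
    """
    counts = Counter(g[attr] for g in garments if attr in g)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}
```

Running this once over the offline sub-images and once over the online catalog yields the two distributions whose differences the analysis module reports.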
When no matching online image can be found for a piece of offline target apparel (that is, the distances between the second feature vectors of all online images and the first feature vector of the sub-image containing that apparel all exceed d), the offline target apparel is considered to be missing online. The main reasons for this online-missing phenomenon are: 1. the target apparel is relatively niche; 2. the target apparel is an emerging fashion item, and online merchants respond to fashion trends more slowly than offline merchants, so the new trend has not yet reached online stores. Which reason applies can be inferred by analyzing how often the missing phenomenon occurs for the target apparel. Through this process, the degree of online scarcity of the offline target apparel's attribute combination can be judged.
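The online-missing test described above is simply the negation of the matching step: an offline garment is flagged when every online feature vector lies at distance d or more. A minimal sketch (threshold and names are illustrative):

```python
import math

def is_missing_online(first_vec, online_vecs, d=1.0):
    """True when no online image's second feature vector lies within
    distance d of the offline garment's first feature vector, i.e. the
    offline garment has no online counterpart."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return all(dist(first_vec, v) >= d for v in online_vecs)
```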
Next, in operation S307, an analysis report regarding the offline clothing trend is generated.
For example, operation S307 may generate analysis reports such as an offline clothing fashion-trend report and an online clothing scarcity report, and may push them to users who need to know the difference between online and offline clothing trends. Through the above operations, the offline fashion trend can be analyzed with ease, and by analyzing data collected by cameras in different geographical areas, the trends of those areas can be compared. In general, apparel popularity spreads gradually; for example, an item popular in region M may become popular in region N several months later. E-commerce merchants and fashion design institutes can thereby grasp the fashion trend of the whole apparel industry, and the data processing method for analyzing clothing trends according to the embodiments of the present disclosure has the advantages of low cost and high accuracy.
Fig. 4 schematically shows a block diagram of a data processing apparatus for analyzing clothing trends according to an embodiment of the present disclosure.
As shown in fig. 4, the data processing apparatus 400 for analyzing clothing trends may include: an acquisition module 410, a target recognition module 420, an image segmentation module 430, a matching module 440, and a reporting module 450.
The acquisition module 410 is used to acquire an offline image.
The target identification module 420 is used to identify the target apparel in the offline image.
The image segmentation module 430 is configured to segment a sub-image containing the identified target garment from the offline image based on the identified position of the target garment in the offline image.
The matching module 440 is configured to match the segmented sub-image with a plurality of on-line images.
The reporting module 450 is used to generate an analysis report about the offline fashion flow based on the matching result.
It should be noted that the implementations, solved technical problems, realized functions, and achieved technical effects of the modules/units/sub-units in the apparatus embodiments are the same as or similar to those of the corresponding steps in the method embodiments, and are not repeated here.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the acquisition module 410, the target recognition module 420, the image segmentation module 430, the matching module 440, and the reporting module 450 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the acquisition module 410, the target recognition module 420, the image segmentation module 430, the matching module 440, and the reporting module 450 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC); may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit; or may be implemented in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the acquisition module 410, the target recognition module 420, the image segmentation module 430, the matching module 440, and the reporting module 450 may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
Fig. 5 schematically shows a block diagram of a computer device adapted to implement the above described method according to an embodiment of the present disclosure. The computer device shown in fig. 5 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 5, a computer device 500 according to an embodiment of the present disclosure includes a processor 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. Processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the apparatus 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the device 500 may also include an input/output (I/O) interface 505, which is also connected to the bus 504. The device 500 may also include one or more of the following components connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom can be installed into the storage section 508 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (11)

1. A data processing method for analyzing a clothing trend, comprising:
acquiring an offline image;
identifying a target apparel in the offline image;
segmenting a sub-image containing the target apparel from the offline image based on a position of the target apparel in the offline image;
matching the sub-image with a plurality of online images; and
generating an analysis report about an offline clothing trend based on a matching result.
2. The method of claim 1, wherein the matching the sub-image with a plurality of online images comprises, for any one of the plurality of online images:
mapping the sub-image into a first feature vector;
mapping said online image into a second feature vector;
calculating a distance between the first feature vector and the second feature vector; and
determining a matching degree between the sub-image and said online image based on the distance.
3. The method of claim 2, wherein the generating an analysis report about an offline clothing trend based on the matching result comprises:
determining whether the matching degrees between the sub-image and each of the plurality of online images are all less than a first predetermined threshold; and
if so, generating an analysis report about the target apparel.
4. The method of claim 3, wherein the obtaining an offline image comprises: obtaining an offline image from a first geographical area;
the method further comprising: pushing the analysis report to service terminals of e-commerce merchants distributed in the first geographical area.
5. The method of claim 2, wherein the generating an analysis report about an offline clothing trend based on the matching result comprises:
determining whether the matching degrees between the sub-image and the online images belonging to a second geographical area among the plurality of online images are all less than a second predetermined threshold; and
if so, generating an analysis report about the target apparel;
the method further comprising: pushing the analysis report to service terminals of e-commerce merchants distributed in the second geographical area.
6. The method of claim 3 or 5, further comprising: performing attribute identification on the sub-image by using a deep learning algorithm, and determining one or more attributes of the target apparel;
wherein the generating an analysis report about the target apparel comprises: generating an analysis report about the one or more attributes of the target apparel.
7. The method of claim 6, wherein the one or more attributes comprise at least one of:
shape, color, type, style, length, accessories, and materials.
8. The method of claim 1, wherein the obtaining an offline image comprises:
acquiring the offline image with cameras distributed in at least one geographical area.
9. A data processing apparatus for analyzing a clothing trend, comprising:
an acquisition module for acquiring an offline image;
a target identification module for identifying a target apparel in the offline image;
an image segmentation module for segmenting a sub-image containing the target apparel from the offline image based on a position of the target apparel in the offline image;
a matching module for matching the sub-image with a plurality of online images; and
a reporting module for generating an analysis report about an offline clothing trend based on a matching result.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing:
the method of any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform:
the method of any one of claims 1 to 8.
CN201911263185.4A 2019-12-10 2019-12-10 Data processing method, device, equipment and medium for analyzing clothes trend Pending CN112950245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911263185.4A CN112950245A (en) 2019-12-10 2019-12-10 Data processing method, device, equipment and medium for analyzing clothes trend


Publications (1)

Publication Number Publication Date
CN112950245A true CN112950245A (en) 2021-06-11

Family

ID=76225984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911263185.4A Pending CN112950245A (en) 2019-12-10 2019-12-10 Data processing method, device, equipment and medium for analyzing clothes trend

Country Status (1)

Country Link
CN (1) CN112950245A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376372A (en) * 2022-08-26 2022-11-22 广东粤鹏科技有限公司 Multimedia teaching method and teaching system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331417A (en) * 2014-10-09 2015-02-04 深圳码隆科技有限公司 Matching method for personnel garments of user
TWI524286B (en) * 2014-10-07 2016-03-01 Chunghwa Telecom Co Ltd Popular with the recommended system
CN108629661A (en) * 2018-04-28 2018-10-09 东莞市华睿电子科技有限公司 A kind of Products Show method applied to unmanned boutique
CN108734557A (en) * 2018-05-18 2018-11-02 北京京东尚科信息技术有限公司 Methods, devices and systems for generating dress ornament recommendation information
US20180352117A1 (en) * 2017-06-06 2018-12-06 Seiko Epson Corporation Profile adjustment method and profile adjustment system
CN109063758A (en) * 2018-07-19 2018-12-21 深圳码隆科技有限公司 Clothes recognition methods and device based on picture
CN109597907A (en) * 2017-12-07 2019-04-09 深圳市商汤科技有限公司 Dress ornament management method and device, electronic equipment, storage medium
CN110413825A (en) * 2019-06-21 2019-11-05 东华大学 Clap recommender system in street towards fashion electric business


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
全辰永 (QUAN Chenyong): "Research on Chinese and Korean casual menswear styles based on street-snap analysis" *


Similar Documents

Publication Publication Date Title
US20210256320A1 (en) Machine learning artificialintelligence system for identifying vehicles
WO2019233258A1 (en) Method, apparatus and system for sending information, and computer-readable storage medium
Rahman et al. Smartphone-based hierarchical crowdsourcing for weed identification
WO2020253357A1 (en) Data product recommendation method and apparatus, computer device and storage medium
TW201443807A (en) Visual clothing retrieval
TW201905733A (en) Multi-source data fusion method and device
US11348166B2 (en) Systems and methods for analysis of wearable items of a clothing subscription platform
CN110210457A (en) Method for detecting human face, device, equipment and computer readable storage medium
CN111738199B (en) Image information verification method, device, computing device and medium
CN111507285A (en) Face attribute recognition method and device, computer equipment and storage medium
CN110619807A (en) Method and device for generating global thermodynamic diagram
KR20170016578A (en) Clothes Fitting System And Operation Method of Threof
TW201903705A (en) System and method for providing recommendations based on seed-supervised learning
CN115115825A (en) Method and device for detecting object in image, computer equipment and storage medium
CN112783468A (en) Target object sorting method and device
CN112950245A (en) Data processing method, device, equipment and medium for analyzing clothes trend
US11107098B2 (en) System and method for content recognition and data categorization
CN113779276A (en) Method and device for detecting comments
KR20220041319A (en) Method and system for product search based on deep-learning
CN116910102A (en) Enterprise query method and device based on user feedback and electronic equipment
CA3056868C (en) Media content tracking
CN115758271A (en) Data processing method, data processing device, computer equipment and storage medium
CN114360057A (en) Data processing method and related device
US10904346B2 (en) Weighted digital image object tagging
Monino et al. The algorithm of the snail: an example to grasp the window of opportunity to boost big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination