WO2018149237A1 - Item data processing method, apparatus, and computer-readable storage medium - Google Patents

Item data processing method, apparatus, and computer-readable storage medium

Info

Publication number
WO2018149237A1
WO2018149237A1 (PCT/CN2017/119516)
Authority
WO
WIPO (PCT)
Prior art keywords
item
collocation
feature
collocated
items
Prior art date
Application number
PCT/CN2017/119516
Other languages
English (en)
French (fr)
Inventor
葛彦昊
刘巍
陈宇
翁志
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Publication of WO2018149237A1 publication Critical patent/WO2018149237A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Definitions

  • the present disclosure relates to the field of image processing technologies, and in particular, to an item data processing method, an item data processing apparatus, and a computer readable storage medium.
  • Product collocation in e-commerce aims to match the product currently viewed or purchased by a user according to certain rules or feature patterns, thereby recommending other products that form a collocation relationship with that product.
  • A well-targeted and effective collocation system can not only raise the click and purchase rates of the collocated products, but also bring additional conversion to the product currently being browsed. Therefore, in the course of product collocation, how to construct an effective product data processing method that, based on the features of existing products, recommends other well-matched products to users is a core technical issue in this field.
  • The related art mainly performs product collocation based on the user's browsing history, management rules, or ratings from similar users.
  • The inventors of the present disclosure have found the following problem in the above related art: product collocation relies on structured data such as the user's historical behavior features or the product's own tag features. On the one hand, such data lacks an intuitive connection, so the matching quality of the collocated products is low; on the other hand, products that have never been viewed or purchased are absent from the collocation database, so the coverage of collocated products is low.
  • In view of at least one of these problems, the present disclosure proposes a technical solution for item data processing that is applicable not only to product collocation, but also to collocating various kinds of items with high matching accuracy and high coverage.
  • According to some embodiments, an item data processing method is provided, including: extracting a feature vector of an item to be collocated in a picture to be collocated, and determining the category of the item to be collocated; determining the category of a target item in response to a user's collocation requirement; taking a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item; and, according to the collocation relationship corresponding to the analog item, selecting from the collocation feature database a reference item of the same category as the target item as the collocation result. The collocation feature database includes a feature vector corresponding to each reference item and the collocation relationships between the reference items.
  • Optionally, the method further includes: extracting the feature vectors of the reference items in a plurality of reference pictures, determining the category of each reference item, and establishing the collocation feature database according to the collocation relationships between reference items of the various categories.
  • Optionally, all reference items in the collocation feature database of the same category as the item to be collocated are taken as candidate items; the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated, and the candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items.
  • Optionally, the hash Hamming distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated, and a first number of candidate items with the smallest hash Hamming distance to the item to be collocated are selected to form a candidate set; the Euclidean distance between the feature vector of each candidate item in the candidate set and the feature vector of the item to be collocated is then calculated, and a second number of candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items, the first number being greater than the second number.
  • Optionally, a Faster-RCNN (Faster Region-based Convolutional Neural Network) is used to extract pixel features of the reference pictures and generate a number of coordinate sets, each corresponding to an image region in which a reference item may exist; the image regions are detected to determine the image regions in which a reference item does exist as target regions, and feature extraction is performed on the reference item in each target region to generate the feature vector corresponding to that reference item; the category of the reference item is determined according to the feature vector, the collocation relationships between reference items of the various categories are obtained, and the collocation feature database is thereby established.
  • Optionally, the Faster-RCNN is used to extract pixel features of the picture to be collocated and generate a number of coordinate sets, each corresponding to an image region in which the item to be collocated may exist; the image regions are detected to determine the image region in which the item to be collocated does exist as the target region, feature extraction is performed on the item to be collocated in the target region to generate its feature vector, and the category of the item to be collocated is determined according to the feature vector.
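The bullets above describe the detect-then-embed step only abstractly. The following is a minimal Python sketch of that step, assuming a pretrained torchvision Faster R-CNN as a stand-in for the disclosure's own trained detector and a ResNet-50 backbone (2048-dimensional output) as a stand-in for the feature model; the function name, threshold, and input handling are illustrative assumptions, not part of the disclosure.

```python
# Sketch: detect the most confident item region, then embed the crop as a feature vector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained stand-ins (torchvision >= 0.13); the patent trains its own Faster-RCNN on item categories.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
backbone = torchvision.models.resnet50(weights="DEFAULT").eval()
backbone.fc = torch.nn.Identity()          # expose the pooled 2048-d feature vector

def extract_item_feature(pil_image, score_threshold=0.7):
    """Return (category_id, feature_vector) for the most confident detected item, or None."""
    img = to_tensor(pil_image)              # 3xHxW float tensor in [0, 1]
    with torch.no_grad():
        det = detector([img])[0]            # boxes, labels, scores for one image
    keep = det["scores"] >= score_threshold
    if not keep.any():
        return None                         # no region confidently contains an item
    best = det["scores"][keep].argmax()
    x1, y1, x2, y2 = det["boxes"][keep][best].round().int().tolist()
    crop = img[:, y1:y2, x1:x2].unsqueeze(0)
    crop = torch.nn.functional.interpolate(crop, size=(224, 224))  # normalization omitted for brevity
    with torch.no_grad():
        feature = backbone(crop)[0]         # feature vector representing the target region
    return det["labels"][keep][best].item(), feature
```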
  • According to other embodiments, an item data processing apparatus is provided, including: a to-be-collocated item determining unit configured to extract a feature vector of an item to be collocated in a picture to be collocated and determine the category of the item to be collocated; a target item determining unit configured to determine the category of a target item in response to a user's collocation requirement; an analog item determining unit configured to take a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item, the collocation feature database including a feature vector corresponding to each reference item and the collocation relationships between the reference items; and an item collocation unit configured to select, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item from the collocation feature database as the collocation result.
  • Optionally, the apparatus further includes a collocation feature database establishing unit configured to extract the feature vectors of the reference items in a plurality of reference pictures, determine the category of each reference item, and establish the collocation feature database according to the collocation relationships between reference items of the various categories.
  • Optionally, the analog item determining unit includes: a candidate item determining subunit configured to take all reference items in the collocation feature database of the same category as the item to be collocated as candidate items; and a feature distance determining subunit configured to calculate the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated, and to select several candidate items with the smallest Euclidean distance to the item to be collocated as the analog items.
  • Optionally, the collocation feature database establishing unit includes: a reference item region determining subunit configured to extract pixel features of the reference pictures, generate a number of coordinate sets each corresponding to an image region in which a reference item may exist, and detect the image regions to determine the image regions in which a reference item does exist as target regions; and a reference item category determining subunit configured to perform feature extraction on the reference item in each target region, generate the corresponding feature vector, determine the category of the reference item according to the feature vector, and obtain the collocation relationships between reference items of the various categories, thereby establishing the collocation feature database. The to-be-collocated item determining unit includes: a to-be-collocated item region determining subunit configured to extract pixel features of the picture to be collocated, generate a number of coordinate sets each corresponding to an image region in which the item to be collocated may exist, and detect the image regions to determine the image region in which the item to be collocated does exist as the target region; and a to-be-collocated item category determining subunit configured to perform feature extraction on the item to be collocated in the target region, generate the corresponding feature vector, and determine the category of the item to be collocated according to the feature vector.
  • According to still other embodiments, an item data processing apparatus is provided, including a memory and a processor coupled to the memory, the processor being configured to execute the item data processing method described above based on instructions stored in the memory.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the item data processing method of any of the above embodiments.
  • In the above embodiments, the individual items and their feature vectors in pictures are identified and extracted by the Faster-RCNN, an item collocation feature database is established, and the degree of matching between items is measured by the distance between feature vectors, thereby achieving item collocation with high matching accuracy and high coverage.
  • FIG. 1 illustrates an exemplary schematic diagram of an item data processing method in accordance with some embodiments of the present disclosure
  • FIG. 2 illustrates an exemplary flow chart of an item data processing method in accordance with further embodiments of the present disclosure
  • FIG. 3 illustrates an exemplary flowchart of an item data processing method in accordance with further embodiments of the present disclosure
  • FIG. 4 illustrates an exemplary flow chart of an item data processing method in accordance with still further embodiments of the present disclosure
  • FIG. 5 illustrates an exemplary block diagram of an item data processing apparatus in accordance with some embodiments of the present disclosure
  • FIG. 6 illustrates an exemplary block diagram of an item data processing apparatus in accordance with further embodiments of the present disclosure
  • FIG. 7 illustrates an exemplary block diagram of an item data processing apparatus in accordance with further embodiments of the present disclosure
  • FIG. 8 illustrates an exemplary block diagram of an item data processing apparatus in accordance with further embodiments of the present disclosure.
  • FIG. 1 illustrates an exemplary schematic diagram of an item data processing method in accordance with some embodiments of the present disclosure.
  • As shown in FIG. 1, the collocation feature database 11 includes the reference items in reference pictures 1 to N and the collocation relationships between these reference items, for example tops 1 to N, bottoms 1 to N, and shoes 1 to N, with top 2 collocated with bottom 2 and shoes 2, and so on.
  • The item to be collocated in the picture to be collocated 12 is extracted — top X — and the category of the target item is determined to be bottoms according to the user's requirement.
  • The features of top X are compared with those of tops 1 to N in the collocation feature database 11, and top 2, whose features are closest to those of top X, is selected. According to the collocation relationship between top 2 and bottom 2, the recommended item 13 is determined to be bottom 2.
  • FIG. 2 illustrates an exemplary flow chart of an item data processing method in accordance with further embodiments of the present disclosure.
  • the method includes: step 201, determining a category of the item to be collocated; step 202, determining a category of the target item; step 203, determining an analog item; and step 204, determining a collocation result.
  • In step 201, the feature vector of the item to be collocated in a picture to be collocated stored in the database is extracted, and the category of the item to be collocated is determined.
  • For example, the feature vector may be a vector determined by a deep learning model that represents features of the item such as texture, material, illumination, or shape.
  • The category of the item may be, for example, tops, pants, shoes, or accessories.
  • In step 202, the category of the target item is determined in response to the user's collocation requirement.
  • In step 203, a reference item in the collocation feature database that belongs to the same category as the item to be collocated and matches its features is taken as an analog item.
  • In some embodiments, the collocation feature database includes the feature vectors corresponding to the reference items and the collocation relationships between them.
  • For example, the collocation feature database contains collocation schemes of reference items such as various tops, trousers, and shoes, together with feature vectors that represent features such as color, material, and style of these reference items.
  • The analog item and the item to be collocated may both be tops, for example, with similar materials, textures, or styles.
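As a concrete illustration of what such a database could hold, the sketch below pairs each reference item's category and feature vector with collocation links derived from items that appear together in the same reference picture. The class and field names are illustrative assumptions, not the patent's data model.

```python
# Sketch: one way to organise the collocation feature database.
from dataclasses import dataclass, field
from collections import defaultdict
import numpy as np

@dataclass
class ReferenceItem:
    item_id: str
    category: str                 # e.g. "top", "bottom", "shoes"
    feature: np.ndarray           # feature vector produced by the deep model
    picture_id: str               # reference picture the item was detected in

@dataclass
class CollocationDB:
    items: dict = field(default_factory=dict)                               # item_id -> ReferenceItem
    collocations: dict = field(default_factory=lambda: defaultdict(set))    # item_id -> collocated ids

    def add_picture(self, picture_items):
        """Items detected in one reference picture collocate with each other."""
        for item in picture_items:
            self.items[item.item_id] = item
        for a in picture_items:
            for b in picture_items:
                if a.item_id != b.item_id:
                    self.collocations[a.item_id].add(b.item_id)

    def same_category(self, category):
        """Candidate reference items of the same category as the item to be collocated."""
        return [it for it in self.items.values() if it.category == category]

    def collocated(self, item_id, target_category):
        """Reference items of the target category that collocate with item_id."""
        return [self.items[i] for i in self.collocations[item_id]
                if self.items[i].category == target_category]
```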
  • In step 204, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item is selected from the collocation feature database as the collocation result.
  • In the above embodiments, on the one hand, the present disclosure compares the feature vector of the item to be collocated with the feature vectors of items in the collocation feature database to find the closest reference item, and determines the recommended item according to the collocation relationship, thereby improving the matching quality of the collocation.
  • On the other hand, the recommended items of the present disclosure are not limited to items the user has viewed, but are mined from a large number of pictures in the database, thereby improving the coverage of the recommended items.
  • FIG. 3 illustrates an exemplary flow chart of an item data processing method in accordance with further embodiments of the present disclosure.
  • the method includes:
  • In step 301, the feature vector of the item to be collocated in the picture to be collocated is extracted, and the category of the item to be collocated is determined;
  • in step 302, the category of the target item is determined in response to the user's collocation requirement;
  • in step 303, all reference items in the collocation feature database of the same category as the item to be collocated are taken as candidate items;
  • in step 304, the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated;
  • in step 305, several candidate items with the smallest Euclidean distance to the item to be collocated are selected as analog items. For example, the 10 candidate items with the shortest Euclidean distances may be selected as analog items.
  • In some embodiments, the hash Hamming distance between the feature vector of each candidate item and the feature vector of the item to be collocated is first calculated, and N (for example, 50, 100, 120, 150, or 200) candidate items with the smallest hash Hamming distance to the item to be collocated are selected. The Euclidean distance between the feature vectors of these N candidate items and the feature vector of the item to be collocated is then calculated, and M (M<N, for example, 10, 20, 30, or 40) candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items.
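A minimal sketch of this coarse-filter-then-re-rank selection is shown below, assuming the binary hash codes for the feature vectors have already been computed (for example, with the sign hashing sketched further down); the function name and the default N and M follow the example values above but are otherwise illustrative.

```python
# Sketch: Hamming-distance coarse filter followed by exact Euclidean re-ranking.
import numpy as np

def select_analog_items(query_vec, query_code, cand_vecs, cand_codes, n=100, m=10):
    """Return indices of the m analog items among the same-category candidates.

    query_vec:  (d,) float feature vector of the item to be collocated
    query_code: (bits,) 0/1 hash code of that vector
    cand_vecs:  (num_candidates, d) float feature vectors of candidate reference items
    cand_codes: (num_candidates, bits) 0/1 hash codes of those vectors
    """
    # Stage 1: cheap Hamming distance between binary codes keeps only the N nearest candidates.
    hamming = (cand_codes != query_code).sum(axis=1)
    shortlist = np.argsort(hamming)[:n]
    # Stage 2: exact Euclidean distance is computed only on the shortlisted candidates.
    euclid = np.linalg.norm(cand_vecs[shortlist] - query_vec, axis=1)
    return shortlist[np.argsort(euclid)[:m]]          # M nearest by Euclidean distance
```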
  • In step 306, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item is selected from the collocation feature database as the collocation result.
  • In the above embodiments, the present disclosure determines the analog items by calculating the Euclidean distance between the item to be collocated in the image and the reference items in a large number of database pictures, and determines the recommended item according to the collocation relationship of the analog item, thereby improving the coverage of item collocation.
  • Using the hash Hamming distance plus Euclidean distance re-ranking technique to coarsely filter and then refine the candidate reference items greatly reduces the number of distance calculations, thereby improving the timeliness of item collocation.
  • FIG. 4 illustrates an exemplary flow chart of an item data processing method in accordance with further embodiments of the present disclosure.
  • As shown in FIG. 4, the method includes: step 401, determining a target region; step 402, determining the category of the item to be collocated; step 403, determining the category of the target item; step 404, determining candidate items; step 405, calculating Euclidean distances; step 406, determining analog items; and step 407, determining the collocation result.
  • In step 401, the target region in the picture to be collocated in which the item to be collocated is located is determined using the Faster-RCNN.
  • In some embodiments, the pixel features of the picture to be collocated are first extracted by the Faster-RCNN deep learning model, and a number of coordinate sets in which an item to be collocated may appear are generated; each coordinate set defines a specific region in the picture to be collocated. These regions are then detected and classified in turn: if the detection result indicates that an item to be collocated exists in a region, the confidence of the item category corresponding to that region is raised; otherwise, it is lowered. Finally, the item to be collocated is located from the regions with high confidence.
  • In step 402, the feature vector of the item to be collocated in the target region is extracted, and the category of the item to be collocated is determined.
  • In some embodiments, the feature vector of the item to be collocated is extracted by a pre-trained deep learning model; the feature vector can represent the texture, material, illumination, shape, and similar characteristics of the picture of the item to be collocated.
  • In step 403, the category of the target item is determined in response to the user's collocation requirement.
  • In step 404, all reference items in the collocation feature database of the same category as the item to be collocated are taken as candidate items.
  • In some embodiments, feature extraction is performed on a massive set of reference pictures using the above method, and the collocation feature database is established from the obtained feature vectors and collocation relationships of the reference items.
  • In step 405, the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated.
  • In step 406, several candidate items with the smallest Euclidean distance to the item to be collocated are selected as analog items.
  • In step 407, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item is selected from the collocation feature database as the collocation result.
  • In the above embodiments, the individual items present in an item picture are identified and extracted using the Faster-RCNN, and the corresponding feature vectors are obtained to represent the item features.
  • The method automatically generates feature vectors via the deep learning model to represent item features, without manually specifying them, and can mine item collocations from a massive set of reference pictures, thereby improving both the coverage and the matching quality of item collocation.
  • FIG. 5 illustrates an exemplary block diagram of an item data processing apparatus in accordance with some embodiments of the present disclosure.
  • As shown in FIG. 5, the apparatus includes a to-be-collocated item determining unit 51, a target item determining unit 52, an analog item determining unit 53, and an item collocation unit 54.
  • The to-be-collocated item determining unit 51 extracts the feature vector of the item to be collocated in the picture to be collocated and determines its category.
  • The target item determining unit 52 determines the category of the target item in response to the user's collocation requirement. For example, if the picture to be collocated is a photo of a top and the user wants to collocate bottoms with the top in the photo, the to-be-collocated item determining unit 51 extracts the item features in the photo and determines that the category of the item to be collocated is tops, and the target item determining unit 52 determines that the category of the target item is bottoms.
  • The analog item determining unit 53 takes a reference item in the collocation feature database that belongs to the same category as the item to be collocated and matches its features as the analog item.
  • The item collocation unit 54 selects, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item as the collocation result. For example, the analog item determining unit 53 compares the features of the top in the above photo with the features of all tops in the collocation feature database, selects the top whose features are closest, and recalls a reference picture (which may be a model display photo) that contains both that top and a bottom item.
  • In the above embodiments, on the one hand, after the to-be-collocated item determining unit extracts the feature vector of the item to be collocated, it is compared with the feature vectors of items in the collocation feature database to find the closest reference item, and the item collocation unit determines the recommended item according to the collocation relationship, thereby improving the matching quality of the collocation.
  • On the other hand, the collocation items recommended by the present disclosure are not limited to items the user has viewed, but are mined from a large number of pictures in the database, thereby improving the coverage of the recommended items.
  • FIG. 6 illustrates an exemplary block diagram of an item data processing apparatus in accordance with further embodiments of the present disclosure.
  • As shown in FIG. 6, the apparatus includes a collocation feature database establishing unit 60, a to-be-collocated item determining unit 51, a target item determining unit 52, an analog item determining unit 63, and an item collocation unit 54.
  • the analog item determining unit 63 includes a candidate item determining sub-unit 631 and a feature distance determining sub-unit 632.
  • For the functions of the to-be-collocated item determining unit 51, the target item determining unit 52, and the item collocation unit 54, reference may be made to the corresponding description of the above embodiments; they are not repeated here for brevity.
  • The collocation feature database establishing unit 60 extracts the feature vectors of the reference items in a plurality of reference pictures, determines the category of each reference item, and establishes the collocation feature database according to the collocation relationships between reference items of the various categories. For example, the Faster-RCNN image detection model is used to detect model display pictures containing items of each category, obtain the positions of the multiple item entities contained in each picture, and determine the categories to which these items belong; collocation relationships are then established for the multiple items contained in each picture, thereby building the collocation feature database.
  • The candidate item determining subunit 631 takes all reference items in the collocation feature database of the same category as the item to be collocated as candidate items.
  • The feature distance determining subunit 632 calculates the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated, and selects several candidate items with the smallest Euclidean distance to the item to be collocated as the analog items.
  • In some embodiments, the feature distance determining subunit 632 filters the candidate items using a hash Hamming distance plus Euclidean distance re-ranking technique to determine the analog items. For example, the hash Hamming distance between the feature vector of the item to be collocated and those of all reference items of the same category is first calculated, and N (for example, 50, 100, 120, 150, or 200) reference items with the smallest hash Hamming distance are selected; the Euclidean distance between the feature vector of the item to be collocated and those of the N reference items is then calculated, and M (M<N, for example, 10, 20, 30, or 40) reference items with the smallest Euclidean distance are selected as the analog items.
  • In the above embodiments, the feature distance determining subunit reduces high-dimensional feature vectors to low-dimensional codes via the hash Hamming distance plus Euclidean distance re-ranking technique, which greatly reduces the cost of computing distances between feature vectors and thereby improves the timeliness of item collocation.
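The disclosure does not specify the hashing scheme; one common choice consistent with the behaviour described above is random-hyperplane (sign) hashing, sketched below, in which the high-dimensional feature vector is projected onto K random directions and only the signs are kept, so that the Hamming distance between codes approximately tracks the similarity between the original vectors. The class name and the 64-bit code length are illustrative assumptions.

```python
# Sketch: random-hyperplane hashing to produce short binary codes for the coarse filter.
import numpy as np

class SignHasher:
    def __init__(self, dim, bits=64, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((bits, dim))   # K random hyperplanes

    def code(self, vec):
        """Binary code (0/1 array of length `bits`) for one feature vector."""
        return (self.planes @ vec > 0).astype(np.uint8)

# Example: 1024-d feature vectors compressed to 64-bit codes before the coarse filter.
hasher = SignHasher(dim=1024, bits=64)
# query_code = hasher.code(query_vec)
# cand_codes = np.stack([hasher.code(v) for v in cand_vecs])
```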
  • FIG. 7 illustrates an exemplary block diagram of an item data processing apparatus in accordance with further embodiments of the present disclosure.
  • As shown in FIG. 7, the apparatus includes a collocation feature database establishing unit 70, a to-be-collocated item determining unit 71, a target item determining unit 52, an analog item determining unit 63, and an item collocation unit 54.
  • the analog item determining unit 63 includes a candidate item determining sub-unit 631 and a feature distance determining sub-unit 632.
  • the collocation feature database establishing unit 70 includes a reference item region determining subunit 701 and a reference item category determining subunit 702.
  • The to-be-collocated item determining unit 71 includes a to-be-collocated item region determining subunit 711 and a to-be-collocated item category determining subunit 712.
  • the functions of the target item determining unit 52, the analogy item determining unit 63, and the item matching unit 54 can be referred to the corresponding description of the above embodiments, and will not be described here for the sake of brevity.
  • the reference item region determining sub-unit 701 extracts pixel features of the reference picture, and generates a number of coordinate sets. Each set of coordinates corresponds to an image area where a reference item may be present. The image area is detected to determine the image area where the reference item does exist as the target area.
  • In some embodiments, the reference item region determining subunit 701 extracts the pixel features of the reference picture using the Faster-RCNN deep learning model and generates a number of coordinate sets in which a target object may appear, where each coordinate set locates a specific region. The subunit then detects and classifies the located regions in turn, raising the confidence of the item category (such as tops, bottoms, or accessories) corresponding to regions in which a target object does exist and lowering it otherwise. Finally, each region with high confidence is examined to locate the target object of the corresponding item category.
  • The reference item category determining subunit 702 performs feature extraction on the reference item in the target region, generates the corresponding feature vector, determines the category of the reference item according to the feature vector, and obtains the collocation relationships between reference items of the various categories, thereby establishing the collocation feature database.
  • The to-be-collocated item region determining subunit 711 extracts pixel features of the picture to be collocated and generates a number of coordinate sets; each coordinate set corresponds to an image region in which the item to be collocated may exist. The image regions are detected to determine the image region in which the item to be collocated does exist as the target region.
  • The to-be-collocated item category determining subunit 712 performs feature extraction on the item to be collocated in the target region and generates the corresponding feature vector. The category of the item to be collocated is determined according to the feature vector.
  • In some embodiments, the to-be-collocated item category determining subunit 712 performs feature extraction on the target region using a pre-trained Faster-RCNN deep learning feature model, and the item to be collocated in the target region corresponds to a 1024-dimensional feature vector.
  • the feature vector characterizes the image features such as texture, material, illumination and shape of the image of the article.
  • the features of these deep learning feature models do not need to be specified by humans, but are automatically learned by the model to determine which features best characterize the image.
  • In the above embodiments, the individual items present in an item picture are identified and extracted using the Faster-RCNN, and the corresponding feature vectors are obtained to represent the item features.
  • The method automatically generates feature vectors via the deep learning model to represent item features, without manually specifying them, and can mine item collocations from a massive set of reference pictures, thereby improving both the coverage and the matching quality of item collocation.
  • FIG. 8 illustrates an exemplary block diagram of an item data processing apparatus in accordance with further embodiments of the present disclosure.
  • As shown in FIG. 8, the apparatus 80 of this embodiment includes a memory 801 and a processor 802 coupled to the memory 801, the processor 802 being configured to execute the item data processing method in any of the embodiments of the present disclosure based on instructions stored in the memory 801.
  • The memory 801 may include, for example, system memory, fixed non-volatile storage media, and the like.
  • the system memory stores, for example, an operating system, an application, a boot loader, a database, and other programs.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the item data processing method of any of the above embodiments.
  • the computer readable storage medium is a non-transitory computer readable storage medium.
  • the methods and systems of the present disclosure may be implemented in a number of ways.
  • the methods and systems of the present disclosure may be implemented in software, hardware, firmware, or any combination of software, hardware, or firmware.
  • the above-described sequence of steps for the method is for illustrative purposes only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated.
  • the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine readable instructions for implementing a method in accordance with the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Finance (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Accounting & Taxation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • General Business, Economics & Management (AREA)
  • Biophysics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An item data processing method and apparatus, relating to the field of image processing technology. The method includes: extracting a feature vector of an item to be collocated in a picture to be collocated, and determining the category of the item to be collocated (201); determining the category of a target item in response to a user's collocation requirement (202); taking a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item (203); and, according to the collocation relationship corresponding to the analog item, selecting from the collocation feature database a reference item of the same category as the target item as the collocation result (204); the collocation feature database includes the feature vectors corresponding to the reference items and the collocation relationships. The method and apparatus achieve item collocation with high matching accuracy and high coverage.

Description

Item data processing method, apparatus, and computer-readable storage medium
Cross-reference to related application
This application is based on and claims priority to CN application No. 201710089504.9 filed on February 20, 2017, the disclosure of which is incorporated herein by reference in its entirety.
Technical field
The present disclosure relates to the field of image processing technology, and in particular to an item data processing method, an item data processing apparatus, and a computer-readable storage medium.
Background
Product collocation in e-commerce aims to match the product currently viewed or purchased by a user according to certain rules or feature patterns, thereby recommending to the user other products that form a collocation relationship with this product. A well-targeted and effective collocation system can not only raise the click and purchase rates of the collocated products, but also bring additional conversion to the product currently being browsed. Therefore, in the course of product collocation, how to construct an effective product data processing method that, based on the features of existing products, recommends other well-matched products to the user is a core technical issue currently studied in this field.
The related art mainly performs product collocation based on the user's browsing history, management rules, or ratings from similar users.
Summary of the invention
The inventors of the present disclosure have found the following problem in the above related art: product collocation relies on structured data such as the user's historical behavior features or the product's own tag features. On the one hand, such data lacks an intuitive connection, so the matching quality of the collocated products is low; on the other hand, products that have never been viewed or purchased are absent from the collocation database, so the coverage of collocated products is low. In view of at least one of these problems, the present disclosure proposes a technical solution for item data processing that is applicable not only to product collocation, but also to collocating various kinds of items with high matching accuracy and high coverage.
According to some embodiments of the present disclosure, an item data processing method is provided, including: extracting a feature vector of an item to be collocated in a picture to be collocated, and determining the category of the item to be collocated; determining the category of a target item in response to a user's collocation requirement; taking a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item; and, according to the collocation relationship corresponding to the analog item, selecting from the collocation feature database a reference item of the same category as the target item as the collocation result; the collocation feature database includes a feature vector corresponding to each reference item and the collocation relationships between the reference items.
Optionally, the method further includes: extracting the feature vectors of the reference items in a plurality of reference pictures, determining the category of each reference item, and establishing the collocation feature database according to the collocation relationships between reference items of the various categories.
Optionally, all reference items in the collocation feature database of the same category as the item to be collocated are taken as candidate items; the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated, and several candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items.
Optionally, the hash Hamming distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated, and a first number of candidate items with the smallest hash Hamming distance to the item to be collocated are selected to form a candidate set; the Euclidean distance between the feature vector of each candidate item in the candidate set and the feature vector of the item to be collocated is calculated, and a second number of candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items, the first number being greater than the second number.
Optionally, a Faster-RCNN (Faster Region Convolutional Neural Network) is used to extract pixel features of the reference pictures and generate a number of coordinate sets, each corresponding to an image region in which a reference item may exist; the image regions are detected to determine the image regions in which a reference item does exist as target regions, and feature extraction is performed on the reference item in each target region to generate the feature vector corresponding to the reference item; the category of the reference item is determined according to the feature vector, and the collocation relationships between reference items of the various categories are obtained, thereby establishing the collocation feature database.
Optionally, the Faster-RCNN is used to extract pixel features of the picture to be collocated and generate a number of coordinate sets, each corresponding to an image region in which the item to be collocated may exist; the image regions are detected to determine the image region in which the item to be collocated does exist as the target region, feature extraction is performed on the item to be collocated in the target region to generate the feature vector corresponding to the item to be collocated, and the category of the item to be collocated is determined according to the feature vector.
According to other embodiments of the present disclosure, an item data processing apparatus is provided, including: a to-be-collocated item determining unit configured to extract a feature vector of an item to be collocated in a picture to be collocated and determine the category of the item to be collocated; a target item determining unit configured to determine the category of a target item in response to a user's collocation requirement; an analog item determining unit configured to take a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item, the collocation feature database including a feature vector corresponding to each reference item and the collocation relationships between the reference items; and an item collocation unit configured to select, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item from the collocation feature database as the collocation result.
Optionally, the apparatus further includes: a collocation feature database establishing unit configured to extract the feature vectors of the reference items in a plurality of reference pictures, determine the category of each reference item, and establish the collocation feature database according to the collocation relationships between reference items of the various categories.
Optionally, the analog item determining unit includes: a candidate item determining subunit configured to take all reference items in the collocation feature database of the same category as the item to be collocated as candidate items; and a feature distance determining subunit configured to calculate the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated, and to select several candidate items with the smallest Euclidean distance to the item to be collocated as the analog items.
Optionally, the collocation feature database establishing unit includes: a reference item region determining subunit configured to extract pixel features of the reference pictures, generate a number of coordinate sets each corresponding to an image region in which a reference item may exist, and detect the image regions to determine the image regions in which a reference item does exist as target regions; and a reference item category determining subunit configured to perform feature extraction on the reference item in each target region, generate the feature vector corresponding to the reference item, determine the category of the reference item according to the feature vector, and obtain the collocation relationships between reference items of the various categories, thereby establishing the collocation feature database. The to-be-collocated item determining unit includes: a to-be-collocated item region determining subunit configured to extract pixel features of the picture to be collocated, generate a number of coordinate sets each corresponding to an image region in which the item to be collocated may exist, and detect the image regions to determine the image region in which the item to be collocated does exist as the target region; and a to-be-collocated item category determining subunit configured to perform feature extraction on the item to be collocated in the target region, generate the feature vector corresponding to the item to be collocated, and determine the category of the item to be collocated according to the feature vector.
According to still other embodiments of the present disclosure, an item data processing apparatus is provided, including: a memory and a processor coupled to the memory, the processor being configured to execute the item data processing method described above based on instructions stored in the memory.
According to yet other embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the item data processing method of any of the above embodiments.
In the above embodiments, the individual items and their feature vectors in pictures are identified and extracted by the Faster-RCNN, an item collocation feature database is established, and the degree of matching between items is measured by the distance between feature vectors, thereby achieving item collocation with high matching accuracy and high coverage.
Other features and advantages of the present disclosure will become clear from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present disclosure and constitute a part of this application. The illustrative embodiments of the present disclosure and their description serve to explain the present disclosure and do not unduly limit it. In the drawings:
FIG. 1 shows an exemplary schematic diagram of an item data processing method according to some embodiments of the present disclosure;
FIG. 2 shows an exemplary flowchart of an item data processing method according to other embodiments of the present disclosure;
FIG. 3 shows an exemplary flowchart of an item data processing method according to still other embodiments of the present disclosure;
FIG. 4 shows an exemplary flowchart of an item data processing method according to yet other embodiments of the present disclosure;
FIG. 5 shows an exemplary block diagram of an item data processing apparatus according to some embodiments of the present disclosure;
FIG. 6 shows an exemplary block diagram of an item data processing apparatus according to other embodiments of the present disclosure;
FIG. 7 shows an exemplary block diagram of an item data processing apparatus according to still other embodiments of the present disclosure;
FIG. 8 shows an exemplary block diagram of an item data processing apparatus according to yet other embodiments of the present disclosure.
Detailed description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Meanwhile, it should be understood that, for convenience of description, the dimensions of the parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and in no way limits the present disclosure or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the granted description.
In all examples shown and discussed here, any specific value should be interpreted as merely exemplary rather than limiting; other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
FIG. 1 shows an exemplary schematic diagram of an item data processing method according to some embodiments of the present disclosure.
As shown in FIG. 1, the collocation feature database 11 contains the reference items in reference pictures 1 to N and the collocation relationships between these reference items, for example tops 1 to N, bottoms 1 to N, and shoes 1 to N, with top 2 collocated with bottom 2 and shoes 2, and so on. The item to be collocated in the picture to be collocated 12 is extracted — top X — and the category of the target item is determined to be bottoms according to the user's requirement. The features of top X are compared with those of tops 1 to N in the collocation feature database 11, and top 2, whose features are closest to those of top X, is selected. According to the collocation relationship between top 2 and bottom 2, the recommended item 13 is determined to be bottom 2.
FIG. 2 shows an exemplary flowchart of an item data processing method according to other embodiments of the present disclosure.
The method includes: step 201, determining the category of the item to be collocated; step 202, determining the category of the target item; step 203, determining an analog item; and step 204, determining the collocation result.
As shown in FIG. 2, in step 201, the feature vector of the item to be collocated in a picture to be collocated stored in the database is extracted, and the category of the item to be collocated is determined. For example, the feature vector may be a vector determined by a deep learning model that represents features of the item such as texture, material, illumination, or shape. The category of the item may be tops, pants, shoes, accessories, and the like.
In step 202, the category of the target item is determined in response to the user's collocation requirement.
In step 203, a reference item in the collocation feature database that belongs to the same category as the item to be collocated and matches its features is taken as an analog item.
In some embodiments, the collocation feature database includes the feature vectors corresponding to the reference items and the collocation relationships. For example, the collocation feature database contains collocation schemes of reference items such as various tops, trousers, and shoes, together with feature vectors that represent features such as color, material, and style of these reference items. The analog item and the item to be collocated may both be tops, with similar materials, textures, or styles.
In step 204, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item is selected from the collocation feature database as the collocation result.
In the above embodiments, on the one hand, the present disclosure compares the feature vector of the item extracted from the picture to be collocated with the feature vectors of items in the collocation feature database to find the closest reference item, and determines the recommended item according to the collocation relationship, thereby improving the matching quality of the collocation. On the other hand, the recommended items of the present disclosure are not limited to items the user has browsed, but are mined from the massive pictures in the database, thereby improving the coverage of the items recommended for collocation.
FIG. 3 shows an exemplary flowchart of an item data processing method according to still other embodiments of the present disclosure.
As shown in FIG. 3, the method includes:
in step 301, extracting the feature vector of the item to be collocated in the picture to be collocated and determining the category of the item to be collocated;
in step 302, determining the category of the target item in response to the user's collocation requirement;
in step 303, taking all reference items in the collocation feature database of the same category as the item to be collocated as candidate items;
in step 304, calculating the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated;
in step 305, selecting several candidate items with the smallest Euclidean distance to the item to be collocated as analog items. For example, the 10 candidate items with the shortest Euclidean distances may be selected as analog items.
In some embodiments, the hash Hamming distance between the feature vector of each candidate item and the feature vector of the item to be collocated is first calculated, and N (for example, 50, 100, 120, 150, or 200) candidate items with the smallest hash Hamming distance to the item to be collocated are selected. The Euclidean distance between the feature vectors of these N candidate items and the feature vector of the item to be collocated is then calculated, and M (M<N, for example, 10, 20, 30, or 40) candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items.
In step 306, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item is selected from the collocation feature database as the collocation result.
In the above embodiments, the present disclosure determines the analog items by calculating the Euclidean distance between the item to be collocated in the image and the reference items in a large number of database pictures, and determines the recommended item according to the collocation relationship of the analog item, thereby improving the coverage of item collocation. Using the hash Hamming distance plus Euclidean distance re-ranking technique to coarsely filter and then refine the reference items greatly reduces the number of calculations, thereby improving the timeliness of item collocation.
FIG. 4 shows an exemplary flowchart of an item data processing method according to yet other embodiments of the present disclosure.
As shown in FIG. 4, the method includes: step 401, determining a target region; step 402, determining the category of the item to be collocated; step 403, determining the category of the target item; step 404, determining candidate items; step 405, calculating Euclidean distances; step 406, determining analog items; and step 407, determining the collocation result.
In step 401, the target region in the picture to be collocated in which the item to be collocated is located is determined using the Faster-RCNN.
In some embodiments, the pixel features of the picture to be collocated are first extracted by the Faster-RCNN deep learning model, and a number of coordinate sets in which an item to be collocated may appear are generated. Each coordinate set determines a specific region in the picture to be collocated. These regions are then detected and classified in turn: if the detection result indicates that an item to be collocated exists in a region, the confidence of the item category corresponding to that region is raised; otherwise, it is lowered. Finally, the item to be collocated is located from the regions with high confidence.
In step 402, the feature vector of the item to be collocated in the target region is extracted, and the category of the item to be collocated is determined.
In some embodiments, the feature vector of the item to be collocated is extracted by a pre-trained deep learning model; the feature vector can represent the texture, material, illumination, shape, and the like of the picture of the item to be collocated.
In step 403, the category of the target item is determined in response to the user's collocation requirement.
In step 404, all reference items in the collocation feature database of the same category as the item to be collocated are taken as candidate items.
In some embodiments, feature extraction is performed on a massive set of reference pictures using the above method, and the collocation feature database is established from the obtained feature vectors and collocation relationships of the reference items.
In step 405, the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated is calculated.
In step 406, several candidate items with the smallest Euclidean distance to the item to be collocated are selected as analog items.
In step 407, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item is selected from the collocation feature database as the collocation result.
In the above embodiments, the individual items present in an item picture are identified and extracted using the Faster-RCNN, and the corresponding feature vectors are obtained to represent the item features. The method automatically generates feature vectors via the deep learning model to represent item features, without manually specifying them, and can mine item collocations from a massive set of reference pictures, thereby improving both the coverage and the matching quality of item collocation.
FIG. 5 shows an exemplary block diagram of an item data processing apparatus according to some embodiments of the present disclosure.
As shown in FIG. 5, the apparatus includes: a to-be-collocated item determining unit 51, a target item determining unit 52, an analog item determining unit 53, and an item collocation unit 54.
The to-be-collocated item determining unit 51 extracts the feature vector of the item to be collocated in the picture to be collocated and determines its category. The target item determining unit 52 determines the category of the target item in response to the user's collocation requirement. For example, if the picture to be collocated is a photo of a top and the user wants to collocate bottoms with the top in the photo, the to-be-collocated item determining unit 51 extracts the item features in the photo and determines that the category of the item to be collocated is tops. The target item determining unit 52 determines that the category of the target item is bottoms.
The analog item determining unit 53 takes a reference item in the collocation feature database that belongs to the same category as the item to be collocated and matches its features as the analog item. The item collocation unit 54 selects, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item from the collocation feature database as the collocation result. For example, the analog item determining unit 53 compares the features of the top in the above photo with the features of all tops in the collocation feature database, selects the top whose features are closest, and recalls a reference picture (which may be a model display photo) that contains both that top and a bottom item.
In the above embodiments, on the one hand, after the to-be-collocated item determining unit extracts the feature vector of the item in the picture to be collocated, it is compared with the feature vectors of items in the collocation feature database to find the closest reference item, and the item collocation unit determines the recommended item according to the collocation relationship, thereby improving the matching quality of the collocation. On the other hand, the collocation items recommended by the present disclosure are not limited to items the user has browsed, but are mined from the massive pictures in the database, thereby improving the coverage of the items recommended for collocation.
FIG. 6 shows an exemplary block diagram of an item data processing apparatus according to other embodiments of the present disclosure.
As shown in FIG. 6, the apparatus includes: a collocation feature database establishing unit 60, a to-be-collocated item determining unit 51, a target item determining unit 52, an analog item determining unit 63, and an item collocation unit 54. The analog item determining unit 63 includes a candidate item determining subunit 631 and a feature distance determining subunit 632. For the functions of the to-be-collocated item determining unit 51, the target item determining unit 52, and the item collocation unit 54, reference may be made to the corresponding description of the above embodiments; they are not repeated here for brevity.
The collocation feature database establishing unit 60 extracts the feature vectors of the reference items in a plurality of reference pictures, determines the category of each reference item, and establishes the collocation feature database according to the collocation relationships between reference items of the various categories. For example, the Faster-RCNN image detection model is used to detect model display pictures containing items of each category, obtain the positions of the multiple item entities contained in each picture, and determine the categories to which these items belong. Collocation relationships are then established for the multiple items contained in each picture, thereby building the collocation feature database.
The candidate item determining subunit 631 takes all reference items in the collocation feature database of the same category as the item to be collocated as candidate items. The feature distance determining subunit 632 calculates the Euclidean distance between the feature vector of each candidate item and the feature vector of the item to be collocated, and selects several candidate items with the smallest Euclidean distance to the item to be collocated as the analog items.
In some embodiments, the feature distance determining subunit 632 filters the candidate items using a hash Hamming distance plus Euclidean distance re-ranking technique to determine the analog items. For example, the hash Hamming distance between the feature vector of the item to be collocated and those of all reference items of the same category is first calculated, and N (for example, 50, 100, 120, 150, or 200) reference items with the smallest hash Hamming distance are selected; the Euclidean distance between the feature vector of the item to be collocated and those of the N reference items is then calculated, and M (M<N, for example, 10, 20, 30, or 40) reference items with the smallest Euclidean distance are selected as the analog items.
In the above embodiments, the feature distance determining subunit reduces high-dimensional feature vectors to low-dimensional ones via the hash Hamming distance plus Euclidean distance re-ranking technique, which greatly reduces the number of distance calculations between feature vectors and thereby improves the timeliness of item collocation.
FIG. 7 shows an exemplary block diagram of an item data processing apparatus according to still other embodiments of the present disclosure.
As shown in FIG. 7, the apparatus includes: a collocation feature database establishing unit 70, a to-be-collocated item determining unit 71, a target item determining unit 52, an analog item determining unit 63, and an item collocation unit 54. The analog item determining unit 63 includes a candidate item determining subunit 631 and a feature distance determining subunit 632. The collocation feature database establishing unit 70 includes a reference item region determining subunit 701 and a reference item category determining subunit 702. The to-be-collocated item determining unit 71 includes a to-be-collocated item region determining subunit 711 and a to-be-collocated item category determining subunit 712. For the functions of the target item determining unit 52, the analog item determining unit 63, and the item collocation unit 54, reference may be made to the corresponding description of the above embodiments; they are not repeated here for brevity.
The reference item region determining subunit 701 extracts pixel features of a reference picture and generates a number of coordinate sets. Each coordinate set corresponds to an image region in which a reference item may exist. The image regions are detected to determine the image regions in which a reference item does exist as target regions.
In some embodiments, the reference item region determining subunit 701 extracts the pixel features of the reference picture using the Faster-RCNN deep learning model and generates a number of coordinate sets in which a target object may appear, where each coordinate set locates a specific region. The reference item region determining subunit 701 then detects and classifies the located regions in turn, raising the confidence of the item category (such as tops, bottoms, or accessories) corresponding to the regions in which a target object does exist and lowering it otherwise. Finally, each region with high confidence is examined to locate the target object of the corresponding item category.
The reference item category determining subunit 702 performs feature extraction on the reference item in the target region, generates the feature vector corresponding to the reference item, determines the category of the reference item according to the feature vector, and obtains the collocation relationships between reference items of the various categories, thereby establishing the collocation feature database.
The to-be-collocated item region determining subunit 711 extracts pixel features of the picture to be collocated and generates a number of coordinate sets. Each coordinate set corresponds to an image region in which the item to be collocated may exist. The image regions are detected to determine the image region in which the item to be collocated does exist as the target region. The to-be-collocated item category determining subunit 712 performs feature extraction on the item to be collocated in the target region and generates the feature vector corresponding to the item to be collocated. The category of the item to be collocated is determined according to the feature vector.
In some embodiments, the to-be-collocated item category determining subunit 712 performs feature extraction on the target region using a pre-trained Faster-RCNN deep learning feature model, and the item to be collocated in the target region corresponds to a 1024-dimensional feature vector. This feature vector represents image features of the item picture such as texture, material, illumination, and shape; the features of such deep learning feature models need not be specified manually but are learned automatically by the model, which determines which features best characterize the picture.
In the above embodiments, the individual items present in an item picture are identified and extracted using the Faster-RCNN, and the corresponding feature vectors are obtained to represent the item features. The method automatically generates feature vectors via the deep learning model to represent item features, without manually specifying them, and can mine item collocations from a massive set of reference pictures, thereby improving both the coverage and the matching quality of item collocation.
FIG. 8 shows an exemplary block diagram of an item data processing apparatus according to yet other embodiments of the present disclosure.
As shown in FIG. 8, the apparatus 80 of this embodiment includes: a memory 801 and a processor 802 coupled to the memory 801, the processor 802 being configured to execute the item data processing method in any of the embodiments of the present disclosure based on instructions stored in the memory 801.
The memory 801 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, applications, a boot loader, a database, and other programs.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the item data processing method in any of the above embodiments. For example, the computer-readable storage medium is a non-transitory computer-readable storage medium.
The item data processing method, item data processing apparatus, and computer-readable storage medium according to the present disclosure have thus been described in detail. Some details well known in the art are not described in order to avoid obscuring the concept of the present disclosure; those skilled in the art can fully understand from the above description how to implement the technical solutions disclosed here.
The methods and systems of the present disclosure may be implemented in many ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is merely illustrative, and the steps of the method of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are provided for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments may be modified without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (12)

  1. An item data processing method, comprising:
    extracting a feature vector of an item to be collocated in a picture to be collocated, and determining the category of the item to be collocated;
    determining the category of a target item in response to a user's collocation requirement;
    taking a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item; and
    selecting, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item from the collocation feature database as the collocation result;
    wherein the collocation feature database comprises a feature vector corresponding to each reference item and the collocation relationships between the reference items.
  2. The item data processing method according to claim 1, further comprising:
    extracting feature vectors of reference items in a plurality of reference pictures, determining the category of each reference item, and establishing the collocation feature database according to the collocation relationships between reference items of the various categories.
  3. The item data processing method according to claim 2, wherein taking a reference item in the collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item comprises:
    taking all reference items in the collocation feature database of the same category as the item to be collocated as candidate items;
    calculating the Euclidean distance between the feature vector corresponding to each candidate item and the feature vector corresponding to the item to be collocated; and
    selecting one or more candidate items with the smallest Euclidean distance to the item to be collocated as the analog items.
  4. The item data processing method according to claim 2, wherein:
    all reference items in the collocation feature database of the same category as the item to be collocated are taken as candidate items;
    the hash Hamming distance between the feature vector corresponding to each candidate item and the feature vector corresponding to the item to be collocated is calculated, and a first number of candidate items with the smallest hash Hamming distance to the item to be collocated are selected to form a candidate set;
    the Euclidean distance between the feature vector corresponding to each candidate item in the candidate set and the feature vector corresponding to the item to be collocated is calculated, and a second number of candidate items with the smallest Euclidean distance to the item to be collocated are selected as the analog items, the first number being greater than the second number.
  5. The item data processing method according to claim 4, wherein establishing the collocation feature database comprises:
    using a Faster Region Convolutional Neural Network (Faster-RCNN) to extract pixel features of the reference pictures and generate a number of coordinate sets, each coordinate set corresponding to an image region in which a reference item may exist;
    detecting the image regions to determine the image regions in which a reference item does exist as target regions, and performing feature extraction on the reference item in each target region to generate the feature vector corresponding to the reference item; and
    determining the category of the reference item according to the feature vector, and obtaining the collocation relationships between reference items of the various categories, thereby establishing the collocation feature database.
  6. The item data processing method according to claim 5, wherein determining the category of the item to be collocated comprises:
    using the Faster-RCNN to extract pixel features of the picture to be collocated and generate a number of coordinate sets, each coordinate set corresponding to an image region in which the item to be collocated may exist;
    detecting the image regions to determine the image region in which the item to be collocated does exist as the target region, and performing feature extraction on the item to be collocated in the target region to generate the feature vector corresponding to the item to be collocated; and
    determining the category of the item to be collocated according to the feature vector.
  7. An item data processing apparatus, comprising:
    a to-be-collocated item determining unit configured to extract a feature vector of an item to be collocated in a picture to be collocated and determine the category of the item to be collocated;
    a target item determining unit configured to determine the category of a target item in response to a user's collocation requirement;
    an analog item determining unit configured to take a reference item in a collocation feature database that belongs to the same category as the item to be collocated and matches its features as an analog item; and
    an item collocation unit configured to select, according to the collocation relationship corresponding to the analog item, a reference item of the same category as the target item from the collocation feature database as the collocation result;
    wherein the collocation feature database comprises a feature vector corresponding to each reference item and the collocation relationships between the reference items.
  8. The item data processing apparatus according to claim 7, further comprising:
    a collocation feature database establishing unit configured to extract feature vectors of the reference items in a plurality of reference pictures, determine the category of each reference item, and establish the collocation feature database according to the collocation relationships between reference items of the various categories.
  9. The item data processing apparatus according to claim 8, wherein the analog item determining unit comprises:
    a candidate item determining subunit configured to take all reference items in the collocation feature database of the same category as the item to be collocated as candidate items; and
    a feature distance determining subunit configured to calculate the Euclidean distance between the feature vector corresponding to each candidate item and the feature vector corresponding to the item to be collocated, and to select several candidate items with the smallest Euclidean distance to the item to be collocated as the analog items.
  10. The item data processing apparatus according to claim 9, wherein the collocation feature database establishing unit comprises:
    a reference item region determining subunit configured to extract pixel features of the reference pictures, generate a number of coordinate sets each corresponding to an image region in which a reference item may exist, and detect the image regions to determine the image regions in which a reference item does exist as target regions; and
    a reference item category determining subunit configured to perform feature extraction on the reference item in each target region, generate the feature vector corresponding to the reference item, determine the category of the reference item according to the feature vector, and obtain the collocation relationships between reference items of the various categories, thereby establishing the collocation feature database;
    and the to-be-collocated item determining unit comprises:
    a to-be-collocated item region determining subunit configured to extract pixel features of the picture to be collocated, generate a number of coordinate sets each corresponding to an image region in which the item to be collocated may exist, and detect the image regions to determine the image region in which the item to be collocated does exist as the target region; and
    a to-be-collocated item category determining subunit configured to perform feature extraction on the item to be collocated in the target region, generate the feature vector corresponding to the item to be collocated, and determine the category of the item to be collocated according to the feature vector.
  11. An item data processing apparatus, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to execute the item data processing method according to any one of claims 1 to 6 based on instructions stored in the memory.
  12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the item data processing method according to any one of claims 1 to 6.
PCT/CN2017/119516 2017-02-20 2017-12-28 Item data processing method, apparatus, and computer-readable storage medium WO2018149237A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710089504.9 2017-02-20
CN201710089504.9A CN106846122B (zh) 2017-02-20 2017-02-20 商品数据处理方法和装置

Publications (1)

Publication Number Publication Date
WO2018149237A1 true WO2018149237A1 (zh) 2018-08-23

Family

ID=59127960

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119516 WO2018149237A1 (zh) 2017-02-20 2017-12-28 物品数据处理方法、装置和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN106846122B (zh)
WO (1) WO2018149237A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476621A (zh) * 2019-01-24 2020-07-31 百度在线网络技术(北京)有限公司 用户物品推荐方法和装置
CN111798286A (zh) * 2020-04-26 2020-10-20 北京沃东天骏信息技术有限公司 物品搭配方法、物品搭配模型的构建方法和计算机
CN111797664A (zh) * 2019-08-19 2020-10-20 北京沃东天骏信息技术有限公司 视频中的目标检测方法、装置和计算机可读存储介质
CN113378601A (zh) * 2020-03-09 2021-09-10 深圳码隆科技有限公司 防止货损的方法、自助设备及存储介质
CN113744011A (zh) * 2020-06-17 2021-12-03 北京沃东天骏信息技术有限公司 物品搭配方法和物品搭配装置

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846122B (zh) * 2017-02-20 2021-02-26 北京京东尚科信息技术有限公司 商品数据处理方法和装置
CN107463946B (zh) * 2017-07-12 2020-06-23 浙江大学 一种结合模板匹配与深度学习的商品种类检测方法
CN109034980B (zh) * 2018-08-23 2021-12-28 深圳码隆科技有限公司 一种搭配商品推荐方法、装置和用户终端
CN110874771A (zh) * 2018-08-29 2020-03-10 北京京东尚科信息技术有限公司 一种商品搭配的方法和装置
CN113127728A (zh) * 2020-01-16 2021-07-16 北京沃东天骏信息技术有限公司 一种处理物品场景图的方法和装置
CN113628011B (zh) * 2021-08-16 2023-07-25 唯品会(广州)软件有限公司 一种商品搭配方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200249A (zh) * 2014-08-26 2014-12-10 重庆邮电大学 一种衣物自动搭配的方法,装置及系统
CN105138610A (zh) * 2015-08-07 2015-12-09 深圳码隆科技有限公司 一种基于图像元素的图像特征值预测方法和装置
CN105224775A (zh) * 2015-11-12 2016-01-06 中国科学院重庆绿色智能技术研究院 基于图片处理来对衣服进行搭配的方法和装置
CN106846122A (zh) * 2017-02-20 2017-06-13 北京京东尚科信息技术有限公司 商品数据处理方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331417B (zh) * 2014-10-09 2018-01-02 深圳码隆科技有限公司 一种用户个人服饰的搭配方法
CN104951966A (zh) * 2015-07-13 2015-09-30 百度在线网络技术(北京)有限公司 推荐服饰商品的方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200249A (zh) * 2014-08-26 2014-12-10 重庆邮电大学 一种衣物自动搭配的方法,装置及系统
CN105138610A (zh) * 2015-08-07 2015-12-09 深圳码隆科技有限公司 一种基于图像元素的图像特征值预测方法和装置
CN105224775A (zh) * 2015-11-12 2016-01-06 中国科学院重庆绿色智能技术研究院 基于图片处理来对衣服进行搭配的方法和装置
CN106846122A (zh) * 2017-02-20 2017-06-13 北京京东尚科信息技术有限公司 商品数据处理方法和装置

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476621A (zh) * 2019-01-24 2020-07-31 百度在线网络技术(北京)有限公司 用户物品推荐方法和装置
CN111476621B (zh) * 2019-01-24 2023-09-22 百度在线网络技术(北京)有限公司 用户物品推荐方法和装置
CN111797664A (zh) * 2019-08-19 2020-10-20 北京沃东天骏信息技术有限公司 视频中的目标检测方法、装置和计算机可读存储介质
CN111797664B (zh) * 2019-08-19 2024-04-19 北京沃东天骏信息技术有限公司 视频中的目标检测方法、装置和计算机可读存储介质
CN113378601A (zh) * 2020-03-09 2021-09-10 深圳码隆科技有限公司 防止货损的方法、自助设备及存储介质
CN111798286A (zh) * 2020-04-26 2020-10-20 北京沃东天骏信息技术有限公司 物品搭配方法、物品搭配模型的构建方法和计算机
CN113744011A (zh) * 2020-06-17 2021-12-03 北京沃东天骏信息技术有限公司 物品搭配方法和物品搭配装置

Also Published As

Publication number Publication date
CN106846122A (zh) 2017-06-13
CN106846122B (zh) 2021-02-26

Similar Documents

Publication Publication Date Title
WO2018149237A1 (zh) 物品数据处理方法、装置和计算机可读存储介质
US11462001B2 (en) Textile matching using color and pattern recognition and methods of use
US10747826B2 (en) Interactive clothes searching in online stores
KR102244561B1 (ko) 이미지 특징 데이터 추출 및 사용
WO2019133849A1 (en) Computer vision and image characteristic search
US20140310304A1 (en) System and method for providing fashion recommendations
US20160063588A1 (en) Methods and systems for virtual fitting rooms or hybrid stores
US20130185288A1 (en) Product search device, product search method, and computer program product
US11475500B2 (en) Device and method for item recommendation based on visual elements
US20180173807A1 (en) System for managing a wardrobe
KR102580009B1 (ko) 의류 피팅 시스템 및 의류 피팅 시스템의 동작 방법
US9996763B2 (en) Systems and methods for evaluating suitability of an article for an individual
US11972466B2 (en) Computer storage media, method, and system for exploring and recommending matching products across categories
US9953242B1 (en) Identifying items in images using regions-of-interest
US20150269189A1 (en) Retrieval apparatus, retrieval method, and computer program product
JP2016218578A (ja) 画像検索装置、画像検索システム、画像検索方法、及び画像検索プログラム
WO2023062668A1 (ja) 情報処理装置、情報処理方法、情報処理システム、およびプログラム
CN115344730A (zh) 搭配推荐方法、装置、储物柜、衣柜、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17897123

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/11/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17897123

Country of ref document: EP

Kind code of ref document: A1