WO2019148729A1 - Luxury goods identification method, electronic device and storage medium (奢侈品辨别方法、电子装置及存储介质) - Google Patents

Luxury goods identification method, electronic device and storage medium

Info

Publication number
WO2019148729A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
neural network
convolutional neural
luxury
feature vector
Prior art date
Application number
PCT/CN2018/089880
Other languages
English (en)
French (fr)
Inventor
王健宗
王晨羽
马进
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019148729A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on distances to training or reference patterns
    • G06F 18/24133 - Distances to prototypes
    • G06F 18/24143 - Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 - Local feature extraction by analysis of parts of the pattern, by matching or filtering
    • G06V 10/449 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 - Biologically inspired filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 - Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/80 - Recognising image objects characterised by unique random patterns

Definitions

  • The present application relates to the field of computer technology, and in particular to a luxury goods identification method, an electronic device and a storage medium.
  • In order to achieve the above object, the present application provides a luxury goods identification method, the method comprising: a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures; a model training step: constructing a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library; a first convolution step: inputting the genuine-picture set and the fake-picture set into the convolutional neural network model respectively, and obtaining, through convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the fake-picture set; a second convolution step: when a luxury goods identification request is received from a user, obtaining a picture to be identified from the request, inputting the picture to be identified into the convolutional neural network model, and obtaining its feature vector through convolution with the convolution kernels of the model; a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set; and a result output step: obtaining, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified, and outputting the identification result.
  • In order to achieve the above object, the present application further provides an electronic device including a memory and a processor, the memory storing a luxury goods identification program which, when executed by the processor, implements the following steps: the sample acquisition step, model training step, first convolution step, second convolution step, feature comparison step and result output step defined for the luxury goods identification method above.
  • In addition, the present application further provides a computer-readable storage medium that includes a luxury goods identification program which, when executed by a processor, implements any of the steps of the luxury goods identification method described above.
  • With the luxury goods identification method, electronic device and storage medium proposed by the present application, a sample picture library is first acquired and a convolutional neural network is constructed and trained on it to obtain a convolutional neural network model; the genuine-picture set and the fake-picture set in the sample picture library are then input into the model to obtain the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set; when a user's luxury goods identification request is received, the picture to be identified is obtained and input into the model to obtain its feature vector; finally, the feature vector of the picture to be identified is compared with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set, and a genuine/fake identification result is obtained from the comparison. Luxury goods can thus be identified for the user conveniently and accurately at any time, protecting the user's interests.
  • FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the electronic device of the present application;
  • FIG. 2 is a schematic diagram of the interaction between the electronic device and a client according to a preferred embodiment of the present application;
  • FIG. 3 is a flowchart of a preferred embodiment of the luxury goods identification method of the present application;
  • FIG. 4 is a flowchart of a preferred embodiment of the sample picture library construction method of FIG. 3;
  • FIG. 5 is a flowchart of a preferred embodiment of the convolutional neural network model training method of FIG. 3;
  • FIG. 6 is a block diagram of the modules of the luxury goods identification program of FIG. 1.
  • Embodiments of the present application can be implemented as a method, apparatus, device, system or computer program product. Accordingly, the application can be embodied as complete hardware, complete software (including firmware, resident software, microcode, etc.), or a combination of hardware and software.
  • According to the embodiments of the present application, a luxury goods identification method, an electronic device and a storage medium are proposed.
  • FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the electronic device 1 of the present application.
  • The electronic device 1 may be a terminal device with storage and computing capability, such as a server, a portable computer or a desktop computer.
  • The electronic device 1 includes a memory 11, a processor 12, a network interface 13 and a communication bus 14.
  • The network interface 13 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
  • The communication bus 14 is used to implement connection and communication between the above components.
  • The memory 11 includes at least one type of readable storage medium.
  • The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card or a card-type memory.
  • In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • In other embodiments, the readable storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the electronic device 1.
  • In this embodiment, the readable storage medium of the memory 11 is generally used to store the luxury goods identification program 10 installed on the electronic device 1, as well as the sample picture library of a luxury brand and the database 4 of the convolutional neural network model used for luxury goods identification.
  • The memory 11 may also be used to temporarily store data that has been output or is to be output.
  • The processor 12 may, in some embodiments, be a central processing unit (CPU), microprocessor or other data-processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the luxury goods identification program 10.
  • FIG. 1 shows only the electronic device 1 with the components 11-14 and the luxury goods identification program 10, but it should be understood that not all illustrated components are required, and more or fewer components may be implemented instead.
  • Optionally, the electronic device 1 may further include a user interface.
  • The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones.
  • Optionally, the user interface may also include a standard wired interface and a wireless interface.
  • Optionally, the electronic device 1 may further include a display, which may also be called a display screen or a display unit.
  • In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display or an organic light-emitting diode (OLED) display.
  • The display is used to display the information processed in the electronic device 1 and to display a visual user interface.
  • Optionally, the electronic device 1 further includes a touch sensor.
  • The area provided by the touch sensor for the user to perform touch operations is called the touch area.
  • The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor or the like.
  • Moreover, the touch sensor includes not only contact-type touch sensors but may also include proximity-type touch sensors and the like.
  • The touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array. The user can start the luxury goods identification program 10 by touching the touch area.
  • The area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects touch operations triggered by the user on the basis of the touch display screen.
  • The electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit and so on, which are not described in detail here.
  • FIG. 2 is a schematic diagram of the interaction between the electronic device 1 and the client 2 according to a preferred embodiment of the present application.
  • The luxury goods identification program 10 runs in the electronic device 1, and the preferred embodiment of the electronic device 1 in FIG. 2 is a server.
  • The electronic device 1 is communicatively connected to the client 2 through a network 3.
  • The client 2 can run on various types of terminal devices, such as smartphones and portable computers.
  • The luxury goods identification program 10 can receive the user's luxury goods identification request, identify the luxury item in the picture to be identified that the user has input, and return the identification result to the client 2.
  • Referring to FIG. 3, a flowchart of a preferred embodiment of the luxury goods identification method of the present application is shown.
  • When the processor 12 of the electronic device 1 executes the luxury goods identification program 10 stored in the memory 11, the following steps of the luxury goods identification method are implemented:
  • Step S1: obtain a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures.
  • The luxury brand may be, for example, Chanel, Gucci or Hermes.
  • The method for constructing the sample picture library includes the following steps:
  • Step S11: collect a plurality of candidate pictures corresponding to the luxury brand from a plurality of specified sources, determine a reference-information collection rule for each candidate picture according to its collection source, and collect the reference information of each candidate picture using the corresponding rule.
  • The specified sources may be, for example, the official website of the luxury brand, physical stores of the luxury brand, comprehensive shopping websites, counterfeit-goods markets and the like.
  • The candidate pictures may be collected manually, for example by photographing goods of the luxury brand in a physical store or a counterfeit-goods market, or by manually downloading candidate pictures from the brand's official website or comprehensive shopping websites; they may also be acquired automatically by a computer, for example by using web-crawler technology to download pictures from the brand's official website and comprehensive shopping websites.
  • The reference information is information that can serve as a reference for deciding whether a candidate picture is a genuine picture or a fake picture.
  • A genuine picture is a candidate picture in which the luxury item is recognized as genuine,
  • and a fake picture is a candidate picture in which the luxury item is recognized as a counterfeit.
  • If the collection source is a physical store or the official website of the luxury brand, the reference-information collection rule may be to collect the anti-counterfeiting mark, website or store-name information in the candidate picture;
  • when the anti-counterfeiting mark, website or store-name information is collected as reference information, the corresponding candidate picture can be determined to be a genuine picture.
  • If the collection source is a comprehensive shopping website, the reference-information collection rule may be to collect the selling price of the corresponding item, the buyers' reviews, the name of the seller's listing and so on;
  • when these have been collected, whether the candidate picture is a genuine picture or a fake picture is decided according to the selling price, the buyers' reviews and the listing name. For example, when the difference between the selling price and the official price of the corresponding luxury item is greater than a preset threshold, the candidate picture is determined to be a fake picture.
  • Step S12: determine, according to the reference information of each candidate picture, whether the candidate picture is a genuine picture or a fake picture, and label the candidate picture according to the result.
  • If it cannot be determined from the reference information whether a candidate picture is genuine or fake, that candidate picture is discarded.
  • Step S13: preprocess each candidate picture, and extract feature information of preset features from each preprocessed candidate picture, the preprocessing including target-object extraction, size normalization and color-space normalization.
  • Target-object extraction means detecting the contour of the luxury item in the candidate picture using contour-detection technology and, according to the detected contour and a cropping ratio, cropping the luxury item out of the candidate picture as the target object.
  • Size normalization means converting the size of the candidate picture, after the target object has been cropped out, to a preset size.
  • Color-space normalization means converting the color space of the cropped candidate pictures uniformly into the same color space.
  • Size normalization and color-space normalization give the candidate pictures relatively uniform characteristics for subsequent comparison and processing.
  • The preset features include, for example, the grayscale and brightness of the candidate picture, the pattern texture of the luxury item, the overall size proportions, and the size proportions and position information of metal parts.
  • Step S14: integrate each preprocessed candidate picture, its label indicating whether it is a genuine or a fake picture, and the extracted feature information, to form the sample picture library. According to the labels, the pictures in the sample picture library can be divided into the genuine-picture set and the fake-picture set.
  • Step S2: construct a convolutional neural network and train the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library.
  • The method for training the convolutional neural network model includes the following steps:
  • Step S21: use each convolution kernel in the constructed convolutional neural network to convolve each picture in the sample picture library in turn, obtaining, for each convolution kernel, a feature vector for each picture.
  • Using each convolution kernel to convolve each picture in turn means using each convolution kernel to convolve, in turn, the feature information of each picture in the sample picture library.
  • Step S22: for each convolution kernel, compute the information entropy of the feature vectors of all pictures in the sample picture library corresponding to that kernel, thereby obtaining the information entropy corresponding to the kernel.
  • The larger the information entropy, the more information the corresponding convolution kernel carries, and the more important that kernel is.
  • Step S23: set the weight value of each convolution kernel according to its information entropy, and generate the convolutional neural network model corresponding to the sample picture library from the convolution kernels and their weight values.
  • For example, the larger the weight value of a convolution kernel, the greater the number of convolution iterations performed with that kernel; alternatively, several convolution kernels with smaller weight values are pruned and the convolutional neural network model is built from the remaining kernels.
  • Setting the kernel weights according to the information entropy, that is, according to each kernel's importance, allows models with different emphases to be generated for different luxury brands, so that feature-vector extraction is more targeted and more discriminative for each brand.
  • The convolutional neural network model may adopt the VGG-16 architecture and may include, for example, five convolution-pooling blocks, two fully connected layers and one classification layer.
  • An example of the model's parameter table is given in the description below.
  • In that example, the numbers of filters in the five convolution-pooling blocks (MaxPool) are 64, 128, 256, 512 and 512, respectively.
  • The classification layer (Softmax) defines the number of classification results as 2.
  • Step S3: input the genuine-picture set and the fake-picture set into the convolutional neural network model respectively, and obtain, through convolution with the convolution kernels of the model, the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set.
  • Specifically, the convolution kernels of the model convolve each genuine picture in the genuine-picture set in turn, and the feature vectors obtained by convolving all the genuine pictures are merged into a set, which constitutes the feature vector set corresponding to the genuine-picture set.
  • Likewise, the convolution kernels of the model convolve each fake picture in the fake-picture set in turn, and the feature vectors obtained by convolving all the fake pictures are merged into a set, which constitutes the feature vector set corresponding to the fake-picture set.
  • Step S4: when a luxury goods identification request is received from a user, obtain the picture to be identified from the request, input it into the convolutional neural network model,
  • and obtain the feature vector of the picture to be identified through convolution with the convolution kernels of the model.
  • The picture to be identified is a picture of the luxury item whose authenticity is in question, for example a photograph taken by the user of the item, or a photograph of the item obtained by the user from a third party.
  • The user can log in to the luxury goods identification program 10 through the client 2 and upload the picture to be identified, thereby sending the luxury goods identification request to the program 10.
  • Before being input into the model, the picture to be identified also needs to be preprocessed, the preprocessing including target-object extraction, size normalization, color-space normalization and so on.
  • Step S5: compare the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set. Specifically, step S5 includes computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set to obtain a first similarity value, computing the cosine similarity between that feature vector and the feature vector set corresponding to the fake-picture set to obtain a second similarity value, and comparing the magnitudes of the two similarity values to obtain the comparison result.
  • Step S6: obtain, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified, and output the identification result.
  • Obtaining the identification result according to the comparison result includes the following:
  • if the first similarity value is greater than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the genuine-picture set, the luxury item in the picture to be identified is determined to be genuine; if the first similarity value is smaller than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the fake-picture set, the luxury item in the picture to be identified is determined to be fake.
  • In addition, after the identification result has been obtained, the convolutional neural network model may be updated according to the identification result, thereby supplementing and refining the model and making it more accurate.
  • The update step includes:
  • merging the picture to be identified into the genuine-picture set or the fake-picture set according to the identification result, to generate an updated sample picture library;
  • constructing a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model.
  • For the method of training the updated sample picture library with the convolutional neural network, reference may be made to steps S21 to S23, which are not repeated here.
  • According to the luxury goods identification method provided by this embodiment, a sample picture library is first acquired and a convolutional neural network is constructed and trained on it to obtain a convolutional neural network model; the genuine-picture set and the fake-picture set in the sample picture library are then input into the model to obtain the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set; when a user's luxury goods identification request is received, the picture to be identified is obtained and input into the model to obtain its feature vector; finally, that feature vector is compared with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set, and a genuine/fake identification result is obtained from the comparison. Luxury goods can thus be identified for the user conveniently and accurately at any time, protecting the user's interests.
  • FIG. 6 is a block diagram of the modules of the luxury goods identification program 10 of FIG. 1.
  • In this embodiment, the luxury goods identification program 10 is divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to carry out the present application.
  • A module referred to in this application is a series of computer program instruction segments capable of performing a particular function.
  • The luxury goods identification program 10 can be divided into a sample acquisition module 110, a model training module 120, a first convolution module 130, a second convolution module 140, a feature comparison module 150 and a result output module 160.
  • The sample acquisition module 110 is configured to obtain a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures.
  • The sample acquisition module 110 is further configured to first construct the sample picture library, which specifically includes the following:
  • the sample acquisition module 110 collects a plurality of candidate pictures corresponding to the luxury brand from the specified sources, determines a reference-information collection rule for each candidate picture according to its collection source, and collects the reference information of each candidate picture using the corresponding rule;
  • the specified sources may be, for example, the official website of the luxury brand, physical stores of the luxury brand, comprehensive shopping websites, counterfeit-goods markets and the like;
  • the candidate pictures may be collected manually or acquired automatically by a computer over the Internet;
  • the sample acquisition module 110 determines, according to the reference information of each candidate picture, whether the candidate picture is a genuine picture or a fake picture, and labels the candidate picture according to the result;
  • the sample acquisition module 110 preprocesses each candidate picture and extracts feature information of preset features from each preprocessed candidate picture, the preprocessing including target-object extraction, size normalization and color-space normalization;
  • target-object extraction means detecting the contour of the luxury item in the candidate picture using contour-detection technology and, according to the detected contour and a cropping ratio, cropping the luxury item out of the candidate picture as the target object;
  • size normalization means converting the size of the cropped candidate picture to a preset size;
  • color-space normalization means converting the color space of the cropped candidate pictures uniformly into the same color space;
  • the sample acquisition module 110 integrates each preprocessed candidate picture, its label indicating whether it is a genuine or fake picture, and the extracted feature information, to form the sample picture library; according to the labels, the pictures in the sample picture library can be divided into the genuine-picture set and the fake-picture set.
  • The model training module 120 is configured to construct a convolutional neural network and train the sample picture library with it to obtain the convolutional neural network model corresponding to the sample picture library.
  • The training of the convolutional neural network model includes the following:
  • the model training module 120 uses each convolution kernel in the constructed convolutional neural network to convolve each picture in the sample picture library in turn, obtaining, for each convolution kernel, a feature vector for each picture;
  • for each convolution kernel, the model training module 120 computes the information entropy of the feature vectors of all pictures in the sample picture library corresponding to that kernel, thereby obtaining the information entropy corresponding to the kernel;
  • the model training module 120 sets the weight value of each convolution kernel according to its information entropy, and generates the convolutional neural network model corresponding to the sample picture library from the convolution kernels and their weight values; for example, the larger the weight value of a convolution kernel, the greater the number of convolution iterations performed with that kernel, or alternatively several convolution kernels with smaller weight values are pruned and the model is built from the remaining kernels.
  • The convolutional neural network model may adopt the VGG-16 architecture and may include, for example, five convolution-pooling blocks, two fully connected layers and one classification layer.
  • The convolution-pooling blocks are used to extract and retain features,
  • the fully connected layers are used to map the extracted features onto a feature vector,
  • and the classification layer is used to define the number of final classification results.
  • The first convolution module 130 is configured to input the genuine-picture set and the fake-picture set into the convolutional neural network model respectively and obtain, through convolution with the convolution kernels of the model, the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set.
  • Specifically, the convolution kernels of the model convolve each genuine picture in the genuine-picture set in turn, and the feature vectors obtained by convolving all the genuine pictures are merged into a set, which constitutes the feature vector set corresponding to the genuine-picture set.
  • Likewise, the convolution kernels of the model convolve each fake picture in the fake-picture set in turn, and the feature vectors obtained by convolving all the fake pictures are merged into a set, which constitutes the feature vector set corresponding to the fake-picture set.
  • The second convolution module 140 is configured to obtain, when a luxury goods identification request is received from a user, the picture to be identified from the request, input it into the convolutional neural network model, and obtain the feature vector of the picture to be identified through convolution with the convolution kernels of the model.
  • The second convolution module 140 is further configured to preprocess the picture to be identified first, the preprocessing including target-object extraction, size normalization, color-space normalization and so on.
  • The feature comparison module 150 is configured to compare the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set. Specifically, the comparison includes the following:
  • the feature comparison module 150 computes the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set, obtaining a first similarity value, and computes the cosine similarity between that feature vector and the feature vector set corresponding to the fake-picture set, obtaining a second similarity value;
  • the feature comparison module 150 compares the magnitudes of the first similarity value and the second similarity value to obtain the comparison result.
  • The result output module 160 is configured to obtain, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified and to output the identification result.
  • If the comparison result is that the first similarity value is greater than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the genuine-picture set, the result output module 160 determines that the luxury item in the picture to be identified is genuine.
  • If the comparison result is that the first similarity value is smaller than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the fake-picture set, the result output module 160 determines that the luxury item in the picture to be identified is fake.
  • The model training module 120 is further configured to update the convolutional neural network model. After the identification result for the picture to be identified has been obtained, the model training module 120 may update the convolutional neural network model according to the identification result, thereby supplementing and refining the model and making it more accurate.
  • The update includes the following:
  • the sample acquisition module 110 merges the picture to be identified into the genuine-picture set or the fake-picture set according to the identification result, generating an updated sample picture library;
  • the model training module 120 constructs a convolutional neural network and trains the updated sample picture library with it to obtain an updated convolutional neural network model.
  • For the method of training the updated sample picture library with the convolutional neural network, reference may be made to the descriptions of the sample acquisition module 110 and the model training module 120, which are not repeated here.
  • The memory 11 containing the readable storage medium may include an operating system, the luxury goods identification program 10 and the database 4.
  • When the processor 12 executes the luxury goods identification program 10 stored in the memory 11, the following steps are implemented:
  • a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures;
  • a model training step: constructing a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library;
  • a first convolution step: inputting the genuine-picture set and the fake-picture set into the convolutional neural network model respectively, and obtaining, through convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the fake-picture set;
  • a second convolution step: when a luxury goods identification request is received from a user, obtaining the picture to be identified from the request, inputting it into the convolutional neural network model, and obtaining its feature vector through convolution with the convolution kernels of the model;
  • a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set;
  • a result output step: obtaining, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified, and outputting the identification result.
  • The method for constructing the sample picture library includes:
  • preprocessing each candidate picture and extracting feature information of preset features from each preprocessed candidate picture, the preprocessing including target-object extraction, size normalization and color-space normalization;
  • integrating each preprocessed candidate picture, its label indicating whether it is a genuine or fake picture, and the extracted feature information, to form the sample picture library.
  • The model training step includes:
  • using each convolution kernel in the constructed convolutional neural network to convolve each picture in the sample picture library in turn, obtaining, for each convolution kernel, a feature vector for each picture;
  • the convolutional neural network model includes, for example, five convolution-pooling blocks, two fully connected layers and one classification layer.
  • The feature comparison step includes the cosine-similarity comparison described above.
  • Obtaining the genuine/fake identification result for the luxury item in the picture to be identified according to the comparison result includes:
  • determining, if the comparison result is that the first similarity value is smaller than the second similarity value, that the luxury item in the picture to be identified is fake.
  • The result output step is further followed by a model update step:
  • constructing a convolutional neural network and training the updated sample picture library with the convolutional neural network to obtain an updated convolutional neural network model.
  • An embodiment of the present application further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory and the like.
  • The computer-readable storage medium stores pictures of luxury goods, a database of the identification model, the luxury goods identification program 10 and so on. When the luxury goods identification program 10 is executed by the processor 12, it performs the sample acquisition, model training, first convolution, second convolution, feature comparison and result output steps described above; the construction of the sample picture library, the model training sub-steps, the feature comparison sub-steps, the derivation of the genuine/fake identification result and the model update step are likewise as described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a luxury goods identification method, comprising: obtaining a sample picture library that includes a genuine-picture set and a fake-picture set; constructing a convolutional neural network and training it on the sample picture library to obtain a convolutional neural network model; inputting the genuine-picture set and the fake-picture set into the convolutional neural network model to obtain the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set; obtaining a picture to be identified and inputting it into the convolutional neural network model to obtain its feature vector; comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set; and obtaining and outputting a genuine/fake identification result according to the comparison. The present application also provides an electronic device and a storage medium. With the present application, the authenticity of luxury goods can be identified accurately, avoiding economic loss and damage to the consumer's image.

Description

Luxury goods identification method, electronic device and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on February 1, 2018, with application number 201810103409.4 and the invention title "Luxury goods identification method, electronic device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to a luxury goods identification method, an electronic device and a storage medium.
Background
With economic development, luxury goods of brands such as Chanel and Gucci have gradually become objects that people love and pursue. Because luxury goods are usually expensive, and the luxury goods one owns reflect to some extent the taste and status of the consumer, consumers are very careful in choosing them. However, the market is currently flooded with counterfeit luxury goods, and as the quality of imitations improves, some high-grade imitations have even reached the point of passing for the genuine article; it is difficult for consumers to tell genuine from fake with the naked eye on the basis of their own knowledge, and contacting a luxury goods expert is not convenient for ordinary consumers. Buying a counterfeit luxury item not only causes the consumer considerable economic loss, but also damages the consumer's social image.
Summary
In view of the above, it is necessary to provide a luxury goods identification method, an electronic device and a storage medium that can conveniently identify the authenticity of luxury goods for consumers at any time, avoiding economic loss and damage to the consumer's image.
To achieve the above object, the present application provides a luxury goods identification method, the method comprising: a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures; a model training step: constructing a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library; a first convolution step: inputting the genuine-picture set and the fake-picture set into the convolutional neural network model respectively, and obtaining, through convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the fake-picture set; a second convolution step: when a luxury goods identification request is received from a user, obtaining a picture to be identified from the request, inputting the picture to be identified into the convolutional neural network model, and obtaining the feature vector of the picture to be identified through convolution with the convolution kernels of the model; a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set; and a result output step: obtaining, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified, and outputting the identification result.
To achieve the above object, the present application further provides an electronic device including a memory and a processor, the memory storing a luxury goods identification program which, when executed by the processor, implements the following steps: a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures; a model training step: constructing a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library; a first convolution step: inputting the genuine-picture set and the fake-picture set into the convolutional neural network model respectively, and obtaining, through convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the fake-picture set; a second convolution step: when a luxury goods identification request is received from a user, obtaining a picture to be identified from the request, inputting the picture to be identified into the convolutional neural network model, and obtaining the feature vector of the picture to be identified through convolution with the convolution kernels of the model; a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set; and a result output step: obtaining, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified, and outputting the identification result.
In addition, to achieve the above object, the present application further provides a computer-readable storage medium that includes a luxury goods identification program which, when executed by a processor, implements any of the steps of the luxury goods identification method described above.
With the luxury goods identification method, electronic device and storage medium proposed by the present application, a sample picture library is first acquired and a convolutional neural network is constructed and trained on the sample picture library to obtain a convolutional neural network model; the genuine-picture set and the fake-picture set in the sample picture library are then input into the convolutional neural network model to obtain the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set; when a user's luxury goods identification request is received, the picture to be identified is obtained and input into the convolutional neural network model to obtain its feature vector; finally, the feature vector of the picture to be identified is compared with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set, and a genuine/fake identification result is obtained from the comparison. Luxury goods can thus be identified for the user conveniently and accurately at any time, protecting the user's interests.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the operating environment of a preferred embodiment of the electronic device of the present application;
FIG. 2 is a schematic diagram of the interaction between the electronic device and a client in a preferred embodiment of the present application;
FIG. 3 is a flowchart of a preferred embodiment of the luxury goods identification method of the present application;
FIG. 4 is a flowchart of a preferred embodiment of the method for constructing the sample picture library in FIG. 3;
FIG. 5 is a flowchart of a preferred embodiment of the method for training the convolutional neural network model in FIG. 3;
FIG. 6 is a block diagram of the modules of the luxury goods identification program in FIG. 1.
The realization of the objects, the functional features and the advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
The principles and spirit of the present application are described below with reference to several specific embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
Those skilled in the art will know that the embodiments of the present application can be implemented as a method, an apparatus, a device, a system or a computer program product. Therefore, the present application can be embodied in the form of complete hardware, complete software (including firmware, resident software, microcode, etc.), or a combination of hardware and software.
According to the embodiments of the present application, a luxury goods identification method, an electronic device and a storage medium are proposed.
Referring to FIG. 1, it is a schematic diagram of the operating environment of a preferred embodiment of the electronic device 1 of the present application.
The electronic device 1 may be a terminal device with storage and computing capability, such as a server, a portable computer or a desktop computer.
The electronic device 1 includes a memory 11, a processor 12, a network interface 13 and a communication bus 14. The network interface 13 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The communication bus 14 is used to implement connection and communication between the above components.
The memory 11 includes at least one type of readable storage medium. The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, for example a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external storage device of the electronic device 1, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the luxury goods identification program 10 installed on the electronic device 1, as well as the sample picture library of a luxury brand and the database 4 of the convolutional neural network model used for luxury goods identification. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a microprocessor or another data-processing chip, and is used to run the program code stored in the memory 11 or to process data, for example to execute the luxury goods identification program 10.
FIG. 1 shows only the electronic device 1 with the components 11-14 and the luxury goods identification program 10, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface. The user interface may include an input unit such as a keyboard, a voice input device such as a microphone or another device with a voice recognition function, and a voice output device such as a speaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may further include a display, which may also be called a display screen or a display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an organic light-emitting diode (OLED) display, or the like. The display is used to display the information processed in the electronic device 1 and to display a visual user interface.
Optionally, the electronic device 1 further includes a touch sensor. The area provided by the touch sensor for the user to perform touch operations is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor or the like. Moreover, the touch sensor includes not only contact-type touch sensors but may also include proximity-type touch sensors and the like. In addition, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array. The user can start the luxury goods identification program 10 by touching the touch area.
In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, and the device detects touch operations triggered by the user on the basis of the touch display screen.
The electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit and so on, which will not be described in detail here.
Referring to FIG. 2, it is a schematic diagram of the interaction between the electronic device 1 and a client 2 in a preferred embodiment of the present application. The luxury goods identification program 10 runs in the electronic device 1, and the preferred embodiment of the electronic device 1 in FIG. 2 is a server. The electronic device 1 is communicatively connected to the client 2 through a network 3. The client 2 can run on various types of terminal devices, such as smartphones and portable computers. After a user logs in to the electronic device 1 through the client 2, the luxury goods identification program 10 can, by means of the luxury goods identification method, receive the user's luxury goods identification request, identify the luxury item in the picture to be identified that the user has input, and return the identification result to the client 2.
Referring to FIG. 3, it is a flowchart of a preferred embodiment of the luxury goods identification method of the present application. When the processor 12 of the electronic device 1 executes the luxury goods identification program 10 stored in the memory 11, the following steps of the luxury goods identification method are implemented:
Step S1: obtain a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures.
The luxury brand may be, for example, Chanel, Gucci or Hermes. Referring to FIG. 4, the method for constructing the sample picture library includes the following steps:
Step S11: collect a plurality of candidate pictures corresponding to the luxury brand from a plurality of specified sources, determine a reference-information collection rule for each candidate picture according to its collection source, and collect the reference information of each candidate picture using the corresponding reference-information collection rule.
The specified sources may be, for example, the official website of the luxury brand, physical stores of the luxury brand, comprehensive shopping websites, counterfeit-goods markets and the like. The candidate pictures may be collected manually, for example by going to a physical store of the luxury brand or to a counterfeit-goods market and photographing goods corresponding to the luxury brand, or by manually downloading candidate pictures from the brand's official website, comprehensive shopping websites and so on; they may also be acquired automatically by a computer, for example by using web-crawler technology to automatically download pictures from the brand's official website and comprehensive shopping websites.
Different reference-information collection rules are preset for candidate pictures from different collection sources. The reference information is information that can serve as a reference for deciding whether a candidate picture is a genuine picture or a fake picture. A genuine picture is a candidate picture in which the luxury item is recognized as genuine, and a fake picture is a candidate picture in which the luxury item is recognized as a counterfeit.
For example, if the collection source is a physical store or the official website of the luxury brand, the reference-information collection rule may be to collect the anti-counterfeiting mark, the website or the store-name information in the candidate picture; with this rule, when the anti-counterfeiting mark, website or store-name information is collected as reference information, the corresponding candidate picture can be determined to be a genuine picture.
If the collection source is a comprehensive shopping website, the reference-information collection rule may be to collect the selling price of the corresponding item on the shopping website, the buyers' reviews, the name of the seller's listing and so on; with this rule, when the selling price, buyers' reviews and listing name have been collected, whether the candidate picture is a genuine picture or a fake picture is decided according to them. For example, when the difference between the selling price and the official price of the corresponding luxury item is greater than a preset threshold, the candidate picture is determined to be a fake picture; when the proportion of buyers' reviews stating that the item is fake reaches a preset ratio, the candidate picture is determined to be a fake picture; and when the seller's listing name contains wording such as "高仿" ("high-quality imitation"), the candidate picture is determined to be a fake picture.
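As an illustration only, rules of this kind could be applied in code roughly as in the following sketch; the field names, thresholds and the label_candidate helper are hypothetical assumptions for illustration, not details taken from the disclosure.

```python
# Hypothetical sketch of the reference-information labelling rules described above.
# Field names, thresholds and the helper itself are illustrative assumptions.

PRICE_DIFF_THRESHOLD = 0.3   # assumed allowed relative gap to the official price
FAKE_REVIEW_RATIO = 0.2      # assumed share of "fake" reviews that triggers a fake label

def label_candidate(source, info, official_price):
    """Return 'genuine', 'fake' or None (discard) for one candidate picture."""
    if source in ("official_site", "brand_store"):
        # An anti-counterfeiting mark, official URL or store name counts as genuine evidence.
        if info.get("anti_counterfeit_mark") or info.get("official_url") or info.get("store_name"):
            return "genuine"
    elif source == "shopping_site":
        price = info.get("price")
        if price is not None and abs(price - official_price) / official_price > PRICE_DIFF_THRESHOLD:
            return "fake"
        reviews = info.get("reviews", [])
        if reviews and sum("fake" in r.lower() for r in reviews) / len(reviews) >= FAKE_REVIEW_RATIO:
            return "fake"
        if "replica" in info.get("listing_title", "").lower():
            return "fake"
    return None  # reference information insufficient: discard the candidate
```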
Step S12: determine, according to the reference information of each candidate picture, whether the candidate picture is a genuine picture or a fake picture, and label the candidate picture according to the result. Of course, if it cannot be determined from the reference information of a candidate picture whether it is a genuine picture or a fake picture, that candidate picture is discarded.
Step S13: preprocess each candidate picture, and extract feature information of preset features from each preprocessed candidate picture, the preprocessing including target-object extraction, size normalization and color-space normalization. Target-object extraction means detecting the contour of the luxury item in the candidate picture using contour-detection technology and, according to the detected contour and a cropping ratio, cropping the luxury item out of the candidate picture as the target object. Size normalization means converting the size of the candidate picture, after the target object has been cropped out, to a preset size. Color-space normalization means converting the color space of the cropped candidate pictures uniformly into the same color space. Size normalization and color-space normalization give the candidate pictures relatively uniform characteristics, which facilitates subsequent comparison and processing. The preset features include, for example, the grayscale and brightness of the candidate picture, the pattern texture of the luxury item in the picture, the overall size proportions, and the size proportions and position information of metal parts.
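A minimal sketch of the preprocessing in step S13 is given below, assuming OpenCV and NumPy are available; the Canny thresholds, crop margin, target size and the choice of RGB as the common color space are illustrative assumptions rather than values fixed by the disclosure.

```python
import cv2
import numpy as np

TARGET_SIZE = (128, 128)   # assumed preset size; the disclosure does not fix a value
CROP_MARGIN = 0.05         # assumed cropping ratio around the detected contour

def preprocess(image_bgr):
    """Target-object extraction, size normalization and color-space normalization."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Take the largest contour as the luxury item and crop it with a small margin.
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        dx, dy = int(w * CROP_MARGIN), int(h * CROP_MARGIN)
        image_bgr = image_bgr[max(0, y - dy):y + h + dy, max(0, x - dx):x + w + dx]
    resized = cv2.resize(image_bgr, TARGET_SIZE)        # size normalization
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)      # convert to one common color space
    return rgb.astype(np.float32) / 255.0               # scale pixel values to [0, 1]
```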
Step S14: integrate each preprocessed candidate picture, its label indicating whether it is a genuine picture or a fake picture, and the extracted feature information, to form the sample picture library. According to the labels, the pictures in the sample picture library can be divided into the genuine-picture set and the fake-picture set.
Step S2: construct a convolutional neural network and train the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library. Referring to FIG. 5, the method for training the convolutional neural network model includes the following steps:
Step S21: use each convolution kernel in the constructed convolutional neural network to convolve each picture in the sample picture library in turn, obtaining, for each convolution kernel, a feature vector for each picture. Using each convolution kernel to convolve each picture in the sample picture library in turn means using each convolution kernel to convolve, in turn, the feature information of each picture in the sample picture library.
Step S22: for each convolution kernel, compute the information entropy of the feature vectors of all pictures in the sample picture library corresponding to that kernel, thereby obtaining the information entropy corresponding to the kernel. The larger the information entropy, the more information the corresponding convolution kernel carries, and the more important that kernel is.
Step S23: set the weight value of each convolution kernel according to its information entropy, and generate the convolutional neural network model corresponding to the sample picture library from the convolution kernels and their weight values. For example, the larger the weight value of a convolution kernel, the greater the number of convolution iterations the model performs with that kernel; alternatively, several convolution kernels with smaller weight values are pruned and the convolutional neural network model is built from the remaining kernels. Setting the weight of each kernel according to its information entropy, that is, according to its importance, and generating the model accordingly means that models with different emphases can be generated for different luxury brands, so that feature-vector extraction is more targeted and more discriminative for each brand.
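The following sketch shows one way the per-kernel information entropy of steps S21 to S23 could be estimated and turned into kernel weights; the histogram-based entropy estimate and the normalization into weights are assumptions made for illustration, not formulas prescribed by the disclosure.

```python
import numpy as np

def kernel_entropy(feature_vectors, bins=32):
    """Estimate the information entropy of one kernel's feature vectors over all sample pictures."""
    values = np.concatenate([np.ravel(v) for v in feature_vectors])
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def kernel_weights(per_kernel_features):
    """Map each kernel's entropy to a weight: higher entropy, more information, larger weight."""
    entropies = np.array([kernel_entropy(f) for f in per_kernel_features])
    return entropies / entropies.sum()

# Kernels whose weight falls below a chosen threshold could then be pruned, and the
# convolutional neural network model rebuilt from the remaining kernels, as the text suggests.
```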
The convolutional neural network model may adopt the VGG-16 architecture and may include, for example, five convolution-pooling blocks, two fully connected layers and one classification layer. The table below gives an example of the model's parameter table. In the table, the numbers of filters in the five convolution-pooling blocks (MaxPool) are 64, 128, 256, 512 and 512, respectively, and the classification layer (Softmax) defines the number of classification results as 2.
Layer Name | Num_of_output | Kernel Size | Stride Size | Pad Size
Input | 128 | N/A | N/A | N/A
Conv1_1 | 64 | 3 | 1 | 1
Conv1_2 | 64 | 3 | 1 | 1
MaxPool1 | 64 | 2 | 2 | 0
Conv2_1 | 128 | 3 | 1 | 1
Conv2_2 | 128 | 3 | 1 | 1
MaxPool2 | 128 | 2 | 2 | 0
Conv3_1 | 256 | 3 | 1 | 1
Conv3_2 | 256 | 3 | 1 | 1
Conv3_3 | 256 | 3 | 1 | 1
MaxPool3 | 256 | 2 | 2 | 0
Conv4_1 | 512 | 3 | 1 | 1
Conv4_2 | 512 | 3 | 1 | 1
Conv4_3 | 512 | 3 | 1 | 1
MaxPool4 | 512 | 2 | 2 | 0
Conv5_1 | 512 | 3 | 1 | 1
Conv5_2 | 512 | 3 | 1 | 1
Conv5_3 | 512 | 3 | 1 | 1
MaxPool5 | 512 | 2 | 2 | 0
Fc1 | 4096 | 1 | 1 | 0
Fc2 | 2 | 1 | 1 | 0
Softmax | 2 | N/A | N/A | N/A
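Purely as an illustration, the parameter table above corresponds roughly to the following PyTorch-style sketch; PyTorch itself is an assumption (no framework is named in the disclosure), the Fc rows are read as fully connected layers, and the 128x128 input resolution and the flattened feature size fed to Fc1 are inferred from the table.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, num_convs):
    """A VGG-style block: num_convs 3x3 convolutions (stride 1, pad 1) then 2x2 max pooling."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, stride=1, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Five convolution-pooling blocks (64, 128, 256, 512, 512 filters), two fully connected
# layers and a softmax classification layer with two outputs (genuine / fake), per the table.
model = nn.Sequential(
    conv_block(3, 64, 2),
    conv_block(64, 128, 2),
    conv_block(128, 256, 3),
    conv_block(256, 512, 3),
    conv_block(512, 512, 3),
    nn.Flatten(),
    nn.Linear(512 * 4 * 4, 4096),   # a 128x128 input shrinks to 4x4 after five poolings
    nn.ReLU(inplace=True),
    nn.Linear(4096, 2),
    nn.Softmax(dim=1),
)
```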
Step S3: input the genuine-picture set and the fake-picture set into the convolutional neural network model respectively, and obtain, through convolution with the convolution kernels of the model, the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set. Specifically, the convolution kernels of the convolutional neural network model convolve each genuine picture in the genuine-picture set in turn, and the feature vectors obtained by convolving all the genuine pictures are merged into a set, which constitutes the feature vector set corresponding to the genuine-picture set. Likewise, the convolution kernels of the model convolve each fake picture in the fake-picture set in turn, and the feature vectors obtained by convolving all the fake pictures are merged into a set, which constitutes the feature vector set corresponding to the fake-picture set.
Step S4: when a luxury goods identification request is received from a user, obtain the picture to be identified from the request, input it into the convolutional neural network model, and obtain the feature vector of the picture to be identified through convolution with the convolution kernels of the model. The picture to be identified is a picture of the luxury item whose authenticity is in question, for example a photograph taken by the user of the item, or a photograph of the item obtained by the user from a third party. The user can log in to the luxury goods identification program 10 through the client 2 and upload the picture to be identified, thereby sending the luxury goods identification request to the program 10. Likewise, before the picture to be identified is input into the convolutional neural network model, it also needs to be preprocessed, the preprocessing including target-object extraction, size normalization, color-space normalization and so on.
Step S5: compare the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set. Specifically, step S5 includes the following steps:
compute the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set, obtaining a first similarity value, and compute the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the fake-picture set, obtaining a second similarity value;
compare the magnitudes of the first similarity value and the second similarity value to obtain the comparison result.
Step S6: obtain, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified, and output the identification result. Obtaining the identification result according to the comparison result includes the following:
if the comparison result is that the first similarity value is greater than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the genuine-picture set, the luxury item in the picture to be identified is determined to be genuine; if the comparison result is that the first similarity value is smaller than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the fake-picture set, the luxury item in the picture to be identified is determined to be fake.
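A minimal sketch of the comparison and decision in steps S5 and S6 is shown below; averaging the cosine similarity over each feature vector set is one plausible reading of "the cosine similarity between the feature vector and the feature vector set" and is an assumption, not the only possible implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(query_vec, genuine_vecs, fake_vecs):
    """Return 'genuine' or 'fake' for the picture to be identified."""
    s1 = np.mean([cosine_similarity(query_vec, v) for v in genuine_vecs])  # first similarity value
    s2 = np.mean([cosine_similarity(query_vec, v) for v in fake_vecs])     # second similarity value
    return "genuine" if s1 > s2 else "fake"
```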
In addition, after the identification result for the picture to be identified has been obtained, the convolutional neural network model may be updated according to the identification result, thereby supplementing and refining the model and making it more accurate. Specifically, the update step includes:
merging the picture to be identified into the genuine-picture set or the fake-picture set according to the identification result, to generate an updated sample picture library;
constructing a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model. For the method of training the updated sample picture library with the convolutional neural network, reference may be made to steps S21 to S23, which will not be repeated here.
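Under the assumption that the two picture sets are kept as simple in-memory lists, the update step could be sketched as follows; the function name and signature are hypothetical.

```python
def update_sample_library(result, picture, genuine_set, fake_set):
    """Merge the newly identified picture into the matching set, then retrain the model."""
    (genuine_set if result == "genuine" else fake_set).append(picture)
    # Retraining can then repeat steps S21-S23 (per-kernel convolution, information
    # entropy, kernel weighting) on the updated genuine_set + fake_set.
    return genuine_set, fake_set
```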
According to the luxury goods identification method provided by this embodiment, a sample picture library is first acquired, a convolutional neural network is constructed and trained on the sample picture library to obtain a convolutional neural network model, the genuine-picture set and the fake-picture set in the sample picture library are then input into the model to obtain the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set, the picture to be identified is obtained and input into the model to obtain its feature vector when a user's luxury goods identification request is received, and finally the feature vector of the picture to be identified is compared with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set and a genuine/fake identification result is obtained from the comparison. Luxury goods can thus be identified for the user conveniently and accurately at any time, protecting the user's interests.
Referring to FIG. 6, it is a block diagram of the modules of the luxury goods identification program 10 in FIG. 1. In this embodiment, the luxury goods identification program 10 is divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to carry out the present application. A module referred to in the present application is a series of computer program instruction segments capable of performing a particular function.
The luxury goods identification program 10 can be divided into a sample acquisition module 110, a model training module 120, a first convolution module 130, a second convolution module 140, a feature comparison module 150 and a result output module 160.
The sample acquisition module 110 is used to obtain a sample picture library corresponding to a luxury brand, the sample picture library including a genuine-picture set composed of a plurality of genuine pictures and a fake-picture set composed of a plurality of fake pictures. The sample acquisition module 110 is also used to construct the sample picture library first, which specifically includes the following:
the sample acquisition module 110 collects a plurality of candidate pictures corresponding to the luxury brand from a plurality of specified sources, determines a reference-information collection rule for each candidate picture according to its collection source, and collects the reference information of each candidate picture using the corresponding rule. The specified sources may be, for example, the official website of the luxury brand, physical stores of the luxury brand, comprehensive shopping websites, counterfeit-goods markets and the like. The candidate pictures may be collected manually or acquired automatically by a computer over the Internet;
the sample acquisition module 110 determines, according to the reference information of each candidate picture, whether the candidate picture is a genuine picture or a fake picture, and labels the candidate picture according to the result;
the sample acquisition module 110 preprocesses each candidate picture and extracts feature information of preset features from each preprocessed candidate picture, the preprocessing including target-object extraction, size normalization and color-space normalization. Target-object extraction means detecting the contour of the luxury item in the candidate picture using contour-detection technology and, according to the detected contour and a cropping ratio, cropping the luxury item out of the candidate picture as the target object. Size normalization means converting the size of the cropped candidate picture to a preset size. Color-space normalization means converting the color space of the cropped candidate pictures uniformly into the same color space;
the sample acquisition module 110 integrates each preprocessed candidate picture, its label indicating whether it is a genuine picture or a fake picture, and the extracted feature information, to form the sample picture library. According to the labels, the pictures in the sample picture library can be divided into the genuine-picture set and the fake-picture set.
The model training module 120 is used to construct a convolutional neural network and train the sample picture library with it to obtain the convolutional neural network model corresponding to the sample picture library. Specifically, the training of the convolutional neural network model includes the following:
the model training module 120 uses each convolution kernel in the constructed convolutional neural network to convolve each picture in the sample picture library in turn, obtaining, for each convolution kernel, a feature vector for each picture;
for each convolution kernel, the model training module 120 computes the information entropy of the feature vectors of all pictures in the sample picture library corresponding to that kernel, thereby obtaining the information entropy corresponding to the kernel. The larger the information entropy, the more information the corresponding convolution kernel carries, and the more important that kernel is;
the model training module 120 sets the weight value of each convolution kernel according to its information entropy, and generates the convolutional neural network model corresponding to the sample picture library from the convolution kernels and their weight values. For example, the larger the weight value of a convolution kernel, the greater the number of convolution iterations the model performs with that kernel; alternatively, several convolution kernels with smaller weight values are pruned and the model is built from the remaining kernels.
The convolutional neural network model may adopt the VGG-16 architecture and may include, for example, five convolution-pooling blocks, two fully connected layers and one classification layer. The convolution-pooling blocks are used to extract and retain features, the fully connected layers are used to map the extracted features onto a feature vector, and the classification layer is used to define the number of final classification results.
The first convolution module 130 is used to input the genuine-picture set and the fake-picture set into the convolutional neural network model respectively and obtain, through convolution with the convolution kernels of the model, the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the fake-picture set. Specifically, the convolution kernels of the model convolve each genuine picture in the genuine-picture set in turn, and the feature vectors obtained by convolving all the genuine pictures are merged into a set, which constitutes the feature vector set corresponding to the genuine-picture set; likewise, the convolution kernels of the model convolve each fake picture in the fake-picture set in turn, and the feature vectors obtained by convolving all the fake pictures are merged into a set, which constitutes the feature vector set corresponding to the fake-picture set.
The second convolution module 140 is used to obtain, when a luxury goods identification request is received from a user, the picture to be identified from the request, input it into the convolutional neural network model, and obtain the feature vector of the picture to be identified through convolution with the convolution kernels of the model. The second convolution module 140 is also used to preprocess the picture to be identified first, the preprocessing including target-object extraction, size normalization, color-space normalization and so on.
The feature comparison module 150 is used to compare the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the fake-picture set. Specifically, the comparison includes the following:
the feature comparison module 150 computes the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set, obtaining a first similarity value, and computes the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the fake-picture set, obtaining a second similarity value;
the feature comparison module 150 compares the magnitudes of the first similarity value and the second similarity value to obtain the comparison result.
The result output module 160 is used to obtain, according to the comparison result, a genuine/fake identification result for the luxury item in the picture to be identified and to output the identification result. Specifically, obtaining the identification result according to the comparison result includes the following: if the comparison result is that the first similarity value is greater than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the genuine-picture set, the result output module 160 determines that the luxury item in the picture to be identified is genuine; if the comparison result is that the first similarity value is smaller than the second similarity value, which indicates that the picture to be identified is more similar to the pictures in the fake-picture set, the result output module 160 determines that the luxury item in the picture to be identified is fake.
In addition, the model training module 120 is also used to update the convolutional neural network model. After the identification result for the picture to be identified has been obtained, the model training module 120 may update the convolutional neural network model according to the identification result, thereby supplementing and refining the model and making it more accurate. Specifically, the update includes the following:
the sample acquisition module 110 merges the picture to be identified into the genuine-picture set or the fake-picture set according to the identification result, generating an updated sample picture library;
the model training module 120 constructs a convolutional neural network and trains the updated sample picture library with it to obtain an updated convolutional neural network model. For the method of training the updated sample picture library with the convolutional neural network, reference may be made to the descriptions of the sample acquisition module 110 and the model training module 120, which will not be repeated here.
In the schematic diagram of the operating environment of the preferred embodiment of the electronic device 1 shown in FIG. 1, the memory 11, which contains a readable storage medium, may store an operating system, the luxury goods identification program 10 and a database 4. When the processor 12 executes the program 10 stored in the memory 11, the following steps are implemented:
a sample acquisition step: obtaining the sample picture library corresponding to a luxury brand, the library comprising a genuine-picture set made up of a plurality of genuine pictures and a counterfeit-picture set made up of a plurality of counterfeit pictures;
a model training step: building a convolutional neural network and training it on the sample picture library to obtain the convolutional neural network model corresponding to the library;
a first convolution step: inputting the genuine-picture set and the counterfeit-picture set into the convolutional neural network model and obtaining, by convolution with the model's kernels, the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the counterfeit-picture set;
a second convolution step: when a luxury goods identification request from a user is received, obtaining the picture to be identified from the request, inputting it into the convolutional neural network model and obtaining its feature vector by convolution with the model's kernels;
a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the counterfeit-picture set;
a result output step: deriving, from the comparison result, the genuine-or-counterfeit identification result for the luxury item in the picture and outputting the identification result.
The sample picture library is built by:
collecting a plurality of candidate pictures of the luxury brand from a plurality of designated sources, determining a reference information collection rule for each candidate picture according to its collection source, and using that rule to collect the reference information of each candidate picture;
judging, from each candidate picture's reference information, whether the candidate picture is a genuine picture or a counterfeit picture, and labeling it according to the judgment;
preprocessing each candidate picture and extracting feature information of preset features from each preprocessed candidate picture, the preprocessing including target object extraction, size normalization and color space normalization;
combining each preprocessed candidate picture, its label indicating a genuine picture or a counterfeit picture, and the extracted feature information into the sample picture library.
The model training step includes:
convolving every picture in the sample picture library with every convolution kernel of the constructed network to obtain, for each kernel, a feature vector for each picture;
for each convolution kernel, computing the information entropy of that kernel's feature vectors over all pictures in the library to obtain the information entropy corresponding to the kernel;
setting a weight value for each convolution kernel according to its information entropy and generating, from the kernels and their weight values, the convolutional neural network model corresponding to the sample picture library.
The convolutional neural network model comprises, for example, five convolution-pooling layers, two fully connected layers and one classification layer.
The feature comparison step includes:
computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set to obtain a first similarity value;
computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the counterfeit-picture set to obtain a second similarity value;
comparing the magnitudes of the first similarity value and the second similarity value to obtain the comparison result.
Deriving the genuine-or-counterfeit identification result from the comparison result includes:
if the comparison result is that the first similarity value is greater than the second similarity value, judging the luxury item in the picture to be identified to be genuine;
if the comparison result is that the first similarity value is smaller than the second similarity value, judging the luxury item in the picture to be identified to be counterfeit.
The result output step is followed by a model update step:
merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
building a convolutional neural network and training it on the updated sample picture library to obtain an updated convolutional neural network model.
For the specific principles, refer to the description of the module diagram of the luxury goods identification program 10 in FIG. 6 above and to the flowchart of the preferred embodiment of the luxury goods identification method in FIG. 3.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory and the like. The computer-readable storage medium stores pictures of luxury goods, a database of identification models, the luxury goods identification program 10 and the like; when executed by the processor 12, the program 10 implements the following operations:
a sample acquisition step: obtaining the sample picture library corresponding to a luxury brand, the library comprising a genuine-picture set made up of a plurality of genuine pictures and a counterfeit-picture set made up of a plurality of counterfeit pictures;
a model training step: building a convolutional neural network and training it on the sample picture library to obtain the convolutional neural network model corresponding to the library;
a first convolution step: inputting the genuine-picture set and the counterfeit-picture set into the convolutional neural network model and obtaining, by convolution with the model's kernels, the feature vector set corresponding to the genuine-picture set and the feature vector set corresponding to the counterfeit-picture set;
a second convolution step: when a luxury goods identification request from a user is received, obtaining the picture to be identified from the request, inputting it into the convolutional neural network model and obtaining its feature vector by convolution with the model's kernels;
a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the counterfeit-picture set;
a result output step: deriving, from the comparison result, the genuine-or-counterfeit identification result for the luxury item in the picture and outputting the identification result.
The sample picture library is built by:
collecting a plurality of candidate pictures of the luxury brand from a plurality of designated sources, determining a reference information collection rule for each candidate picture according to its collection source, and using that rule to collect the reference information of each candidate picture;
judging, from each candidate picture's reference information, whether the candidate picture is a genuine picture or a counterfeit picture, and labeling it according to the judgment;
preprocessing each candidate picture and extracting feature information of preset features from each preprocessed candidate picture, the preprocessing including target object extraction, size normalization and color space normalization;
combining each preprocessed candidate picture, its label indicating a genuine picture or a counterfeit picture, and the extracted feature information into the sample picture library.
The model training step includes:
convolving every picture in the sample picture library with every convolution kernel of the constructed network to obtain, for each kernel, a feature vector for each picture;
for each convolution kernel, computing the information entropy of that kernel's feature vectors over all pictures in the library to obtain the information entropy corresponding to the kernel;
setting a weight value for each convolution kernel according to its information entropy and generating, from the kernels and their weight values, the convolutional neural network model corresponding to the sample picture library.
The convolutional neural network model comprises, for example, five convolution-pooling layers, two fully connected layers and one classification layer.
The feature comparison step includes:
computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set to obtain a first similarity value;
computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the counterfeit-picture set to obtain a second similarity value;
comparing the magnitudes of the first similarity value and the second similarity value to obtain the comparison result.
Deriving the genuine-or-counterfeit identification result from the comparison result includes:
if the comparison result is that the first similarity value is greater than the second similarity value, judging the luxury item in the picture to be identified to be genuine;
if the comparison result is that the first similarity value is smaller than the second similarity value, judging the luxury item in the picture to be identified to be counterfeit.
The result output step is followed by a model update step:
merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
building a convolutional neural network and training it on the updated sample picture library to obtain an updated convolutional neural network model.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as that of the luxury goods identification method and of the electronic device 1 described above and is not repeated here.
It should be noted that, as used herein, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, apparatus, article or method that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, apparatus, article or method that includes that element.
From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware alone, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to execute the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. A luxury goods identification method, characterized in that the method comprises the following steps:
    a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library comprising a genuine-picture set made up of a plurality of genuine pictures and a counterfeit-picture set made up of a plurality of counterfeit pictures;
    a model training step: building a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library;
    a first convolution step: inputting the genuine-picture set and the counterfeit-picture set into the convolutional neural network model and obtaining, by convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the counterfeit-picture set;
    a second convolution step: when a luxury goods identification request from a user is received, obtaining a picture to be identified from the identification request, inputting the picture into the convolutional neural network model and obtaining the feature vector of the picture by convolution with the convolution kernels of the model;
    a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the counterfeit-picture set;
    a result output step: deriving, from the comparison result, a genuine-or-counterfeit identification result for the luxury item in the picture to be identified and outputting the identification result.
  2. The luxury goods identification method according to claim 1, characterized in that the sample picture library is built by:
    collecting a plurality of candidate pictures of the luxury brand from a plurality of designated sources, determining a reference information collection rule for each candidate picture according to its collection source, and using the rule to collect the reference information of each candidate picture;
    judging, from each candidate picture's reference information, whether the candidate picture is a genuine picture or a counterfeit picture, and labeling it according to the judgment;
    preprocessing each candidate picture and extracting feature information of preset features from each preprocessed candidate picture, the preprocessing including target object extraction, size normalization and color space normalization;
    combining each preprocessed candidate picture, its label indicating a genuine picture or a counterfeit picture, and the extracted feature information into the sample picture library.
  3. The luxury goods identification method according to claim 2, characterized in that the model training step comprises:
    convolving every picture in the sample picture library with every convolution kernel of the constructed convolutional neural network to obtain, for each kernel, a feature vector for each picture;
    for each convolution kernel, computing the information entropy of that kernel's feature vectors over all pictures in the sample picture library to obtain the information entropy corresponding to the kernel;
    setting a weight value for each convolution kernel according to its information entropy and generating, from the kernels and their weight values, the convolutional neural network model corresponding to the sample picture library.
  4. The luxury goods identification method according to claim 1, characterized in that the convolutional neural network model comprises five convolution-pooling layers, two fully connected layers and one classification layer.
  5. The luxury goods identification method according to claim 1, characterized in that the feature comparison step comprises:
    computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set to obtain a first similarity value;
    computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the counterfeit-picture set to obtain a second similarity value;
    comparing the magnitudes of the first similarity value and the second similarity value to obtain the comparison result;
    and in that deriving the genuine-or-counterfeit identification result from the comparison result comprises:
    if the comparison result is that the first similarity value is greater than the second similarity value, judging the luxury item in the picture to be identified to be genuine;
    if the comparison result is that the first similarity value is smaller than the second similarity value, judging the luxury item in the picture to be identified to be counterfeit.
  6. The luxury goods identification method according to claim 1, characterized in that the result output step is followed by a model update step:
    merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
    building a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model.
  7. The luxury goods identification method according to any one of claims 2 to 5, characterized in that the result output step is followed by a model update step:
    merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
    building a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model.
  8. An electronic device comprising a memory and a processor, characterized in that the memory stores a luxury goods identification program which, when executed by the processor, implements the following steps:
    a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library comprising a genuine-picture set made up of a plurality of genuine pictures and a counterfeit-picture set made up of a plurality of counterfeit pictures;
    a model training step: building a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library;
    a first convolution step: inputting the genuine-picture set and the counterfeit-picture set into the convolutional neural network model and obtaining, by convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the counterfeit-picture set;
    a second convolution step: when a luxury goods identification request from a user is received, obtaining a picture to be identified from the identification request, inputting the picture into the convolutional neural network model and obtaining the feature vector of the picture by convolution with the convolution kernels of the model;
    a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the counterfeit-picture set;
    a result output step: deriving, from the comparison result, a genuine-or-counterfeit identification result for the luxury item in the picture to be identified and outputting the identification result.
  9. The electronic device according to claim 8, characterized in that the sample picture library is built by:
    collecting a plurality of candidate pictures of the luxury brand from a plurality of designated sources, determining a reference information collection rule for each candidate picture according to its collection source, and using the rule to collect the reference information of each candidate picture;
    judging, from each candidate picture's reference information, whether the candidate picture is a genuine picture or a counterfeit picture, and labeling it according to the judgment;
    preprocessing each candidate picture and extracting feature information of preset features from each preprocessed candidate picture, the preprocessing including target object extraction, size normalization and color space normalization;
    combining each preprocessed candidate picture, its label indicating a genuine picture or a counterfeit picture, and the extracted feature information into the sample picture library.
  10. The electronic device according to claim 9, characterized in that the model training step comprises:
    convolving every picture in the sample picture library with every convolution kernel of the constructed convolutional neural network to obtain, for each kernel, a feature vector for each picture;
    for each convolution kernel, computing the information entropy of that kernel's feature vectors over all pictures in the sample picture library to obtain the information entropy corresponding to the kernel;
    setting a weight value for each convolution kernel according to its information entropy and generating, from the kernels and their weight values, the convolutional neural network model corresponding to the sample picture library.
  11. The electronic device according to claim 8, characterized in that the convolutional neural network model comprises five convolution-pooling layers, two fully connected layers and one classification layer.
  12. The electronic device according to claim 8, characterized in that the feature comparison step comprises:
    computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set to obtain a first similarity value;
    computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the counterfeit-picture set to obtain a second similarity value;
    comparing the magnitudes of the first similarity value and the second similarity value to obtain the comparison result;
    and in that deriving the genuine-or-counterfeit identification result from the comparison result comprises:
    if the comparison result is that the first similarity value is greater than the second similarity value, judging the luxury item in the picture to be identified to be genuine;
    if the comparison result is that the first similarity value is smaller than the second similarity value, judging the luxury item in the picture to be identified to be counterfeit.
  13. The electronic device according to claim 8, characterized in that the result output step is followed by a model update step:
    merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
    building a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model.
  14. The electronic device according to any one of claims 9 to 12, characterized in that the result output step is followed by a model update step:
    merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
    building a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model.
  15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a luxury goods identification program which, when executed by a processor, implements the following steps:
    a sample acquisition step: obtaining a sample picture library corresponding to a luxury brand, the sample picture library comprising a genuine-picture set made up of a plurality of genuine pictures and a counterfeit-picture set made up of a plurality of counterfeit pictures;
    a model training step: building a convolutional neural network and training the sample picture library with the convolutional neural network to obtain a convolutional neural network model corresponding to the sample picture library;
    a first convolution step: inputting the genuine-picture set and the counterfeit-picture set into the convolutional neural network model and obtaining, by convolution with the convolution kernels of the model, a feature vector set corresponding to the genuine-picture set and a feature vector set corresponding to the counterfeit-picture set;
    a second convolution step: when a luxury goods identification request from a user is received, obtaining a picture to be identified from the identification request, inputting the picture into the convolutional neural network model and obtaining the feature vector of the picture by convolution with the convolution kernels of the model;
    a feature comparison step: comparing the feature vector of the picture to be identified with the feature vector set corresponding to the genuine-picture set and with the feature vector set corresponding to the counterfeit-picture set;
    a result output step: deriving, from the comparison result, a genuine-or-counterfeit identification result for the luxury item in the picture to be identified and outputting the identification result.
  16. The computer-readable storage medium according to claim 15, characterized in that the sample picture library is built by:
    collecting a plurality of candidate pictures of the luxury brand from a plurality of designated sources, determining a reference information collection rule for each candidate picture according to its collection source, and using the rule to collect the reference information of each candidate picture;
    judging, from each candidate picture's reference information, whether the candidate picture is a genuine picture or a counterfeit picture, and labeling it according to the judgment;
    preprocessing each candidate picture and extracting feature information of preset features from each preprocessed candidate picture, the preprocessing including target object extraction, size normalization and color space normalization;
    combining each preprocessed candidate picture, its label indicating a genuine picture or a counterfeit picture, and the extracted feature information into the sample picture library.
  17. The computer-readable storage medium according to claim 16, characterized in that the model training step comprises:
    convolving every picture in the sample picture library with every convolution kernel of the constructed convolutional neural network to obtain, for each kernel, a feature vector for each picture;
    for each convolution kernel, computing the information entropy of that kernel's feature vectors over all pictures in the sample picture library to obtain the information entropy corresponding to the kernel;
    setting a weight value for each convolution kernel according to its information entropy and generating, from the kernels and their weight values, the convolutional neural network model corresponding to the sample picture library.
  18. The computer-readable storage medium according to claim 15, characterized in that the convolutional neural network model comprises five convolution-pooling layers, two fully connected layers and one classification layer.
  19. The computer-readable storage medium according to claim 15, characterized in that the feature comparison step comprises:
    computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the genuine-picture set to obtain a first similarity value;
    computing the cosine similarity between the feature vector of the picture to be identified and the feature vector set corresponding to the counterfeit-picture set to obtain a second similarity value;
    comparing the magnitudes of the first similarity value and the second similarity value to obtain the comparison result;
    and in that deriving the genuine-or-counterfeit identification result from the comparison result comprises:
    if the comparison result is that the first similarity value is greater than the second similarity value, judging the luxury item in the picture to be identified to be genuine;
    if the comparison result is that the first similarity value is smaller than the second similarity value, judging the luxury item in the picture to be identified to be counterfeit.
  20. The computer-readable storage medium according to claim 15, characterized in that the result output step is followed by a model update step:
    merging the picture to be identified into the genuine-picture set or the counterfeit-picture set according to the identification result, generating an updated sample picture library;
    building a convolutional neural network and training the updated sample picture library with it to obtain an updated convolutional neural network model.
PCT/CN2018/089880 2018-02-01 2018-06-05 Luxury goods identification method, electronic device and storage medium WO2019148729A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810103409.4 2018-02-01
CN201810103409.4A CN108520196B (zh) 2018-02-01 2018-02-01 Luxury goods identification method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2019148729A1 true WO2019148729A1 (zh) 2019-08-08

Family

ID=63432763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/089880 WO2019148729A1 (zh) 2018-02-01 2018-06-05 奢侈品辨别方法、电子装置及存储介质

Country Status (2)

Country Link
CN (1) CN108520196B (zh)
WO (1) WO2019148729A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705567A (zh) * 2019-09-16 2020-01-17 上海电机学院 Machine-learning-based sneaker authentication system and method
CN112949488A (zh) * 2021-03-01 2021-06-11 北京京东振世信息技术有限公司 Picture information processing method and apparatus, computer storage medium, and electronic device
CN113592515A (zh) * 2021-08-03 2021-11-02 北京沃东天骏信息技术有限公司 Method, system and apparatus for identifying the authenticity of an article
US20220051040A1 (en) * 2020-08-17 2022-02-17 CERTILOGO S.p.A Automatic method to determine the authenticity of a product

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112445992B (zh) * 2019-09-03 2024-02-20 阿里巴巴集团控股有限公司 Information processing method and apparatus
CN111597252B (zh) * 2020-02-23 2021-02-26 广西数字创新科技有限公司 Distributed big data network service platform
CN112307115B (zh) * 2020-02-23 2021-06-25 北京领先未来智慧科技有限公司 Distributed big data network service method
CN111582359B (zh) * 2020-04-28 2023-04-07 新疆维吾尔自治区烟草公司 Image recognition method and apparatus, electronic device and medium
CN112257768B (zh) * 2020-10-19 2023-01-31 广州金融科技股份有限公司 Method and apparatus for identifying illegal financial pictures, and computer storage medium
CN112906671B (zh) * 2021-04-08 2024-03-15 平安科技(深圳)有限公司 Method and apparatus for identifying fraudulent pictures in in-person review, electronic device and storage medium
CN117392684B (zh) * 2023-11-02 2024-05-14 北京邮电大学 Luxury goods authentication model training method, luxury goods authentication method and apparatus therefor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548182A (zh) * 2016-11-02 2017-03-29 武汉理工大学 Pavement crack detection method and apparatus based on deep learning and principal cause analysis
CN107463962A (zh) * 2017-08-08 2017-12-12 张天君 Method and system for authenticating leather bags with microscopic artificial intelligence
US9858496B2 (en) * 2016-01-20 2018-01-02 Microsoft Technology Licensing, Llc Object detection and classification in images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116755B (zh) * 2013-01-27 2016-01-06 深圳市书圣艺术品防伪鉴定有限公司 Automatic detection system and method for the authenticity of calligraphy and painting works

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9858496B2 (en) * 2016-01-20 2018-01-02 Microsoft Technology Licensing, Llc Object detection and classification in images
CN106548182A (zh) * 2016-11-02 2017-03-29 武汉理工大学 Pavement crack detection method and apparatus based on deep learning and principal cause analysis
CN107463962A (zh) * 2017-08-08 2017-12-12 张天君 Method and system for authenticating leather bags with microscopic artificial intelligence

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705567A (zh) * 2019-09-16 2020-01-17 上海电机学院 Machine-learning-based sneaker authentication system and method
US20220051040A1 (en) * 2020-08-17 2022-02-17 CERTILOGO S.p.A Automatic method to determine the authenticity of a product
CN112949488A (zh) * 2021-03-01 2021-06-11 北京京东振世信息技术有限公司 Picture information processing method and apparatus, computer storage medium, and electronic device
CN112949488B (zh) * 2021-03-01 2023-09-01 北京京东振世信息技术有限公司 Picture information processing method and apparatus, computer storage medium, and electronic device
CN113592515A (zh) * 2021-08-03 2021-11-02 北京沃东天骏信息技术有限公司 Method, system and apparatus for identifying the authenticity of an article

Also Published As

Publication number Publication date
CN108520196A (zh) 2018-09-11
CN108520196B (zh) 2021-08-31

Similar Documents

Publication Publication Date Title
WO2019148729A1 (zh) Luxury goods identification method, electronic device and storage medium
CN108256568B (zh) Plant species identification method and apparatus
CN106776619B (zh) Method and apparatus for determining attribute information of a target object
US9336459B2 (en) Interactive content generation
US10198635B2 (en) Systems and methods for associating an image with a business venue by using visually-relevant and business-aware semantics
US20160314512A1 (en) Visual search in a controlled shopping environment
CN112348117B (zh) Scene recognition method and apparatus, computer device and storage medium
US20140254942A1 (en) Systems and methods for obtaining information based on an image
JP2010518507A (ja) Feature matching method
CN111209827B (zh) Method and system for identifying bill problems in OCR recognition based on feature detection
JP2013109773A (ja) Feature matching method and product recognition system
CN111582932A (zh) Method and apparatus for pushing information between scenes, computer device and storage medium
CN110363206B (zh) Data object clustering, data processing and data recognition method
CN111046889A (zh) Method and apparatus for processing compressed tea information, and electronic device
CN107203638B (zh) Surveillance video processing method, apparatus and system
US9875386B2 (en) System and method for randomized point set geometry verification for image identification
CN115690672A (zh) Abnormal image recognition method and apparatus, computer device and storage medium
US20210117987A1 (en) Fraud estimation system, fraud estimation method and program
CN111126457A (zh) Information acquisition method and apparatus, storage medium and electronic apparatus
CN110674388A (zh) Method and apparatus for matching pictures to pushed items, storage medium and terminal device
JP2023130409A (ja) Information processing apparatus, information processing method and program
Golubev et al. Validation of Real Estate Ads based on the Identification of Identical Images
CN113297411B (zh) Method, apparatus, device and storage medium for measuring the similarity of wheel-shaped graphs
JP2024050174A (ja) Information processing apparatus, information processing method, and program
US12002252B2 (en) Image matching system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18903725

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/11/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18903725

Country of ref document: EP

Kind code of ref document: A1