WO2019201187A1 - Item identification method, system, device and storage medium - Google Patents

Item identification method, system, device and storage medium

Info

Publication number
WO2019201187A1
Authority
WO
WIPO (PCT)
Prior art keywords
item
images
point
single point
neural network
Prior art date
Application number
PCT/CN2019/082575
Other languages
English (en)
French (fr)
Inventor
唐平中
李元齐
汪勋
戚婧晨
Original Assignee
图灵人工智能研究院(南京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 图灵人工智能研究院(南京)有限公司 filed Critical 图灵人工智能研究院(南京)有限公司
Priority to EP19789249.0A priority Critical patent/EP3627392A4/en
Publication of WO2019201187A1 publication Critical patent/WO2019201187A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/018Certifying business or products
    • G06Q30/0185Product, service or business identity fraud
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements

Definitions

  • the present application relates to the field of machine learning technology, and in particular, to an object identification method, system, device and storage medium.
  • the manual luxury-goods authentication industry carries large profit margins and easily breeds grey-market interest chains, making it difficult to win consumers' trust. Compared with time-consuming and laborious solutions such as authentication at a physical store or sending the item to a platform for appraisal, the present approach is better suited to the habits of today's consumers.
  • Deep learning is an algorithm for learning and characterizing data in machine learning.
  • the motivation derived from artificial neural networks is to establish a neural network that simulates the human brain for analytical learning, and interprets image, sound and text data by mimicking the mechanism of the human brain.
  • Deep learning combines low-level features to form more abstract high-level representation attribute categories or features to discover distributed feature representations of data.
  • Hinton et al. proposed an unsupervised greedy layer-by-layer training algorithm based on the Deep Belief Network (DBN), which opened a new wave of deep learning in academia and industry.
  • deep learning has become a representative machine learning technology, and is widely used in the fields of images, sounds, and texts.
  • the purpose of the present application is to provide an object identification method, system, device and storage medium for solving the problem that the authenticity of an item cannot be quickly identified in the prior art.
  • a first aspect of the present application provides an item identification method, comprising the steps of: acquiring a plurality of images taken of an item to be identified, each of the images including at least one target authentication point; identifying the plurality of images with a plurality of trained convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point; performing weighted summation of the single-point scores of the plurality of target authentication points, using weights obtained by testing on the training set, to obtain a total score; and authenticating the item according to the single-point scores or/and the total score.
  • the method further comprises the step of pre-processing the plurality of images.
  • the step of pre-processing includes one or more of resizing, scaling, noise addition, inversion, rotation, translation, scaling transformation, cropping, contrast transformation, and random channel offset of the images.
  • the method further comprises the step of clustering at least one of the images.
  • the step of performing clustering processing on at least one image comprises using a VGG19 network feature classification model to cluster at least one of the images by feature similarity, so as to determine a training model for the plurality of images.
  • the plurality of convolutional neural network models corresponding to each of the target authentication points are at least two of the VGG19 network model, the RESNET54 network model, and the WRESNET16 network model.
  • the step of identifying the plurality of images with the plurality of trained convolutional neural network models corresponding to the at least one target authentication point, to obtain the single-point score of each target authentication point, further includes: saving the output results and extracted features of the plurality of convolutional neural network models for the at least one target authentication point; and splicing the features respectively extracted by the plurality of convolutional neural network models, then classifying the spliced features with a decision tree algorithm to obtain the single-point score corresponding to each target authentication point.
  • the step of authenticating the item according to the single-point score and the total score includes: presetting a first threshold and a second threshold; when a single-point score is determined to be lower than the first threshold, outputting a result that the item is judged fake; and when each single-point score is higher than the first threshold but the total score is lower than the second threshold, likewise outputting a result that the item is judged fake.
  • the method further includes the step of preventing overfitting.
  • the item is a luxury item.
  • the second aspect of the present application provides an item identification system, comprising: a preprocessing module configured to acquire a plurality of images taken of an item to be identified, each of the images including at least one target authentication point; an identification module configured to identify the plurality of images with a plurality of trained convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point; and an evaluation module configured to perform weighted summation of the single-point scores of the plurality of target authentication points, using weights obtained by testing on the training set, to obtain a total score, and to authenticate the item according to the single-point scores or/and the total score.
  • the pre-processing module is further configured to pre-process the plurality of images.
  • the pre-processing module performs one or more of resizing, scaling, noise addition, inversion, rotation, translation, scaling transformation, cropping, contrast transformation, and random channel offset on the images.
  • the pre-processing module is further configured to perform clustering processing on at least one of the images.
  • the pre-processing module utilizes a VGG19 network feature classification model to perform feature approximation degree clustering on at least one of the images to determine a training model for the plurality of images.
  • the plurality of convolutional neural network models corresponding to each of the target authentication points are at least two of the VGG19 network model, the RESNET54 network model, and the WRESNET16 network model.
  • the identification module is configured to perform the steps of: saving the output results and extracted features of the plurality of convolutional neural network models for the at least one target authentication point; and splicing the features respectively extracted by the plurality of convolutional neural network models, then classifying the spliced features with a decision tree algorithm to obtain the single-point score corresponding to each target authentication point.
  • the evaluation module is configured to perform the steps of: presetting a first threshold and a second threshold; when a single-point score is determined to be lower than the first threshold, outputting a result that the item is judged fake; and when each single-point score is higher than the first threshold but the total score is lower than the second threshold, likewise outputting a result that the item is judged fake.
  • the system further includes a training module for training each convolutional neural network model, wherein the training module performs the step of preventing overfitting.
  • the item is a luxury item.
  • a third aspect of the present application provides an item identification device, including: a memory for storing program code; and one or more processors, wherein the processors are configured to invoke the program code stored in the memory to execute the item identification method according to any one of the first aspect.
  • a fourth aspect of the present application provides a computer readable storage medium storing a computer program for authenticating an article, wherein the computer program is executed to implement the article identification method according to any one of the above aspects.
  • the item identification method, system, device, and storage medium of the present application use convolutional neural networks to identify target authentication points of an item in each image, construct an evaluation mechanism, and determine the authenticity of the item by evaluating the recognition results. This effectively solves the problem that counterfeit goods cannot be identified quickly, and greatly reduces disputes arising from electronic transactions or other remote shopping.
  • FIG. 1 shows a flow chart of an item identification method of the present application in an embodiment.
  • FIGS. 2-5 show a plurality of images, containing target authentication points, of an item to be identified, taking a bag as an example.
  • FIG. 6 is a schematic diagram of an item identification system of the present application in an embodiment.
  • FIG. 7 is a schematic structural diagram of an embodiment of an article identification device of the present application.
  • although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms; the terms are only used to distinguish one element from another. For example, a first preset threshold may be referred to as a second preset threshold and, similarly, a second preset threshold may be referred to as a first preset threshold, without departing from the scope of the various described embodiments. The first preset threshold and the second preset threshold both describe a threshold but, unless the context explicitly indicates otherwise, they are not the same preset threshold. A similar situation also applies to a first volume and a second volume.
  • the present application provides an item identification method.
  • the item identification method is performed by an item identification system.
  • the item identification system is implemented by software and hardware in a computer device.
  • the computer device may be any computing device having mathematical and logical operation and data processing capabilities, including but not limited to: a personal computer device, a single server, a server cluster, a distributed server, a cloud server, and the like.
  • the cloud server includes a public cloud server and a private cloud server, where the public or private cloud server provides services such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
  • the private cloud server is, for example, the Facebook cloud computing service platform, the Amazon cloud computing service platform, the Baidu cloud computing platform, the Tencent cloud computing platform, and the like.
  • the cloud server provides at least one remote image uploading service.
  • the remote picture uploading service includes, but is not limited to, at least one of the following: an item racking service, an item identifying service, and an item complaint service.
  • the item racking service is, for example, a merchant uploading pictures of a product for sale together with a related text description; the item identifying service is, for example, a purchaser uploading pictures of an item to authenticate it; the item complaint service is, for example, a purchaser who cannot reach an agreement with the merchant uploading images of the item so that a third party (such as an electronic trading platform) can intervene and mediate.
  • the computer device includes at least: a memory, one or more processors, an I/O interface, a network interface, an input structure, and the like.
  • the memory is for storing a plurality of images of the item to be identified and at least one program.
  • the memory can include high speed random access memory and can also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices.
  • the memory may also include memory remote from the one or more processors, such as network-attached memory accessed via RF circuitry or external ports and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
  • the memory controller can control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
  • the one or more processors are operatively coupled to a network interface to communicatively couple the computing device to the network.
  • a network interface can connect a computing device to a local area network (such as a LAN), and/or a wide area network (such as a WAN).
  • the processor is also operatively coupled to an I/O port and an input structure that enables the computing device to interact with various other electronic devices that enable the user to interact with the computing device.
  • the input structure can include buttons, keyboards, mice, trackpads, and the like.
  • the electronic display can include a touch component that facilitates user input by detecting the occurrence and/or location of the object touching its screen.
  • the item identification system may also be implemented by an application (APP) loaded on a smart terminal, which acquires multiple images of the item to be authenticated by shooting, uploads them to the cloud server via a wireless network, and uses the cloud to identify the item and feed back the result.
  • the smart terminal is, for example, a portable or wearable electronic device including, but not limited to, a smart phone, a tablet, a smart watch, smart glasses, a personal digital assistant (PDA), etc. It should be understood that the portable electronic device described in the embodiments of the present application is just one application example; the device may have more or fewer components than illustrated, or a different component configuration.
  • the various components depicted may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the smart terminal includes a memory, a memory controller, one or more processors (CPUs), peripheral interfaces, RF circuits, audio circuits, speakers, microphones, input/output (I/O) subsystems, touch screens, other outputs Or control devices, as well as external ports. These components communicate over one or more communication buses or signal lines.
  • the smart terminal supports various applications, such as one or more of the following: a drawing application, a rendering application, a word processing application, a website creation application, a disk editing application, a spreadsheet application, Game apps, phone apps, video conferencing apps, email apps, instant messaging apps, fitness support apps, photo management apps, digital camera apps, digital video camera apps, web browsing apps, digital Music player app and/or digital video player app.
  • FIG. 1 shows a flow chart of an item identification method of the present application in an embodiment.
  • the one or more processors retrieve at least one program and image in the memory to perform the item identification method.
  • step S110 a plurality of images taken of the item to be identified are acquired, and each of the images includes at least one target authentication point.
  • the image is a photo or a picture
  • the format of the photo or picture is a format for storing a picture in a computer, for example, bmp, jpg, png, tiff, gif, pcx, tga, exif, fpx, svg , psd, cdr, pcd, dxf, ufo, eps, ai, raw, WMF, etc.
  • the item identification system provides an image acquisition interface, and the user provides a plurality of images of the item to be identified according to the prompt in the interface.
  • each image includes at least one target authentication point.
  • the item identification system can identify a plurality of items belonging to the same item category, for example, identifying a brand's shoulder bags, travel bags, messenger bags, etc.; and/or identifying different types of items, where the identified items include backpacks, travel bags, satchels, ladies' watches, men's watches, pocket watches, and various cosmetics.
  • the target authentication points differ from item to item. For a bag, the authentication points generally include a leather tag, a zipper, and a label bearing the article's trademark; for a teapot, the authentication point usually lies in the joints between the body of the teapot and the handle or the spout, and in the top cover.
  • in the following, a lady's bag is taken as an example for description.
  • the target authentication point includes, but is not limited to, at least one of the following: an item identification part, an item accessory, an overall shape of the item, and the like.
  • the item identification parts include, but are not limited to, printed marks on the item (such as the item's trademark), the zipper and its pull, the interface between the bag body and the strap, a watch band, a dial, and the like; the item accessories are exemplified by the leather tag, a bracelet sample, etc.; the overall shape of the item is exemplified by the whole body of the item displayed from at least one viewing angle.
  • the item identification system may prompt the user to provide overall and partial images of the item from as many viewing angles as possible, and then performs step S120.
  • the item identification system may filter the acquired plurality of images to obtain images that meet at least one requirement on image size, sharpness, and inclusion of target authentication points. For example, the item identification system rejects images whose size is less than a preset size threshold. As another example, the item identification system performs a sharpness analysis on the acquired images and selects images whose sharpness meets a preset sharpness condition. The item identification system performs step S120 after screening.
  • the article identification method further includes the step of pre-processing the plurality of images.
  • the pre-processing step is intended to adapt the acquired images to the input requirements of the neural network models to be invoked.
  • the step of pre-processing includes one or more of size modification, scaling, noise addition, inversion, rotation, translation, scaling transformation, cropping, contrast transformation, random channel offset of the image. .
  • for example, the item identification system rotates a received image of 550 × 300 pixels to 300 × 550. As another example, the item identification system reduces or crops a received image of 1000 × 800 pixels and then rotates it to 300 × 550. As yet another example, the item identification system performs processing such as sharpening and noise addition on an acquired beautified (filtered) image in order to restore a realistic image.
  • the item identification system then performs step S120 to send the pre-processed images to a neural network model for identification.
  • the item identification system identifies the outline of the item in the image and rotates, translates, and crops the image according to the item's position in the whole image, so as to minimize the background in the item image and obtain an image conforming to the input of the convolutional neural network.
  • the item identification system performs grayscale inversion on each of the acquired images to facilitate subsequent suppression of the background or highlighting of the item features.
  • the item identification system assigns the images to the different input channels of the convolutional neural network according to image color, gray scale, etc. before the images are fed into the convolutional neural network.
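The pre-processing operations described above (rotation to a target orientation, noise addition, contrast transformation, random channel offset) can be sketched as follows. This is a minimal NumPy illustration assuming 8-bit RGB images; the specific noise, contrast, and offset parameters are illustrative and not taken from the application.

```python
import numpy as np

def preprocess(img: np.ndarray, target_hw=(550, 300), rng=None) -> np.ndarray:
    """Apply a few of the described pre-processing steps to one H x W x 3 image:
    rotation to the target orientation, additive noise, a contrast transform,
    and a random per-channel offset; values stay within [0, 255]."""
    rng = rng or np.random.default_rng(0)
    # Rotate a landscape image (e.g. 300 x 550) to the portrait target (550 x 300).
    if img.shape[0] < img.shape[1] and target_hw[0] > target_hw[1]:
        img = np.rot90(img)
    img = img.astype(np.float64)
    img += rng.normal(0.0, 2.0, img.shape)        # noise addition
    img = (img - 128.0) * 1.1 + 128.0             # contrast transformation
    img += rng.uniform(-5, 5, size=(1, 1, 3))     # random channel offset
    return np.clip(img, 0, 255).astype(np.uint8)

sample = np.full((300, 550, 3), 128, dtype=np.uint8)
out = preprocess(sample)
```

A real pipeline would add the remaining operations (translation, scaling transformation, cropping, inversion) in the same style before feeding the result to the network.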
  • the item identification method further includes the step of performing clustering processing on the plurality of images. Before performing step S120, the item identification system first performs clustering processing on the acquired image by using features such as the shape of the item captured in the image to attribute the item to be identified to one of the preset categories.
  • a classification prompt may be provided according to the target authentication point, so that the image uploaded by the user belongs to the corresponding category.
  • the item identification system may select at least one image from the acquired plurality of images for clustering processing to determine a product classification to which the item to be identified belongs.
  • the item identification system selects an image containing the whole body of the item to be identified from a plurality of images, and then performs clustering processing on the selected image to determine the item classification to be identified.
  • the item identification system simultaneously performs clustering processing on the acquired plurality of images, and evaluates each product classification obtained by clustering each image to determine the category of the item to be identified.
  • the clustering process may adopt a VGG19 network feature classification model, that is, using the VGG19 network feature classification model to perform feature approximation degree clustering on at least one of the images to determine a training model for the plurality of images.
  • the item identification system uses the VGG19 network feature classification model to cluster at least one acquired image by feature similarity to determine the category to which the item to be identified belongs, and determines the plurality of convolutional neural network models used in step S120 based on the determined category.
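The feature-similarity clustering step can be sketched as follows. As an assumption for illustration, a trivial per-channel histogram stands in for the VGG19 penultimate-layer features the text actually uses, and categories are assigned to the nearest centroid by cosine similarity; the category names are hypothetical.

```python
import numpy as np

def extract_features(img: np.ndarray) -> np.ndarray:
    """Stand-in for VGG19 deep features: a coarse 8-bin intensity histogram
    per channel. The real system would use the pretrained network instead."""
    return np.concatenate(
        [np.histogram(img[..., c], bins=8, range=(0, 256))[0] for c in range(3)]
    ).astype(np.float64)

def assign_category(img: np.ndarray, centroids: dict) -> str:
    """Feature-similarity clustering: pick the category whose centroid is
    closest (highest cosine similarity) to the image's feature vector."""
    f = extract_features(img)
    f /= np.linalg.norm(f) + 1e-12
    best, best_sim = None, -1.0
    for name, c in centroids.items():
        c = c / (np.linalg.norm(c) + 1e-12)
        sim = float(f @ c)
        if sim > best_sim:
            best, best_sim = name, sim
    return best

dark = np.zeros((8, 8, 3), dtype=np.uint8)
bright = np.full((8, 8, 3), 255, dtype=np.uint8)
centroids = {"bags": extract_features(dark), "watches": extract_features(bright)}
category = assign_category(dark, centroids)
```

The determined category then selects which set of trained network models step S120 invokes.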
  • in step S120, the plurality of trained convolutional neural network models (referred to as network models) corresponding to the at least one target authentication point respectively identify the plurality of images to obtain a single-point score corresponding to each target authentication point.
  • the item identification system sends the acquired images to different convolutional neural network models, uses each convolutional neural network model to identify at least one target authentication point, and obtains the single-point score of each target authentication point that can be identified; a plurality of convolutional neural network models correspond to, and identify, the same target authentication point.
  • in one embodiment, each target authentication point corresponds to at least two preset convolutional neural network models, and each preset convolutional neural network model is only used to identify that target authentication point; for example, convolutional neural network models A1 and A2 are used to identify target authentication point B1, and convolutional neural network models A3 and A4 are used to identify target authentication point B2.
  • in another embodiment, each target authentication point corresponds to at least two convolutional neural network models, and a preset convolutional neural network model is used to identify at least one target authentication point; for example, a convolutional neural network model A5 is used, the convolutional neural network model A6 is used to identify the target authentication points B1 and B2, and the convolutional neural network model A7 is used to identify the target authentication point B2.
  • the plurality of convolutional neural network models corresponding to each of the target authentication points include, but are not limited to, at least two of the VGG19 network model, the RESNET54 network model, and the WRESNET16 network model.
  • each convolutional neural network model is trained through the sample image.
  • sample images can be obtained by photographing genuine, defective, and counterfeit products of a preset brand, and performing image processing on the photographs to enrich the number of sample images; for example, the items are photographed from multiple viewing angles, and the resulting photos are subjected to image enhancement processing such as image_aug to increase the number of sample images.
  • the sample image is used to train each convolutional neural network model, and the structure in the network model is adjusted according to the training result.
  • in the RESNET54 network model, the convolution kernel size in each module is set to 3 × 3, the stride is set to 1 × 1, and the number of convolution kernels is set to 16 for the first 18 residual modules, 32 for the middle 18, and 64 for the last 18. A convolution layer is included in the RESNET residual module, with a 3 × 3 kernel and a stride of 2 × 2, and each residual module combines three parts as its output recognition result. With the above parameter settings, the RESNET54 network model is trained, and the back-propagation algorithm is used to obtain the parameters of the RESNET54 network model.
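The module layout just described can be written down as a configuration sketch. The text does not fully specify where the stride-2 × 2 convolution sits, so as an assumption this sketch places it at the two stage boundaries where the kernel count changes; treat this as an illustration, not the application's exact architecture.

```python
def resnet54_config():
    """Per-module settings described in the text: 3x3 kernels with 1x1 stride,
    and 16/32/64 convolution kernels for the first/middle/last 18 residual
    modules.  The stride-2x2 convolution layer placement is an assumption."""
    modules = []
    for i in range(54):
        filters = 16 if i < 18 else (32 if i < 36 else 64)
        module = {"kernel": (3, 3), "stride": (1, 1), "filters": filters}
        if i in (18, 36):  # assumed downsampling position at stage boundaries
            module["extra_conv"] = {"kernel": (3, 3), "stride": (2, 2)}
        modules.append(module)
    return modules

cfg = resnet54_config()
```

Such a configuration would then be consumed by whatever framework builds and trains the residual network via back-propagation.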
  • steps to prevent overfitting are also included.
  • for example, a discard layer (dropout layer) is set in each network model, weight_decay is set, and early stopping is used to prevent overfitting. Dropout ignores certain nodes with a certain probability during training, so that the final network model can be seen as an ensemble of many networks; weight decay is used to adjust the impact of model complexity on the loss function; and the early-stopping means ends the training of the network ahead of time when the training error has not decreased after a number of epochs.
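The early-stopping means can be sketched as a small helper: stop when the error has not decreased for a number of consecutive epochs. The patience value is illustrative, since the application does not fix one.

```python
def early_stopping(val_errors, patience=3):
    """End training early when the error has not decreased for `patience`
    consecutive epochs, as in the early-stopping means described above.

    Returns (epochs_trained, best_error)."""
    best, best_epoch = float("inf"), -1
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            return epoch + 1, best  # stopped early at this epoch
    return len(val_errors), best

epochs, best = early_stopping([1.0, 0.8, 0.9, 0.95, 0.99, 0.7])
```

In the run above the error last improves at epoch 2, so training stops after epoch 5 and the late improvement at epoch 6 is never reached; dropout and weight decay would be configured inside the model itself.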
  • at least one network model includes a normalization layer to enhance the model capacity. For example, for the identified item category, the network layers in the network models that identify the target authentication points of the corresponding category are batch-normalized.
  • the results of the target authentication points identified using the respective convolutional neural network models can be used to describe the likelihood that a single target authentication point is included in the input image; this likelihood is described according to a preset rating scale.
  • FIGS. 2-5 show multiple images, containing target authentication points, of the item to be identified; the acquired target images include: the overall appearance, the zipper and its pull, the trademark, and the leather tag.
  • Each target discriminating point is separately identified by three convolutional neural network models. Therefore, each target discriminating point obtains three single point scores output by three convolutional neural network models.
  • Step S140 may be performed directly after the single-point scores are obtained, or steps S130 and S140 may both be performed as shown in FIG. 1.
  • In some examples, the output of each network model does not directly represent the single-point score of the corresponding target authentication point; the item identification system makes a further decision based on the respective recognition results of the same target authentication point to obtain the single-point score of the corresponding target authentication point. This includes the steps of: saving the output results of the multiple convolutional neural network models of the at least one target authentication point together with the extracted features; and concatenating the features extracted respectively by the multiple convolutional neural network models, then further classifying them with a decision tree algorithm to obtain a single-point score corresponding to each target authentication point.
  • The item identification system intercepts the information output by a hidden layer of each network model and saves it as a feature. For example, the fully connected layer in the hidden layers of each network model outputs its resulting data as features to the item identification system.
  • The item identification system concatenates the features provided by the network models that identify the same target authentication point. For example, the item identification system concatenates the three features, i.e., three matrices, corresponding to the same target authentication point.
  • The concatenated features are then input into the decision tree algorithm to obtain a single-point score corresponding to the target authentication point.
  • the decision tree algorithm is exemplified by an XGBoost algorithm and the like.
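The concatenation-then-classify step above can be sketched as follows. This is a toy illustration: the function names are our own, and a single hand-written decision stump stands in for the actual decision tree algorithm (the application names XGBoost as one option), since the real classifier would be trained on labeled feature data:

```python
def concatenate_features(feature_vectors):
    """Concatenate the hidden-layer feature vectors extracted by each network model
    for the same target authentication point into one combined input vector."""
    combined = []
    for vec in feature_vectors:
        combined.extend(vec)
    return combined


def stump_score(features, index=0, threshold=0.5):
    """Toy stand-in for the decision-tree stage: a single decision stump mapping
    the combined features to a single-point score of 1.0 or 0.0."""
    return 1.0 if features[index] > threshold else 0.0
```

For instance, three two-dimensional feature vectors from three models concatenate into one six-dimensional vector, which is then scored by the (here trivial) tree.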
  • In step S130, the single-point scores of the multiple target authentication points are weighted and summed, according to the weights obtained by testing on the training set, to obtain a total score.
  • A weight is set for each network model according to the accuracy with which that network model identifies its corresponding target authentication point.
  • The item identification system performs a weighted summation of the single-point scores of the multiple target authentication points according to the respective weights to obtain a total score.
  • For example, the two network models A1 and A2 for target authentication point B1 of the item to be authenticated have weights w1 and w2 and output single-point scores P1 and P2, respectively; the two network models A3 and A4 for target authentication point B2 have weights w3 and w4 and output single-point scores P3 and P4, respectively. The total score obtained by the item identification system is (w1×P1+w2×P2+w3×P3+w4×P4).
  • In other examples, the single-point scores of each target authentication point obtained by the decision tree algorithm may be weighted and summed with equal weights; for example, the single-point scores of all target authentication points are directly summed to obtain the total score.
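The weighted summation in the example above (w1×P1+w2×P2+w3×P3+w4×P4) is a one-line computation; a minimal sketch with a hypothetical function name:

```python
def total_score(weights, single_point_scores):
    """Weighted summation of single-point scores, as in
    w1*P1 + w2*P2 + w3*P3 + w4*P4 from the example above."""
    if len(weights) != len(single_point_scores):
        raise ValueError("one weight per single-point score is required")
    return sum(w * p for w, p in zip(weights, single_point_scores))
```

Passing equal weights reproduces the direct-sum variant described in the preceding paragraph.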
  • In step S140, the authenticity of the item is determined according to the single-point scores and/or the total score.
  • The item identification system may set multiple authentication stages according to the single-point scores and the total score, and traverse the stages in a preset order to obtain the authenticity result from the outcome of one of the stages.
  • For example, the multiple authentication stages include checking the single-point scores one by one and then checking the total score, in the preset order of all single-point scores of all target authentication points first and the total score last. The item identification system presets a threshold for each authentication stage; following the above stage order, each time a stage yields a true result the process proceeds to the next stage, until traversing all stages finally yields a true result; if any stage yields a false result, the item is determined to be false.
  • For example, the item identification system presets a first threshold and a second threshold; when a single-point score is determined to be lower than the first threshold, a result determining the item to be false is output; and when the single-point scores are determined to be higher than the first threshold but the total score is determined to be lower than the second threshold, a result determining the item to be false is output.
  • For example, the item to be authenticated includes two target authentication points, each corresponding to three single-point scores; the item identification system checks the six single-point scores one by one. When any single-point score is lower than the first threshold, it outputs a result determining the item to be false; otherwise, if the item is judged true based on the single-point scores, it determines whether the total score is lower than the second threshold: if so, it outputs a result determining the item to be false; otherwise, it outputs a result determining the item to be true.
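The two-stage check just described (single-point scores against the first threshold, then the total score against the second) can be sketched as a short function; the name and signature are illustrative only:

```python
def authenticate(single_point_scores, total, first_threshold, second_threshold):
    """Two-stage authenticity check: any single-point score below the first
    threshold, or a total score below the second threshold, marks the item false."""
    for score in single_point_scores:
        if score < first_threshold:
            return False  # a single authentication point already fails
    return total >= second_threshold  # all single points passed; decide on the total
```

A genuine item must therefore clear every single-point threshold and the total-score threshold.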
  • For example, each of the thresholds takes a value in the interval [0, 1].
  • In still other examples, the item to be authenticated may have only one target authentication point.
  • In that case, the corresponding single-point score is the total score, and step S130 may be simplified to comparing the single-point score with a threshold; this should be regarded as a specific example based on the technical idea of the present application.
  • The above manner of determining the authenticity of the item according to the single-point scores and/or the total score is only an example; any discrimination strategy that performs authenticity determination using at least one score obtained in the manner described in the present application should be regarded as a specific example based on the technical idea of the present application.
  • In summary, the method for authenticating an item using images uses convolutional neural networks to identify the target authentication points of the item in each image, constructs an evaluation mechanism, and determines the authenticity of the item by evaluating the recognition results. It solves the problem that counterfeit goods cannot be quickly identified, and greatly reduces disputes arising in electronic transactions or other remote shopping.
  • FIG. 6 shows a block diagram of an item identification system of the present application in an embodiment.
  • the item identification system 1 includes a preprocessing module 11, an identification module 12, and an evaluation module 13.
  • the pre-processing module 11 is configured to acquire a plurality of images taken on an item to be identified, and each of the images includes at least one target authentication point.
  • the image is a photo or a picture
  • the format of the photo or picture is a format for storing a picture in a computer, for example, bmp, jpg, png, tiff, gif, pcx, tga, exif, fpx, svg , psd, cdr, pcd, dxf, ufo, eps, ai, raw, WMF, etc.
  • the pre-processing module 11 provides an image acquisition interface, and the user provides multiple images of the item to be identified according to the prompt in the interface.
  • Each of the plurality of images acquired by the pre-processing module 11 includes at least one target authentication point.
  • The item identification system can identify multiple items belonging to the same item category, for example, a brand's shoulder bags, travel bags, messenger bags, etc., and/or identify different types of items; for example, the identified items include backpacks, travel bags, satchels, ladies' watches, men's watches, pocket watches, and various cosmetics.
  • For different items, the target authentication points will differ.
  • For example, for a bag, the authentication points generally include the leather tag, the zipper, and the label bearing the article's trademark.
  • For a teapot, the authentication points usually lie in the joints between the body of the teapot and the handle or spout, and in the lid.
  • Here, a lady's bag is taken as an example of the article for description.
  • The target authentication points include, but are not limited to, at least one of the following: an item identification part, an item accessory, the overall shape of the item, and the like.
  • The item identification parts include, but are not limited to, printed marks on the item (such as the item trademark), the zipper and zipper pull, the joint between the bag body and the strap, the watch band, the dial, and the like; the item accessories are exemplified by leather swatches, chain samples, and the like; the overall shape of the item is exemplified by the whole body of the item shown from at least one viewing angle.
  • The pre-processing module 11 may prompt the user to provide overall and partial images of the article from as many viewing angles as possible, for the identification module 12 to recognize.
  • The pre-processing module 11 may filter the acquired images to obtain images that satisfy at least one of the conditions on image size, sharpness, and inclusion of a target authentication point. For example, the pre-processing module 11 will cull images whose size is smaller than a preset size threshold. As another example, the pre-processing module 11 performs a sharpness analysis on the acquired images and selects images whose sharpness meets a preset sharpness condition. The pre-processing module 11 performs step S120 after screening.
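The size-and-sharpness screening just described can be sketched as below. The application does not specify a sharpness measure, so this sketch uses a crude proxy of our own choosing (the mean squared difference between horizontally adjacent pixels) on images represented as 2-D lists of grayscale values:

```python
def screen_images(images, min_width, min_height, min_sharpness):
    """Keep only images that meet the size thresholds and a crude sharpness
    measure (mean squared difference between horizontally adjacent pixels)."""
    kept = []
    for img in images:  # img: 2-D list of grayscale values (rows of pixels)
        height, width = len(img), len(img[0])
        if width < min_width or height < min_height:
            continue  # cull undersized images
        diffs = [
            (row[x + 1] - row[x]) ** 2
            for row in img
            for x in range(width - 1)
        ]
        sharpness = sum(diffs) / len(diffs)
        if sharpness >= min_sharpness:
            kept.append(img)
    return kept
```

A high-contrast image passes, a near-uniform (blurry) one is dropped, and an undersized one is culled before the sharpness check.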
  • the article identification method further includes the step of pre-processing the plurality of images.
  • The pre-processing step is intended to adapt the acquired images to the input image requirements of the neural network models to be used.
  • the step of pre-processing includes one or more of size modification, scaling, noise addition, inversion, rotation, translation, scaling transformation, cropping, contrast transformation, random channel offset of the image.
  • the pre-processing module 11 rotates the received image of 550 x 300 pixels to 300 x 550.
  • the pre-processing module 11 reduces and rotates the received image of 1000 x 800 pixels to 300 x 550.
  • For another example, the pre-processing module 11 performs sharpening, noise addition, and similar processing on an acquired beautified (filtered) image to restore the real image.
  • the pre-processing module 11 then performs step S120 to send the pre-processed images to the neural network model for identification.
  • For example, the pre-processing module 11 identifies the outline of the item in the image and rotates, translates, and crops the image according to the position of the item within the whole image, so as to minimize the background around the item and obtain an image of the size accepted by the convolutional neural network. As another example, the pre-processing module 11 performs grayscale inversion on each acquired image to facilitate subsequently suppressing the background or highlighting the features of the item.
  • Before the images are input into the convolutional neural network, the pre-processing module 11 feeds the images into different channels of the convolutional neural network according to image color, grayscale, or the like.
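Among the transforms above, the rotation example (a 550×300-pixel image rotated to 300×550) amounts to a 90° rotation; a minimal sketch on an image represented as a 2-D list of rows, with a function name of our own:

```python
def rotate_90(img):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise, so that an
    image of H rows x W columns becomes W rows x H columns."""
    return [list(row) for row in zip(*img[::-1])]
```

A 2×3 image becomes 3×2, just as 550×300 becomes 300×550 in the module's example (real pipelines would perform the same operation with an image library).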
  • the pre-processing module 11 can identify a plurality of items belonging to the same item category, for example, identifying a brand of shoulder bags, travel bags, messenger bags, etc.; and/or identifying different types of items.
  • the types of items to be identified include backpacks, travel bags, satchels, ladies' watches, men's watches, pocket watches, various cosmetics, and the like.
  • the item identification method further includes the step of performing clustering processing on the plurality of images.
  • Before step S120 is performed, the pre-processing module 11 first performs clustering on the acquired images using features such as the shape of the item captured in the images, so as to attribute the item to be authenticated to one of the preset categories.
  • A classification prompt may be provided according to the target authentication points in the image acquisition interface, so that the images uploaded by the user belong to the corresponding category.
  • the pre-processing module 11 may select at least one image from the acquired multiple images for clustering processing to determine a product classification to which the item to be identified belongs. For example, the pre-processing module 11 selects an image containing the whole body of the item to be identified from the plurality of images, and then performs clustering processing on the selected image to determine the item classification to be identified. For another example, the pre-processing module 11 simultaneously performs clustering processing on the acquired multiple images, and evaluates each product classification obtained by clustering each image to determine the category of the item to be identified.
  • The clustering process may adopt a VGG19 network feature classification model; that is, the VGG19 network feature classification model is used to cluster at least one of the images by feature similarity, so as to determine the training model for the multiple images.
  • The pre-processing module 11 uses the VGG19 network feature classification model to cluster at least one acquired image by feature similarity to determine the category to which the item to be authenticated belongs, and based on the determined category determines the network models to be used by the identification module 12.
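The feature-similarity assignment above can be sketched as a nearest-centroid lookup. The feature vectors here stand in for VGG19 features, and the function names and cosine-similarity choice are our own illustration, not the application's specified method:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def assign_category(feature, category_centroids):
    """Assign an image's feature vector to the preset category whose centroid
    feature is most similar, as a stand-in for feature-similarity clustering."""
    return max(
        category_centroids,
        key=lambda name: cosine_similarity(feature, category_centroids[name]),
    )
```

Given per-category centroid features, an image whose feature vector lies closest to the "bag" centroid is routed to the bag-specific network models.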
  • The identification module 12 is configured to identify the multiple images respectively according to the trained multiple convolutional neural network models (referred to as network models) corresponding to the at least one target authentication point, and obtain a single-point score corresponding to each target authentication point.
  • The identification module 12 sends the acquired images to different convolutional neural network models, so as to identify at least one target authentication point with each convolutional neural network model and obtain a single-point score for each target authentication point that can be identified. Here, multiple convolutional neural network models correspond to identifying the same target authentication point.
  • In some examples, each target authentication point corresponds to at least two preset convolutional neural network models, and each preset convolutional neural network model is used only to identify that target authentication point; for example, the convolutional neural network models A1 and A2 are used to identify the target authentication point B1, and the convolutional neural network models A3 and A4 are used to identify the target authentication point B2.
  • In other examples, each target authentication point corresponds to at least two convolutional neural network models, and a preset convolutional neural network model may be used to identify at least one target authentication point; for example, the convolutional neural network model A5 is used to identify the target authentication point B1, the convolutional neural network model A6 is used to identify the target authentication points B1 and B2, and the convolutional neural network model A7 is used to identify the target authentication point B2.
  • The multiple convolutional neural network models corresponding to each target authentication point include, but are not limited to, at least two of the VGG19 network model, the RESNET54 network model, and the WRESNET16 network model.
  • each convolutional neural network model is trained through the sample image.
  • The sample images can be obtained by photographing genuine, defective, and counterfeit products of the preset brand, and performing image processing on the photographs to enrich the number of sample images. For example, items are shot from multiple viewing angles, and the obtained photos are subjected to image enhancement processing such as image_aug to increase the number of sample images.
  • the sample image is used to train each convolutional neural network model, and the structure in the network model is adjusted according to the training result.
  • For example, the convolution kernel size in each residual module is set to 3×3, the stride is set to 1×1, and the number of convolution kernels is set to 16 for the first 18 residual modules, 32 for the middle 18, and 64 for the last 18.
  • In addition, a convolution layer with a 3×3 convolution kernel and a 2×2 stride is included in the RESNET residual module.
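The layer plan above (54 residual modules with 3×3 kernels, stride 1×1, and kernel counts of 16/32/64 by thirds) can be enumerated directly; this sketch only tabulates the configuration described in the text, with a hypothetical function name, and does not build the actual network:

```python
def resnet54_plan():
    """Enumerate the residual-module settings described above: 54 modules with
    3x3 kernels and stride 1x1, using 16 kernels for the first 18 modules,
    32 for the middle 18, and 64 for the last 18."""
    plan = []
    for i in range(54):
        kernels = 16 if i < 18 else 32 if i < 36 else 64
        plan.append({"kernel": (3, 3), "stride": (1, 1), "kernels": kernels})
    return plan
```

Such a table could then be handed to a deep learning framework's model builder to instantiate each residual module.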
  • The outputs of the three groups of residual modules are combined to produce the recognition result. With the above parameter settings, the RESNET54 network model is trained, and the back-propagation algorithm is used to learn the parameters of the RESNET54 network model.
  • Steps to prevent overfitting are also included.
  • For example, a dropout layer is set in each network model, a weight_decay coefficient is configured, and early stopping is used to prevent overfitting.
  • Dropout ignores certain nodes with a given probability during training, so that the final network model can be seen as an ensemble of many sub-networks; in addition, weight decay is used to adjust the impact of model complexity on the loss function.
  • Early stopping ends the training of the network ahead of time when the training error has not decreased for a number of epochs.
  • At least one network model includes a normalization layer to enhance model capacity.
  • For example, for the identified item category, the network layers in the network model used to identify the target authentication points of the corresponding category are batch-normalized.
  • The results of the target authentication points identified by the respective convolutional neural network models can be used to describe the likelihood that a single target authentication point is contained in the input image.
  • For example, the likelihood that the corresponding target authentication point is contained in the image is described according to a preset rating scale.
  • FIG. 2-5 show multiple images of the item to be authenticated that contain target authentication points; the acquired target images include: the overall appearance, the zipper and zipper pull, the trademark, and the buckle.
  • Each target authentication point is identified separately by three convolutional neural network models; therefore, each target authentication point obtains three single-point scores output by the three convolutional neural network models.
  • In some examples, the output of each network model does not directly represent the single-point score of the corresponding target authentication point; the identification module 12 makes a further decision based on the respective recognition results of the same target authentication point to obtain the single-point score of the corresponding target authentication point. The identification module 12 performs the steps of: saving the output results of the multiple convolutional neural network models of the at least one target authentication point together with the extracted features; and concatenating the features extracted respectively by the multiple convolutional neural network models, then further classifying them with a decision tree algorithm to obtain a single-point score corresponding to each target authentication point.
  • The identification module 12 intercepts the information output by a hidden layer of each network model and saves it as a feature.
  • the fully connected layer in the hidden layer of each network model outputs the obtained data as a feature to the identification module 12.
  • The identification module 12 concatenates the features provided by the network models that identify the same target authentication point. For example, the identification module 12 concatenates the three features, i.e., three matrices, corresponding to the same target authentication point.
  • The concatenated features are then input into the decision tree algorithm to obtain a single-point score corresponding to the target authentication point.
  • the decision tree algorithm is exemplified by an XGBoost algorithm and the like.
  • The evaluation module 13 is configured to perform a weighted summation of the single-point scores of the multiple target authentication points, according to the weights obtained by testing on the training set, to obtain a total score.
  • A weight is set for each network model according to the accuracy with which that network model identifies its corresponding target authentication point.
  • The item identification system 1 performs a weighted summation of the single-point scores of the multiple target authentication points according to the respective weights to obtain a total score.
  • For example, the two network models A1 and A2 for target authentication point B1 of the item to be authenticated have weights w1 and w2 and output single-point scores P1 and P2, respectively; the two network models A3 and A4 for target authentication point B2 have weights w3 and w4 and output single-point scores P3 and P4, respectively. The total score obtained by the evaluation module 13 is (w1×P1+w2×P2+w3×P3+w4×P4).
  • In other examples, the single-point scores of each target authentication point obtained by the decision tree algorithm may be weighted and summed with equal weights; for example, the single-point scores of all target authentication points are directly summed to obtain the total score.
  • The evaluation module 13 is further configured to determine the authenticity of the item according to the single-point scores and/or the total score.
  • The evaluation module 13 may set multiple authentication stages according to the single-point scores and the total score, and traverse the stages in a preset order to obtain the authenticity result from the outcome of one of the stages.
  • For example, the multiple authentication stages include checking the single-point scores one by one and then checking the total score, in the preset order of all single-point scores of all target authentication points first and the total score last. The evaluation module 13 presets a threshold for each authentication stage; following the above stage order, each time a stage yields a true result the process proceeds to the next stage, until traversing all stages finally yields a true result; if any stage yields a false result, the item is determined to be false.
  • For example, the evaluation module 13 presets a first threshold and a second threshold; when a single-point score is determined to be lower than the first threshold, a result determining the item to be false is output; and when the single-point scores are determined to be higher than the first threshold but the total score is determined to be lower than the second threshold, a result determining the item to be false is output.
  • For example, the item to be authenticated includes two target authentication points, each corresponding to three single-point scores; the evaluation module 13 checks the six single-point scores one by one. When any single-point score is lower than the first threshold, it outputs a result determining the item to be false; otherwise, if the item is judged true based on the single-point scores, it determines whether the total score is lower than the second threshold: if so, it outputs a result determining the item to be false; otherwise, it outputs a result determining the item to be true.
  • For example, each of the thresholds takes a value in the interval [0, 1].
  • In still other examples, the item to be authenticated may have only one target authentication point.
  • In that case, the corresponding single-point score is the total score, and the evaluation module 13 can be simplified to comparing the single-point score with a threshold; this should be regarded as a specific example based on the technical idea of the present application.
  • The above manner of determining the authenticity of the item according to the single-point scores and/or the total score is only an example; any discrimination strategy that performs authenticity determination using at least one score obtained in the manner described in the present application should be regarded as a specific example based on the technical idea of the present application.
  • FIG. 7 is a schematic structural diagram of Embodiment 1 of the article identification device of the present application.
  • The computer device 2 provided in this embodiment mainly includes a memory 21, one or more processors 22, and one or more programs stored in the memory 21, wherein the memory 21 stores execution instructions; when the computer device 2 is in operation, the processor 22 communicates with the memory 21.
  • The one or more programs are stored in the memory and configured to be executed by the one or more processors; the one or more processors execute the execution instructions such that the electronic device performs the above-described item identification method. That is, the processor 22 executes the execution instructions such that the computer device 2 performs the method shown in FIG. 1, whereby the authenticity of the item can be determined by means of image recognition.
  • It should be understood that the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • The device embodiments described above are merely illustrative. The division of units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • The present application further provides a computer-readable and writable storage medium storing a computer program for authenticating an item, wherein when the computer program for authenticating the item is executed by a processor, the steps of the item identification method shown in FIG. 1 are implemented.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • The technical solution of the present application, in essence or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the computer readable and writable storage medium may include a read-only memory (ROM), a random access memory (RAM), an EEPROM, a CD-ROM or the like.
  • any connection is properly termed a computer-readable medium.
  • Disks and discs, as used in this application, include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
  • In summary, the present invention provides a method for authenticating an item using images: a convolutional neural network identifies the target authentication points of the item in each image, an evaluation mechanism is constructed, and the authenticity of the item is determined by evaluating the recognition results. This effectively solves the problem that counterfeit goods cannot be quickly identified, and greatly reduces disputes arising in electronic transactions or other remote shopping.

Abstract

The present application provides an item authentication method, system, device, and storage medium. The method includes the following steps: acquiring multiple images taken of an item to be authenticated, each of the images containing at least one target authentication point; identifying the multiple images respectively according to trained multiple convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score corresponding to each target authentication point; performing a weighted summation of the single-point scores of the multiple target authentication points according to weights obtained by testing on the training set, to obtain a total score; and determining the authenticity of the item according to the single-point scores and/or the total score. The present application uses convolutional neural networks to identify the target authentication points describing the item in each image, constructs an evaluation mechanism, and determines the authenticity of the item by evaluating the recognition results, effectively solving the problem that counterfeit goods cannot be quickly identified.

Description

Item Authentication Method, System, Device, and Storage Medium
Technical Field
The present application relates to the technical field of machine learning, and in particular, to an item authentication method, system, device, and storage medium.
Background Art
Today, residents' purchasing power has risen and purchases of luxury goods have increased. However, cases of genuine and counterfeit luxury goods being sold together emerge in an endless stream, and luxury-goods authentication has become a core need of consumers who purchase luxury goods through multiple channels. At present, the main method of luxury-goods authentication at home and abroad is manual authentication, and there are as yet no papers or patents on authenticating luxury goods by deep learning methods.
Although manual authentication of luxury goods has made some progress, many problems remain. First, there is currently no professional qualification certificate for the luxury-goods authentication industry in China, no basic requirements for the knowledge, skills, and abilities that practitioners must possess, and no formal training system, so the professional quality of practitioners is hard to guarantee. Second, training luxury-goods authenticators is costly, qualified authenticators are few, and it is difficult to meet the ever-growing market demand. Moreover, luxury-goods authentication is time-sensitive and requires a large store of knowledge, while a person's judgment of whether an item is genuine is highly subjective; different people may make different judgments according to their own knowledge, experience, mood, and so on, making the accuracy of the authentication results hard to guarantee. Third, the manual luxury-goods authentication industry has large profit margins and can easily form gray interest chains, making it difficult to win consumers' trust. Compared with time-consuming and laborious solutions such as visiting a physical store or mailing the item to a platform for authentication, obtaining a result directly from pictures better suits people's habits today.
Deep learning is a class of machine learning algorithms for representation learning of data. It originated from research on artificial neural networks; its motivation is to build neural networks that simulate the human brain for analysis and learning, interpreting image, sound, and text data by imitating the mechanisms of the human brain. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of the data. In 2006, Hinton et al. proposed an unsupervised greedy layer-wise training algorithm based on deep belief networks (DBN), starting a new wave of deep learning in academia and industry. At present, deep learning has become a representative machine learning technique and is widely applied in fields such as images, sound, and text.
Therefore, how to use deep learning technology to authenticate luxury goods so as to improve authentication accuracy and reduce authentication cost has become a technical problem to be urgently solved by practitioners in this field.
Summary of the Invention
In view of the above shortcomings of the prior art, the purpose of the present application is to provide an item authentication method, system, device, and storage medium, to solve the prior-art problem that the authenticity of an item cannot be quickly determined.
To achieve the above and other related purposes, a first aspect of the present application provides an item authentication method comprising the following steps: acquiring multiple images taken of an item to be authenticated, each image containing at least one target authentication point; recognizing the multiple images with multiple trained convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point; weighting and summing the single-point scores of the multiple target authentication points according to weights obtained by testing on the training set, to obtain a total score; and authenticating the item as genuine or counterfeit according to the single-point scores and/or the total score.
In some implementations of the first aspect, the method further comprises a step of preprocessing the multiple images.
In some implementations of the first aspect, the preprocessing step comprises one or more of resizing, proportional scaling, noise addition, flipping, rotation, translation, scale transformation, cropping, contrast transformation, and random channel shifting of the images.
In some implementations of the first aspect, the method further comprises a step of clustering at least one of the images.
In some implementations of the first aspect, the step of clustering at least one image comprises clustering at least one of the images by feature similarity with a VGG19 feature-classification model, to determine the training model(s) for the multiple images.
In some implementations of the first aspect, the multiple trained convolutional neural network models corresponding to each target authentication point comprise at least two of: a VGG19 network model, a RESNET54 network model, and a WRESNET16 network model.
In some implementations of the first aspect, recognizing the multiple images with the multiple trained convolutional neural network models to obtain a single-point score for each target authentication point further comprises: saving the outputs and the extracted features of the multiple convolutional neural network models for the at least one target authentication point; and concatenating the features extracted by the multiple convolutional neural network models and further classifying them with a decision-tree algorithm, to obtain a single-point score for each target authentication point.
In some implementations of the first aspect, the step of authenticating the item according to the single-point scores and the total score comprises: presetting a first threshold and a second threshold; outputting a judgment that the item is counterfeit when a single-point score is below the first threshold; and, on condition that the single-point scores are above the first threshold, outputting a judgment that the item is counterfeit when the total score is below the second threshold.
In some implementations of the first aspect, the method further comprises a step of preventing overfitting.
In some implementations of the first aspect, the item is a luxury product.
A second aspect of the present application provides an item authentication system, comprising: a preprocessing module for acquiring multiple images taken of an item to be authenticated, each image containing at least one target authentication point; a recognition module for recognizing the multiple images with multiple trained convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point; and an evaluation module for weighting and summing the single-point scores of the multiple target authentication points according to weights obtained by testing on the training set to obtain a total score, and for authenticating the item as genuine or counterfeit according to the single-point scores and/or the total score.
In some implementations of the second aspect, the preprocessing module is further used to preprocess the multiple images.
In some implementations of the second aspect, the preprocessing module applies one or more of resizing, proportional scaling, noise addition, flipping, rotation, translation, scale transformation, cropping, contrast transformation, and random channel shifting to the images.
In some implementations of the second aspect, the preprocessing module is further used to cluster at least one of the images.
In some implementations of the second aspect, the preprocessing module clusters at least one of the images by feature similarity with a VGG19 feature-classification model, to determine the training model(s) for the multiple images.
In some implementations of the second aspect, the multiple trained convolutional neural network models corresponding to each target authentication point comprise at least two of: a VGG19 network model, a RESNET54 network model, and a WRESNET16 network model.
In some implementations of the second aspect, the recognition module performs the following steps: saving the outputs and the extracted features of the multiple convolutional neural network models for the at least one target authentication point; and concatenating the features extracted by the multiple models and further classifying them with a decision-tree algorithm, to obtain a single-point score for each target authentication point.
In some implementations of the second aspect, the evaluation module performs the following steps: presetting a first threshold and a second threshold; outputting a judgment that the item is counterfeit when a single-point score is below the first threshold; and, on condition that the single-point scores are above the first threshold, outputting a judgment that the item is counterfeit when the total score is below the second threshold.
In some implementations of the second aspect, the system further comprises a training module for training each convolutional neural network model, wherein the training module performs a step of preventing overfitting.
In some implementations of the second aspect, the item is a luxury product.
A third aspect of the present application provides an item authentication device, comprising: a memory for storing program code; and one or more processors; wherein the processors invoke the program code stored in the memory to perform the item authentication method of any implementation of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program for authenticating items, characterized in that, when executed, the computer program implements the item authentication method of any implementation of the first aspect.
As described above, the item authentication method, system, device, and storage medium of the present application use convolutional neural networks to recognize the target authentication points describing an item in each image, construct an evaluation mechanism, and determine the item's authenticity by evaluating the recognition results, effectively solving the problem that counterfeit goods cannot be quickly authenticated and greatly reducing authenticity disputes arising from electronic transactions and other remote shopping channels.
Brief Description of the Drawings
Fig. 1 is a flowchart of the item authentication method of the present application in one embodiment.
Figs. 2-5 show multiple images, containing target authentication points, of a bag as the item to be authenticated.
Fig. 6 is a block diagram of the item authentication system of the present application in one embodiment.
Fig. 7 is a schematic structural diagram of an embodiment of the item authentication device of the present application.
Detailed Description of the Embodiments
The following specific embodiments illustrate the implementation of the present application; those skilled in the art can readily understand its other advantages and effects from the contents disclosed in this specification.
Although in some instances the terms first, second, etc. are used herein to describe various elements, these elements should not be limited by these terms. The terms are used only to distinguish one element from another. For example, a first preset threshold could be called a second preset threshold, and similarly a second preset threshold could be called a first preset threshold, without departing from the scope of the various described embodiments. The first preset threshold and the second preset threshold both describe a threshold, but unless the context clearly indicates otherwise, they are not the same preset threshold. The same applies to, e.g., a first volume and a second volume.
Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" used herein are interpreted as inclusive, meaning any one or any combination; thus "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition arises only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
The growth of electronic trading platforms has not only enriched sales channels for goods but has also provided a convenient sales platform for counterfeits, so that users must worry about the authenticity of what they buy. Ordinary consumers do not easily possess authentication expertise, and likewise, as a sales platform, an electronic trading platform cannot easily verify authenticity. For fashionable luxury goods such as bags and watches, an authenticator must have extensive product knowledge and be familiar with the materials, workmanship, and thematic characteristics of the items; manual authentication therefore can no longer satisfy high-end buyers' demands for accuracy and timeliness.
To this end, the present application provides an item authentication method. The method is executed by an item authentication system, which is implemented through software and hardware in a computer device.
The computer device may be any computing device with mathematical/logical operation and data-processing capabilities, including but not limited to a personal computer, a single server, a server cluster, a distributed server, a cloud server, and so on. The cloud server includes public-cloud and private-cloud servers, and the public or private cloud provides Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), etc. Examples of cloud computing platforms include Alibaba Cloud, Amazon (AWS), Baidu Cloud, Tencent Cloud, and the like.
The cloud server provides at least one remote picture-upload service, including but not limited to at least one of: an item-listing service, an item-authentication service, and an item-complaint service. The item-listing service is, for example, a merchant uploading pictures and text descriptions of goods for sale; the item-authentication service is, for example, a buyer uploading pictures of an item to verify its authenticity; and the item-complaint service is, for example, a buyer uploading pictures of an item for a third party (such as the e-commerce platform) to mediate when buyer and merchant cannot reach an agreement.
The computer device comprises at least: a memory, one or more processors, an I/O interface, a network interface, an input structure, etc.
The memory stores the multiple images of the item to be authenticated and at least one program. The memory may include high-speed random-access memory, and may also include non-volatile memory such as one or more magnetic-disk storage devices, flash-memory devices, or other non-volatile solid-state storage devices.
In some embodiments the memory may also include memory remote from the one or more processors, such as network-attached storage accessed via an RF circuit or external port and a communication network, where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof. A memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
The one or more processors are operatively coupled to the network interface to communicatively couple the computing device to a network, e.g., a local area network (LAN) and/or a wide area network (WAN). The processors are also operatively coupled to I/O ports and input structures: the I/O ports allow the computing device to interact with various other electronic devices, and the input structures allow a user to interact with the computing device, and may include buttons, a keyboard, a mouse, a touchpad, and so on. In addition, an electronic display may include a touch component that facilitates user input by detecting the occurrence and/or position of an object touching its screen.
Alternatively, the item authentication system may be implemented as an application (APP) installed on a smart terminal: the terminal captures multiple images of the item to be authenticated and uploads them over a wireless network to a cloud server, which performs the authentication and returns the result.
The smart terminal is, for example, a portable or wearable electronic device such as, but not limited to, a smartphone, tablet, smart watch, smart glasses, or personal digital assistant (PDA). It should be understood that the portable electronic devices described in the embodiments are only application examples; the device may have more or fewer components than illustrated, or a different component configuration. The various illustrated components may be implemented in hardware, software, or a combination of both, including one or more signal-processing and/or application-specific integrated circuits.
The smart terminal includes a memory, a memory controller, one or more processors (CPU), a peripheral interface, an RF circuit, an audio circuit, a speaker, a microphone, an input/output (I/O) subsystem, a touch screen, other output or control devices, and an external port; these components communicate via one or more communication buses or signal lines.
The smart terminal supports various applications, such as one or more of: drawing, presentation, word-processing, website-creation, disc-editing, spreadsheet, game, telephone, video-conferencing, e-mail, instant-messaging, fitness, photo-management, digital-camera, digital-video-camera, web-browsing, digital-music-player, and/or digital-video-player applications.
Referring to Fig. 1, a flowchart of the item authentication method of the present application in one embodiment is shown. The one or more processors load at least one program and the images from the memory to execute the method.
In step S110, multiple images taken of the item to be authenticated are acquired, each image containing at least one target authentication point. In one embodiment the images are photos or pictures in a computer image-storage format, e.g., bmp, jpg, png, tiff, gif, pcx, tga, exif, fpx, svg, psd, cdr, pcd, dxf, ufo, eps, ai, raw, WMF, etc.
Here, the item authentication system provides an image-acquisition interface, and the user supplies multiple images of the item according to the prompts in the interface; each of the acquired images contains at least one target authentication point.
In some implementations the system can authenticate multiple items within the same product category, e.g., a brand's backpacks, travel bags, and crossbody bags; and/or items of different categories, e.g., backpacks, travel bags, shoulder bags, women's watches, men's watches, pocket watches, and various cosmetics. It should be noted that the authentication points differ from item to item: for a luxury women's bag they generally include the leather tag, the zipper, and the label carrying the trademark; for a zisha (purple-clay) teapot they typically lie at the joints between the body and the handle or spout, and at the lid. The embodiments of the present application take a women's bag as the example.
The target authentication points include, but are not limited to, at least one of: an identifying part of the item, an accessory of the item, and the overall appearance of the item. Identifying parts include, but are not limited to, printed marks on the item (such as the trademark), zippers and pulls, seams between the bag body and straps, watch bands, watch faces, etc.; accessories include, for example, a bag's leather tag or a watch-chain sample; the overall appearance is, for example, the whole item shown from at least one viewing angle. To ensure image clarity and a sufficient number of authentication points, the system may prompt the user to provide overall and close-up images from as many angles as possible, and then executes step S120.
In some implementations the system screens the acquired images to keep those meeting at least one requirement on clarity, resolution, or target authentication points. For example, the system discards images smaller than a preset size threshold; or it analyzes image sharpness and keeps images satisfying a preset sharpness condition. The system executes step S120 after screening.
In still other implementations, to facilitate the recognition of all acquired images (or those retained after screening) in subsequent steps, the method further includes preprocessing the multiple images. The preprocessing adapts the acquired images to the input expected by the neural network models to be invoked, and includes one or more of resizing, proportional scaling, noise addition, flipping, rotation, translation, scale transformation, cropping, contrast transformation, and random channel shifting of the images.
For example, the system rotates a received 550×300-pixel image to 300×550; or shrinks or crops a received 1000×800-pixel image and then rotates it to 300×550; or sharpens and adds noise to a beautified image to restore its original appearance. The system then executes step S120 to feed the preprocessed images into the neural network models. As further examples, the system detects the item's outline and rotates, translates, and crops the image according to the item's position within the whole image, minimizing background and producing an image of the size accepted by the subsequent convolutional neural networks; it may invert the grayscale of images to help suppress background or highlight item features; or, before feeding images into the convolutional neural network, it may apply random channel shifts by image color or grayscale and inject the images into different channels of the network.
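The orientation and size adjustments described above can be sketched as follows. This is a minimal illustration on a single-channel array; the 300×550 target, the noise scale, and the crop/pad strategy are assumed values for demonstration, not taken from the application.

```python
import numpy as np

def preprocess(img, target_hw=(300, 550), add_noise=False, rng=None):
    """Rotate an image whose orientation disagrees with the target,
    then center-anchor crop/zero-pad it to the target size."""
    h, w = target_hw
    # Rotate 90 degrees when portrait/landscape orientation mismatches.
    if (img.shape[0] > img.shape[1]) != (h > w):
        img = np.rot90(img)
    # Crop or zero-pad each axis to the target size (top-left anchored).
    out = np.zeros((h, w), dtype=float)
    ch, cw = min(h, img.shape[0]), min(w, img.shape[1])
    out[:ch, :cw] = img[:ch, :cw]
    if add_noise:
        rng = rng or np.random.default_rng(0)
        out = out + rng.normal(0.0, 1.0, out.shape)  # mild Gaussian noise
    return out
```

A real pipeline would operate on 3-channel images and use interpolated resizing rather than cropping, but the control flow is the same.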
In other implementations, the system can authenticate multiple items within the same product category (e.g., a brand's backpacks, travel bags, and crossbody bags) and/or items of different categories (e.g., backpacks, travel bags, shoulder bags, women's watches, men's watches, pocket watches, and various cosmetics). To improve authentication accuracy, the method further includes clustering the multiple images. Before executing step S120, the system clusters the acquired images by features such as the shape of the photographed item, in order to assign the item to one of the preset categories. In one specific example, to help the system pick suitable images for clustering, the image-acquisition interface provides category prompts per target authentication point so that the images uploaded by the user belong to the corresponding category. In another specific example, the system selects at least one of the acquired images for clustering to determine the product category of the item.
For example, the system selects from the multiple images the one showing the whole item and clusters it to determine the item's category; or it clusters all acquired images simultaneously and evaluates the per-image cluster assignments to determine the category of the item. The clustering may use a VGG19 feature-classification model, i.e., clustering at least one of the images by feature similarity with the VGG19 model to determine the training model(s) for the multiple images. For example, the system clusters at least one acquired image by feature similarity with the VGG19 feature-classification model to determine the item's category, and selects, based on the determined category, the multiple neural network models used in step S120.
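The category-assignment step above amounts to matching an image's feature vector against known category prototypes. The sketch below assumes feature vectors (e.g., from a VGG19 backbone, which is not reproduced here) are already available, and uses simple nearest-centroid assignment; the category names are illustrative.

```python
def nearest_category(feature, centroids):
    """Assign a feature vector to the closest known product category
    by squared Euclidean distance to each category centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(feature, centroids[c]))
```

For example, with hypothetical 2-D centroids `{"bag": [0, 0], "watch": [10, 10]}`, a feature near the origin is assigned to `"bag"`.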
In step S120, the multiple images are recognized by the multiple trained convolutional neural network models (network models for short) corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point.
Here, the system feeds each acquired image into different convolutional neural network models, so that each model recognizes at least one target authentication point and a single-point score is obtained for each recognizable point; multiple models correspond to the recognition of the same target authentication point.
In one example, each target authentication point corresponds to at least two preset convolutional neural network models, each used only to recognize that point: e.g., models A1 and A2 both recognize point B1, and models A3 and A4 recognize point B2. In another example, each point corresponds to at least two preset models, and a preset model may recognize at least one point: e.g., model A5 recognizes point B1, model A6 recognizes points B1 and B2, and model A7 recognizes point B2.
The multiple convolutional neural network models corresponding to each target authentication point include, but are not limited to, at least two of: a VGG19 network model, a RESNET54 network model, and a WRESNET16 network model.
To improve each model's accuracy in recognizing its corresponding target authentication point, the models are trained on sample images. Sample images can be obtained by photographing genuine, defective, and counterfeit items of the preset brand and applying image processing to the photos to enlarge the sample set; for example, items are photographed from multiple angles and the resulting photos are augmented (e.g., with image_aug-style augmentation) to increase the number of samples. The models are then trained on the samples and their structures adjusted according to the training results. Taking the RESNET54 network model as an example: 54 residual blocks are preset; within each block the convolution kernels are 3×3 with stride 1×1; the first 18 residual blocks use 16 kernels, the middle 18 use 32, and the last 18 use 64; in addition, each RESNET residual block contains a convolutional layer with a 3×3 kernel and 2×2 stride, and each residual block merges three parts as its output recognition result. The RESNET54 model is trained with these settings and its parameters are obtained via the backpropagation algorithm.
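The kernel/stride/filter plan described above can be written out explicitly. This sketch only enumerates the per-block configuration (it does not build the network), so it can be checked against the text: 54 blocks, 3×3 kernels, stride 1×1, and 16/32/64 filters in the first/middle/last 18 blocks.

```python
def resnet54_plan():
    """Per-block configuration of the RESNET54 variant described above.
    Each block also contains a 3x3 stride-2x2 convolution on its
    downsampling path, recorded here as `down_kernel`/`down_stride`."""
    filters = [16] * 18 + [32] * 18 + [64] * 18
    return [
        {"kernel": (3, 3), "stride": (1, 1), "filters": f,
         "down_kernel": (3, 3), "down_stride": (2, 2)}
        for f in filters
    ]
```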
Training each network model also includes steps to prevent overfitting, e.g., adding dropout layers, setting weight_decay, and using early stopping. Dropout ignores some nodes with a certain probability while optimizing the network, so that the final network model's structure can be viewed as an ensemble of many networks; weight decay adjusts the influence of model complexity on the loss function; and early stopping ends training early when the training error has not decreased after a number of epochs.
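Of the three techniques above, early stopping is the one with framework-independent logic, sketched below; the patience value of 3 epochs is an assumed default, not a figure from the application (dropout and weight decay would be set as layer/optimizer options in the training framework).

```python
def early_stop(val_errors, patience=3):
    """Return the epoch index at which training stops: the first epoch
    after which the error has failed to improve for `patience`
    consecutive epochs, or the last epoch if that never happens."""
    best, wait = float("inf"), 0
    for i, e in enumerate(val_errors):
        if e < best:
            best, wait = e, 0  # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return i       # no improvement for `patience` epochs
    return len(val_errors) - 1
```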
In addition, at least one network model contains a normalization layer to increase model capacity; for example, for the product category being authenticated, the network layers of the models used to recognize that category's target authentication points are normalized per batch.
The result of each convolutional neural network model's recognition of a target authentication point describes the likelihood that the input image contains that single point, e.g., on a preset scoring scale. Referring to Figs. 2-5, which show multiple images, containing target authentication points, of a bag to be authenticated: the points contained in the acquired images include the overall appearance, the zipper and pull, the trademark, and the leather tag, and each point is recognized independently by three convolutional neural network models, so each point receives three single-point scores output by the three models. After the single-point scores are obtained, step S140 may be executed, or steps S130 and S140 as shown in Fig. 1.
In some implementations, the outputs of the network models do not directly represent the single-point score of the corresponding target authentication point; the item authentication system makes a further decision based on all recognition results for the same point to obtain its single-point score. To this end, step S120 includes the following steps: saving the outputs and the extracted features of the multiple convolutional neural network models for the at least one target authentication point; and concatenating the features extracted by the multiple models and further classifying them with a decision-tree algorithm, to obtain a single-point score for each target authentication point.
Here, the system captures and saves the information output by each model's hidden layers as features; for example, a fully connected layer in each model's hidden layers outputs its data as features to the system. The system concatenates the features provided by the models that recognize the same target authentication point, e.g., concatenating the three features (three matrices) corresponding to one point, and then feeds the concatenated features into the decision-tree algorithm to obtain the point's single-point score. An example of such a decision-tree algorithm is XGBoost.
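The feature-fusion step above can be sketched as follows. The concatenation mirrors the description; the downstream classifier (XGBoost in the text) is replaced here by a one-node decision stump purely for illustration, and the split dimension/threshold are assumed values.

```python
import numpy as np

def fuse_features(feature_list):
    """Flatten and concatenate the hidden-layer features extracted by
    several CNNs for the same authentication point into one vector."""
    return np.concatenate([np.ravel(f) for f in feature_list])

def stump_score(x, dim=0, thr=0.0):
    """Hypothetical one-node decision tree standing in for the
    gradient-boosted trees: score 1.0 on one side of the split,
    0.0 on the other."""
    return 1.0 if x[dim] > thr else 0.0
```

In practice `fuse_features` would feed an `xgboost` classifier trained on fused features of genuine and counterfeit samples.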
In step S130, the single-point scores of the multiple target authentication points are weighted and summed according to weights obtained by testing on the training set, yielding a total score.
During network model training, a weight is set for each network model according to its accuracy in recognizing the corresponding target authentication point. When the acquired images have been recognized per step S120 and single-point scores obtained, the authentication system weights and sums the single-point scores of the multiple target authentication points by these weights to obtain the total score. For example, if the two network models A1 and A2 for point B1 of the item have weights w1 and w2 and output single-point scores P1 and P2, and the two models A3 and A4 for point B2 have weights w3 and w4 and output scores P3 and P4, then the total score obtained by the system is (w1×P1 + w2×P2 + w3×P3 + w4×P4).
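The weighted sum in the example above is straightforward to write down; the score and weight values below are illustrative, not from the application.

```python
def total_score(single_scores, weights):
    """Weighted sum of per-point scores, where the weights are
    accuracies measured on the training/validation set."""
    assert len(single_scores) == len(weights)
    return sum(w * p for w, p in zip(weights, single_scores))
```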
The single-point scores of the target authentication points obtained via the decision-tree algorithm may be summed with equal weights; e.g., the total score is simply the sum of all single-point scores.
In step S140, the item is authenticated as genuine or counterfeit according to the single-point scores and/or the total score. Here, the system may set up multiple authentication stages based on the single-point scores and the total score, and traverse the stages in a preset order until the result of one stage yields the authenticity verdict.
For example, the multiple stages include checking the single-point scores one by one and checking the total score, in the preset order of first checking every target authentication point's single-point scores in turn and finally checking the total score. The system presets a threshold for each stage and, following the stage order, moves to the next stage whenever a stage passes; only after all stages have passed is the item finally judged genuine. Conversely, if any stage yields a counterfeit result, the item is judged counterfeit.
In some implementations, the system presets a first threshold and a second threshold; it outputs a judgment that the item is counterfeit when a single-point score is below the first threshold; and, on condition that the single-point scores are above the first threshold, it outputs a judgment that the item is counterfeit when the total score is below the second threshold.
For example, the item to be authenticated has two target authentication points, each with three single-point scores. The system checks the six single-point scores one by one: when the proportion of single-point scores below the first threshold exceeds a preset proportion threshold, it outputs a counterfeit judgment; otherwise, having judged the item genuine on the basis of the single-point scores, it checks whether the total score is below the second threshold, outputting a counterfeit judgment if so and a genuine judgment otherwise. The proportion threshold lies in [0, 1].
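The two-stage decision just described can be sketched as follows. The default proportion threshold of 0.5 and all threshold values in the example are assumed for illustration; the application leaves them as tunable presets.

```python
def judge(single_scores, total, first_thr, second_thr, ratio_thr=0.5):
    """Stage 1: if the fraction of single-point scores below
    `first_thr` exceeds `ratio_thr`, the item is counterfeit.
    Stage 2: otherwise compare the total score against `second_thr`."""
    low = sum(1 for s in single_scores if s < first_thr)
    if low / len(single_scores) > ratio_thr:
        return "fake"
    return "fake" if total < second_thr else "genuine"
```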
It should be noted that the item to be authenticated may have only one target authentication point, in which case its single-point score is the total score and step S130 reduces to comparing the single-point score against a threshold; this approach should be regarded as a specific example under the technical idea of the present application.
It should also be noted that the above ways of authenticating the item by the single-point scores and/or the total score are merely examples; any discrimination strategy that obtains single-point and/or total scores in the manner described herein and uses at least one of those scores for authenticity discrimination should be regarded as a specific example under the technical idea of the present application.
The image-based item authentication approach provided by the present application uses convolutional neural networks to recognize the target authentication points describing the item in each image, constructs an evaluation mechanism, and determines the item's authenticity by evaluating the recognition results, effectively solving the problem that counterfeit goods cannot be quickly authenticated and greatly reducing authenticity disputes arising from electronic transactions and other remote shopping channels.
Referring to Fig. 6, a block diagram of the item authentication system of the present application in one embodiment is shown. The item authentication system 1 comprises a preprocessing module 11, a recognition module 12, and an evaluation module 13.
The preprocessing module 11 acquires multiple images taken of the item to be authenticated, each image containing at least one target authentication point. In one embodiment the images are photos or pictures in a computer image-storage format, e.g., bmp, jpg, png, tiff, gif, pcx, tga, exif, fpx, svg, psd, cdr, pcd, dxf, ufo, eps, ai, raw, WMF, etc.
Here, the preprocessing module 11 provides an image-acquisition interface through which the user supplies multiple images of the item according to the prompts; each of the acquired images contains at least one target authentication point.
In some implementations the system can authenticate multiple items within the same product category, e.g., a brand's backpacks, travel bags, and crossbody bags; and/or items of different categories, e.g., backpacks, travel bags, shoulder bags, women's watches, men's watches, pocket watches, and various cosmetics. As noted, the authentication points differ from item to item: for a luxury women's bag they generally include the leather tag, the zipper, and the label carrying the trademark; for a zisha teapot they typically lie at the joints between the body and the handle or spout, and at the lid. The embodiments of the present application take a women's bag as the example.
The target authentication points include, but are not limited to, at least one of: an identifying part of the item, an accessory of the item, and the overall appearance of the item. Identifying parts include printed marks on the item (such as the trademark), zippers and pulls, seams between the bag body and straps, watch bands, watch faces, etc.; accessories include, for example, a bag's leather tag or a watch-chain sample; the overall appearance is, for example, the whole item shown from at least one viewing angle. To ensure image clarity and a sufficient number of authentication points, the preprocessing module 11 may prompt the user to provide overall and close-up images from as many angles as possible, and then invokes the recognition module 12.
In some implementations the preprocessing module 11 screens the acquired images to keep those meeting at least one requirement on clarity, resolution, or target authentication points. For example, it discards images smaller than a preset size threshold, or analyzes image sharpness and keeps images satisfying a preset sharpness condition; it executes step S120 after screening.
In still other implementations, to facilitate the recognition of all acquired images (or those retained after screening) in subsequent steps, the method further includes preprocessing the multiple images.
The preprocessing adapts the acquired images to the input expected by the neural network models to be invoked, and includes one or more of resizing, proportional scaling, noise addition, flipping, rotation, translation, scale transformation, cropping, contrast transformation, and random channel shifting. For example, the preprocessing module 11 rotates a received 550×300-pixel image to 300×550; shrinks a received 1000×800-pixel image and rotates it to 300×550; or sharpens and adds noise to a beautified image to restore its original appearance, then executes step S120 to feed the preprocessed images into the neural network models. It may also detect the item's outline and rotate, translate, and crop the image according to the item's position within the whole image, minimizing background and producing an image of the size accepted by the subsequent convolutional neural networks; invert the grayscale of images to suppress background or highlight item features; or, before feeding images into the convolutional neural network, apply random channel shifts by image color or grayscale and inject the images into different channels of the network.
In other implementations, the system can authenticate multiple items within the same product category (e.g., a brand's backpacks, travel bags, and crossbody bags) and/or items of different categories (e.g., backpacks, travel bags, shoulder bags, women's watches, men's watches, pocket watches, and various cosmetics). To improve authentication accuracy, the method further includes clustering the multiple images.
Before step S120 is executed, the preprocessing module 11 clusters the acquired images by features such as the shape of the photographed item, in order to assign the item to one of the preset categories. In one specific example, to help the preprocessing module 11 pick suitable images for clustering, the image-acquisition interface provides category prompts per target authentication point so that the images uploaded by the user belong to the corresponding category.
In another specific example, the preprocessing module 11 selects at least one of the acquired images for clustering to determine the product category of the item. For example, it selects from the multiple images the one showing the whole item and clusters it to determine the item's category; or it clusters all acquired images simultaneously and evaluates the per-image cluster assignments to determine the category. The clustering may use a VGG19 feature-classification model, i.e., clustering at least one of the images by feature similarity with the VGG19 model to determine the training model(s) for the multiple images. For example, the preprocessing module 11 clusters at least one acquired image by feature similarity with the VGG19 feature-classification model to determine the item's category, and selects, based on the determined category, the multiple neural network models used by the recognition module 12.
The recognition module 12 recognizes the multiple images with the multiple trained convolutional neural network models (network models for short) corresponding to the at least one target authentication point, obtaining a single-point score for each target authentication point.
Here, the recognition module 12 feeds each acquired image into different convolutional neural network models, so that each model recognizes at least one target authentication point and a single-point score is obtained for each recognizable point; multiple models correspond to the recognition of the same target authentication point.
In one example, each target authentication point corresponds to at least two preset convolutional neural network models, each used only to recognize that point: e.g., models A1 and A2 both recognize point B1, and models A3 and A4 recognize point B2. In another example, each point corresponds to at least two preset models, and a preset model may recognize at least one point: e.g., model A5 recognizes point B1, model A6 recognizes points B1 and B2, and model A7 recognizes point B2.
The multiple convolutional neural network models corresponding to each target authentication point include, but are not limited to, at least two of: a VGG19 network model, a RESNET54 network model, and a WRESNET16 network model. To improve each model's accuracy in recognizing its corresponding target authentication point, the models are trained on sample images. Sample images can be obtained by photographing genuine, defective, and counterfeit items of the preset brand and applying image processing to the photos to enlarge the sample set; for example, items are photographed from multiple angles and the resulting photos are augmented (e.g., with image_aug-style augmentation) to increase the number of samples. The models are then trained on the samples and their structures adjusted according to the training results. Taking the RESNET54 network model as an example: 54 residual blocks are preset; within each block the convolution kernels are 3×3 with stride 1×1; the first 18 residual blocks use 16 kernels, the middle 18 use 32, and the last 18 use 64; in addition, each RESNET residual block contains a convolutional layer with a 3×3 kernel and 2×2 stride, and each residual block merges three parts as its output recognition result. The RESNET54 model is trained with these settings and its parameters are obtained via the backpropagation algorithm.
Training each network model also includes steps to prevent overfitting, e.g., adding dropout layers, setting weight_decay, and using early stopping. Dropout ignores some nodes with a certain probability while optimizing the network, so that the final network model's structure can be viewed as an ensemble of many networks; weight decay adjusts the influence of model complexity on the loss function; and early stopping ends training early when the training error has not decreased after a number of epochs.
In addition, at least one network model contains a normalization layer to increase model capacity; for example, for the product category being authenticated, the network layers of the models used to recognize that category's target authentication points are normalized per batch.
The result of each convolutional neural network model's recognition of a target authentication point describes the likelihood that the input image contains that single point, e.g., on a preset scoring scale. Referring to Figs. 2-5, which show multiple images, containing target authentication points, of a bag to be authenticated: the points contained in the acquired images include the overall appearance, the zipper and pull, the trademark, and the clasp, and each point is recognized independently by three convolutional neural network models, so each point receives three single-point scores output by the three models.
In some implementations, the outputs of the network models do not directly represent the single-point score of the corresponding target authentication point; the recognition module 12 makes a further decision based on all recognition results for the same point to obtain its single-point score. To this end, the recognition module 12 performs the following steps: saving the outputs and the extracted features of the multiple convolutional neural network models for the at least one target authentication point; and concatenating the features extracted by the multiple models and further classifying them with a decision-tree algorithm, to obtain a single-point score for each target authentication point.
Here, the recognition module 12 captures and saves the information output by each model's hidden layers as features; for example, a fully connected layer in each model's hidden layers outputs its data as features to the recognition module 12. The recognition module 12 concatenates the features provided by the models that recognize the same target authentication point, e.g., concatenating the three features (three matrices) corresponding to one point, and then feeds the concatenated features into the decision-tree algorithm to obtain the point's single-point score. An example of such a decision-tree algorithm is XGBoost.
The evaluation module 13 weights and sums the single-point scores of the multiple target authentication points according to weights obtained by testing on the training set, yielding a total score.
During network model training, a weight is set for each network model according to its accuracy in recognizing the corresponding target authentication point. When the acquired images have been recognized by the recognition module 12 and single-point scores obtained, the authentication system 1 weights and sums the single-point scores of the multiple target authentication points by these weights to obtain the total score. For example, if the two network models A1 and A2 for point B1 of the item have weights w1 and w2 and output single-point scores P1 and P2, and the two models A3 and A4 for point B2 have weights w3 and w4 and output scores P3 and P4, then the total score obtained by the evaluation module 13 is (w1×P1 + w2×P2 + w3×P3 + w4×P4).
The single-point scores of the target authentication points obtained via the decision-tree algorithm may be summed with equal weights; e.g., the total score is simply the sum of all single-point scores.
The evaluation module 13 also authenticates the item as genuine or counterfeit according to the single-point scores and/or the total score. Here, the evaluation module 13 may set up multiple authentication stages based on the single-point scores and the total score, and traverse the stages in a preset order until the result of one stage yields the authenticity verdict. For example, the multiple stages include checking the single-point scores one by one and checking the total score, in the preset order of first checking every target authentication point's single-point scores in turn and finally checking the total score. The evaluation module 13 presets a threshold for each stage and, following the stage order, moves to the next stage whenever a stage passes; only after all stages have passed is the item finally judged genuine. Conversely, if any stage yields a counterfeit result, the item is judged counterfeit.
In some implementations, the evaluation module 13 presets a first threshold and a second threshold; it outputs a judgment that the item is counterfeit when a single-point score is below the first threshold; and, on condition that the single-point scores are above the first threshold, it outputs a judgment that the item is counterfeit when the total score is below the second threshold.
For example, the item to be authenticated has two target authentication points, each with three single-point scores. The evaluation module 13 checks the six single-point scores one by one: when the proportion of single-point scores below the first threshold exceeds a preset proportion threshold, it outputs a counterfeit judgment; otherwise, having judged the item genuine on the basis of the single-point scores, it checks whether the total score is below the second threshold, outputting a counterfeit judgment if so and a genuine judgment otherwise. The proportion threshold lies in [0, 1].
It should be noted that the item to be authenticated may have only one target authentication point, in which case its single-point score is the total score and the evaluation module 13 reduces to comparing the single-point score against a threshold; this approach should be regarded as a specific example under the technical idea of the present application.
It should also be noted that the above ways of authenticating the item by the single-point scores and/or the total score are merely examples; any discrimination strategy that obtains single-point and/or total scores in the manner described herein and uses at least one of those scores for authenticity discrimination should be regarded as a specific example under the technical idea of the present application.
Referring to Fig. 7, a schematic structural diagram of embodiment one of the item authentication device of the present application is shown. As illustrated, the computer device 2 of this embodiment mainly comprises a memory 21, one or more processors 22, and one or more programs stored in the memory 21; the memory 21 stores execution instructions, and when the computer device 2 runs, the processors 22 communicate with the memory 21.
The one or more programs are stored in the memory and configured to be executed by the one or more processors; the one or more processors execute the instructions so that the electronic device performs the item authentication method described above, i.e., the processors 22 execute the instructions so that the computer device 2 performs the method shown in Fig. 1, whereby the authenticity of items can be determined through image recognition.
It should be understood that, in the various embodiments of the present application, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's solution.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
The present application further provides a computer-readable/writable storage medium storing a computer program for authenticating items; when executed by a processor, the program implements the steps of the above item authentication method, i.e., the steps described with Fig. 1.
If implemented as software functional units and sold or used as independent products, the functions may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the embodiments provided herein, the computer-readable/writable storage medium may include read-only memory (ROM), random-access memory (RAM), EEPROM, CD-ROM or other optical-disc storage, magnetic-disk storage or other magnetic storage devices, flash memory, USB drives, removable hard disks, or any other medium that can store desired program code in the form of instructions or data structures and can be accessed by a computer. Also, any connection may properly be termed a computer-readable medium: if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable/writable storage media and data storage media do not include connections, carriers, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers.
As stated above, the image-based item authentication approach provided by the present application uses convolutional neural networks to recognize the target authentication points describing the item in each image, constructs an evaluation mechanism, and determines the item's authenticity by evaluating the recognition results, effectively solving the problem that counterfeit goods cannot be quickly authenticated and greatly reducing authenticity disputes arising from electronic transactions and other remote shopping channels.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes completed by those with ordinary knowledge in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the claims of the present application.

Claims (22)

  1. An item authentication method, characterized by comprising the following steps:
    acquiring multiple images taken of an item to be authenticated, each image containing at least one target authentication point;
    recognizing the multiple images with multiple trained convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point;
    weighting and summing the single-point scores of the multiple target authentication points according to weights obtained by testing on the training set, to obtain a total score; and
    authenticating the item as genuine or counterfeit according to the single-point scores and/or the total score.
  2. The item authentication method of claim 1, characterized by further comprising a step of preprocessing the multiple images.
  3. The item authentication method of claim 2, characterized in that the preprocessing step comprises one or more of resizing, proportional scaling, noise addition, flipping, rotation, translation, scale transformation, cropping, contrast transformation, and random channel shifting of the images.
  4. The item authentication method of claim 1, characterized by further comprising a step of clustering at least one of the images.
  5. The item authentication method of claim 4, characterized in that the step of clustering at least one image comprises clustering at least one of the images by feature similarity with a VGG19 feature-classification model, to determine the training model(s) for the multiple images.
  6. The item authentication method of claim 1, characterized in that the multiple trained convolutional neural network models corresponding to each target authentication point comprise at least two of: a VGG19 network model, a RESNET54 network model, and a WRESNET16 network model.
  7. The item authentication method of claim 1, characterized in that recognizing the multiple images with the multiple trained convolutional neural network models corresponding to the at least one target authentication point to obtain a single-point score for each target authentication point further comprises:
    saving the outputs and the extracted features of the multiple convolutional neural network models for the at least one target authentication point; and
    concatenating the features extracted by the multiple convolutional neural network models and further classifying them with a decision-tree algorithm, to obtain a single-point score for each target authentication point.
  8. The item authentication method of claim 1, characterized in that the step of authenticating the item according to the single-point scores and the total score comprises:
    presetting a first threshold and a second threshold;
    outputting a judgment that the item is counterfeit when a single-point score is below the first threshold; and
    on condition that the single-point scores are above the first threshold, outputting a judgment that the item is counterfeit when the total score is below the second threshold.
  9. The item authentication method of claim 1, characterized by further comprising a step of preventing overfitting.
  10. The item authentication method of claim 1, characterized in that the item is a luxury product.
  11. An item authentication system, characterized by comprising:
    a preprocessing module for acquiring multiple images taken of an item to be authenticated, each image containing at least one target authentication point;
    a recognition module for recognizing the multiple images with multiple trained convolutional neural network models corresponding to the at least one target authentication point, to obtain a single-point score for each target authentication point;
    an evaluation module for weighting and summing the single-point scores of the multiple target authentication points according to weights obtained by testing on the training set to obtain a total score, and for authenticating the item as genuine or counterfeit according to the single-point scores and/or the total score.
  12. The item authentication system of claim 11, characterized in that the preprocessing module is further used to preprocess the multiple images.
  13. The item authentication system of claim 12, characterized in that the preprocessing module applies one or more of resizing, proportional scaling, noise addition, flipping, rotation, translation, scale transformation, cropping, contrast transformation, and random channel shifting to the images.
  14. The item authentication system of claim 11, characterized in that the preprocessing module is further used to cluster at least one of the images.
  15. The item authentication system of claim 14, characterized in that the preprocessing module clusters at least one of the images by feature similarity with a VGG19 feature-classification model, to determine the training model(s) for the multiple images.
  16. The item authentication system of claim 11, characterized in that the multiple trained convolutional neural network models corresponding to each target authentication point comprise at least two of: a VGG19 network model, a RESNET54 network model, and a WRESNET16 network model.
  17. The item authentication system of claim 11, characterized in that the recognition module performs the following steps:
    saving the outputs and the extracted features of the multiple convolutional neural network models for the at least one target authentication point; and
    concatenating the features extracted by the multiple convolutional neural network models and further classifying them with a decision-tree algorithm, to obtain a single-point score for each target authentication point.
  18. The item authentication system of claim 11, characterized in that the evaluation module performs the following steps:
    presetting a first threshold and a second threshold;
    outputting a judgment that the item is counterfeit when a single-point score is below the first threshold; and
    on condition that the single-point scores are above the first threshold, outputting a judgment that the item is counterfeit when the total score is below the second threshold.
  19. The item authentication system of claim 11, characterized by further comprising a training module for training each convolutional neural network model, wherein the training module performs a step of preventing overfitting.
  20. The item authentication system of claim 11, characterized in that the item is a luxury product.
  21. An item authentication device, characterized by comprising:
    a memory for storing program code;
    one or more processors;
    wherein the processors invoke the program code stored in the memory to perform the item authentication method of any one of claims 1-10.
  22. A computer-readable storage medium storing a computer program for authenticating items, characterized in that, when executed, the computer program implements the item authentication method of any one of claims 1-10.
PCT/CN2019/082575 2018-04-16 2019-04-12 物品鉴别方法、系统、设备及存储介质 WO2019201187A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19789249.0A EP3627392A4 (en) 2018-04-16 2019-04-12 METHOD, SYSTEM AND DEVICE FOR OBJECT IDENTIFICATION AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810346438.3 2018-04-16
CN201810346438.3A CN108520285B (zh) 2018-04-16 2018-04-16 物品鉴别方法、系统、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2019201187A1 true WO2019201187A1 (zh) 2019-10-24

Family

ID=63428839

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082575 WO2019201187A1 (zh) 2018-04-16 2019-04-12 物品鉴别方法、系统、设备及存储介质

Country Status (3)

Country Link
EP (1) EP3627392A4 (zh)
CN (1) CN108520285B (zh)
WO (1) WO2019201187A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242094A (zh) * 2020-02-25 2020-06-05 深圳前海达闼云端智能科技有限公司 商品识别方法、智能货柜及智能货柜系统
CN111738415A (zh) * 2020-06-17 2020-10-02 北京字节跳动网络技术有限公司 模型同步更新方法、装置及电子设备
CN111787351A (zh) * 2020-07-01 2020-10-16 百度在线网络技术(北京)有限公司 信息查询方法、装置、设备和计算机存储介质
CN112446829A (zh) * 2020-12-11 2021-03-05 成都颜创启新信息技术有限公司 图片方向调整方法、装置、电子设备及存储介质
CN112767389A (zh) * 2021-02-03 2021-05-07 紫东信息科技(苏州)有限公司 基于fcos算法的胃镜图片病灶识别方法及装置
CN112906656A (zh) * 2021-03-30 2021-06-04 自然资源部第三海洋研究所 水下照片珊瑚礁识别方法、系统及存储介质
CN113243018A (zh) * 2020-08-01 2021-08-10 商汤国际私人有限公司 目标对象的识别方法和装置
CN113762292A (zh) * 2020-06-03 2021-12-07 杭州海康威视数字技术股份有限公司 一种训练数据获取方法、装置及模型训练方法、装置
US20220051040A1 (en) * 2020-08-17 2022-02-17 CERTILOGO S.p.A Automatic method to determine the authenticity of a product
CN115100517A (zh) * 2022-06-08 2022-09-23 北京市农林科学院信息技术研究中心 田间昆虫识别方法及装置

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520285B (zh) * 2018-04-16 2021-02-09 图灵人工智能研究院(南京)有限公司 物品鉴别方法、系统、设备及存储介质
CN111382605B (zh) * 2018-12-28 2023-08-18 广州市百果园信息技术有限公司 视频内容审核方法、装置、存储介质和计算机设备
CN109784384B (zh) * 2018-12-28 2023-04-07 佛山科学技术学院 一种自动辨别商标真伪的方法及装置
CN109886081A (zh) * 2018-12-31 2019-06-14 武汉中海庭数据技术有限公司 一种车道线形点串提取方法和装置
CN109784394A (zh) * 2019-01-07 2019-05-21 平安科技(深圳)有限公司 一种翻拍图像的识别方法、系统及终端设备
CN110175623A (zh) * 2019-04-10 2019-08-27 阿里巴巴集团控股有限公司 基于图像识别的脱敏处理方法以及装置
CN110222728B (zh) * 2019-05-15 2021-03-12 图灵深视(南京)科技有限公司 物品鉴别模型的训练方法、系统及物品鉴别方法、设备
CN110136125B (zh) * 2019-05-17 2021-08-20 北京深醒科技有限公司 一种基于层次特征点匹配的图像复制移动伪造检测方法
CN110414995B (zh) * 2019-06-12 2023-04-14 丁俊锋 一种验证采用黄龙山紫砂泥料制成的紫砂作品是否存在的方法及专用服务器
CN112651410A (zh) * 2019-09-25 2021-04-13 图灵深视(南京)科技有限公司 用于鉴别的模型的训练、鉴别方法、系统、设备及介质
CN111046883B (zh) * 2019-12-05 2022-08-23 吉林大学 一种基于古钱币图像的智能评估方法及系统
FR3111218A1 (fr) * 2020-06-08 2021-12-10 Cypheme Procédé d’identification et dispositif de détection de la contrefaçon par traitement totalement automatisé des caractéristiques des produits photographiés par un appareil muni d’une caméra digitale
CN112115960A (zh) * 2020-06-15 2020-12-22 曹辉 一种收藏品鉴别方法和系统
CN111899035B (zh) * 2020-07-31 2024-04-30 西安加安信息科技有限公司 一种高端酒水鉴真的方法、移动终端和计算机存储介质
CN111967887B (zh) * 2020-08-21 2024-02-09 北京邮来邮网络科技有限公司 数字化邮票和钱币的远程鉴评方法、系统及计算机可读存储介质
CN112308053B (zh) * 2020-12-29 2021-04-09 北京易真学思教育科技有限公司 检测模型训练、判题方法、装置、电子设备及存储介质
CN112734735A (zh) * 2021-01-15 2021-04-30 广州富港生活智能科技有限公司 物品鉴别方法、装置、电子设备及存储介质
CN112785005B (zh) * 2021-01-22 2023-02-03 中国平安人寿保险股份有限公司 多目标任务的辅助决策方法、装置、计算机设备及介质
CN112712139B (zh) * 2021-03-29 2022-12-02 北京妃灵科技有限公司 一种基于图像处理的箱包识别方法、系统及存储介质
CN112712401A (zh) * 2021-03-29 2021-04-27 北京妃灵科技有限公司 一种多维度箱包价格获取方法、装置及系统
CN112712524B (zh) * 2021-03-29 2022-06-21 北京妃灵科技有限公司 基于深度学习模型的箱包品质检测方法、装置及存储介质
CN118140255A (zh) * 2021-10-01 2024-06-04 伊顿智能动力有限公司 产品原真性的确定
CN116935140B (zh) * 2023-08-04 2024-04-16 北京邮电大学 基于油墨的奢侈品鉴定模型训练方法、鉴定方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853364A (zh) * 2010-05-12 2010-10-06 中国艺术科技研究所 中国书画的防伪方法
CN105139516A (zh) * 2015-08-26 2015-12-09 上海古鳌电子科技股份有限公司 一种纸张类残破程度识别结构及纸币交易装置
US20170238056A1 (en) * 2014-01-28 2017-08-17 Google Inc. Identifying related videos based on relatedness of elements tagged in the videos
CN107895144A (zh) * 2017-10-27 2018-04-10 重庆工商大学 一种手指静脉图像防伪鉴别方法及装置
CN108520285A (zh) * 2018-04-16 2018-09-11 清华大学 物品鉴别方法、系统、设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751396A (zh) * 2008-11-28 2010-06-23 张政 一种兴趣点信息加工处理系统
US20170032285A1 (en) * 2014-04-09 2017-02-02 Entrupy Inc. Authenticating physical objects using machine learning from microscopic variations
CN108291876B (zh) * 2014-11-21 2022-03-15 盖伊·李·亨纳夫 用于检测产品的真实性的系统及方法
CN107180479B (zh) * 2017-05-15 2020-10-20 深圳怡化电脑股份有限公司 一种票据鉴别方法、装置、设备和存储介质
CN107463962B (zh) * 2017-08-08 2020-06-02 张天君 一种显微人工智能鉴定皮包的方法和系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853364A (zh) * 2010-05-12 2010-10-06 中国艺术科技研究所 中国书画的防伪方法
US20170238056A1 (en) * 2014-01-28 2017-08-17 Google Inc. Identifying related videos based on relatedness of elements tagged in the videos
CN105139516A (zh) * 2015-08-26 2015-12-09 上海古鳌电子科技股份有限公司 一种纸张类残破程度识别结构及纸币交易装置
CN107895144A (zh) * 2017-10-27 2018-04-10 重庆工商大学 一种手指静脉图像防伪鉴别方法及装置
CN108520285A (zh) * 2018-04-16 2018-09-11 清华大学 物品鉴别方法、系统、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3627392A4 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242094A (zh) * 2020-02-25 2020-06-05 深圳前海达闼云端智能科技有限公司 商品识别方法、智能货柜及智能货柜系统
CN113762292A (zh) * 2020-06-03 2021-12-07 杭州海康威视数字技术股份有限公司 一种训练数据获取方法、装置及模型训练方法、装置
CN113762292B (zh) * 2020-06-03 2024-02-02 杭州海康威视数字技术股份有限公司 一种训练数据获取方法、装置及模型训练方法、装置
CN111738415A (zh) * 2020-06-17 2020-10-02 北京字节跳动网络技术有限公司 模型同步更新方法、装置及电子设备
CN111738415B (zh) * 2020-06-17 2023-07-04 北京字节跳动网络技术有限公司 模型同步更新方法、装置及电子设备
CN111787351A (zh) * 2020-07-01 2020-10-16 百度在线网络技术(北京)有限公司 信息查询方法、装置、设备和计算机存储介质
CN111787351B (zh) * 2020-07-01 2022-09-06 百度在线网络技术(北京)有限公司 信息查询方法、装置、设备和计算机存储介质
CN113243018A (zh) * 2020-08-01 2021-08-10 商汤国际私人有限公司 目标对象的识别方法和装置
US20220051040A1 (en) * 2020-08-17 2022-02-17 CERTILOGO S.p.A Automatic method to determine the authenticity of a product
CN112446829A (zh) * 2020-12-11 2021-03-05 成都颜创启新信息技术有限公司 图片方向调整方法、装置、电子设备及存储介质
CN112767389A (zh) * 2021-02-03 2021-05-07 紫东信息科技(苏州)有限公司 基于fcos算法的胃镜图片病灶识别方法及装置
CN112906656A (zh) * 2021-03-30 2021-06-04 自然资源部第三海洋研究所 水下照片珊瑚礁识别方法、系统及存储介质
CN115100517A (zh) * 2022-06-08 2022-09-23 北京市农林科学院信息技术研究中心 田间昆虫识别方法及装置
CN115100517B (zh) * 2022-06-08 2023-10-24 北京市农林科学院信息技术研究中心 田间昆虫识别方法及装置

Also Published As

Publication number Publication date
EP3627392A4 (en) 2021-03-10
EP3627392A1 (en) 2020-03-25
CN108520285A (zh) 2018-09-11
CN108520285B (zh) 2021-02-09

Similar Documents

Publication Publication Date Title
WO2019201187A1 (zh) 物品鉴别方法、系统、设备及存储介质
Luce Artificial intelligence for fashion: How AI is revolutionizing the fashion industry
CN110121118A (zh) 视频片段定位方法、装置、计算机设备及存储介质
US11816773B2 (en) Music reactive animation of human characters
CN113892096A (zh) 动态媒体选择菜单
CN109543714A (zh) 数据特征的获取方法、装置、电子设备及存储介质
CN111814620A (zh) 人脸图像质量评价模型建立方法、优选方法、介质及装置
CN111491187B (zh) 视频的推荐方法、装置、设备及存储介质
KR102668172B1 (ko) 메시징 시스템에서의 증강 현실 경험을 위한 물리적 제품들의 식별
CN111126347B (zh) 人眼状态识别方法、装置、终端及可读存储介质
CN116704079B (zh) 图像生成方法、装置、设备及存储介质
KR20240052043A (ko) 대화 안내 증강 현실 경험
CN111598651A (zh) 物品捐赠系统、物品的捐赠方法、装置、设备及介质
CN115735231A (zh) 基于产品数据的增强现实内容
CN112651410A (zh) 用于鉴别的模型的训练、鉴别方法、系统、设备及介质
CN112329586A (zh) 基于情绪识别的客户回访方法、装置及计算机设备
KR102531572B1 (ko) 사용자를 위한 영상 메이킹 플랫폼 생성 방법
CN112150347A (zh) 从有限的修改后图像集合中学习的图像修改样式
CN116798129A (zh) 一种活体检测方法、装置、存储介质及电子设备
US20220254188A1 (en) Methods for Creating Personalized Items Using Images Associated with a Subject and Related Systems and Computers
TW201140468A (en) Image texture extraction method, image identification method and image identification apparatus
WO2022213031A1 (en) Neural networks for changing characteristics of vocals
CN117136404A (zh) 从歌曲中提取伴奏的神经网络
US20210312257A1 (en) Distributed neuromorphic infrastructure
KR20220012784A (ko) 데이터 증강 기반 공간 분석 모델 학습 장치 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19789249

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019789249

Country of ref document: EP

Effective date: 20191219

NENP Non-entry into the national phase

Ref country code: DE