CN111241893B - Identification recognition method, device and system - Google Patents

Identification recognition method, device and system

Info

Publication number
CN111241893B
CN111241893B (application CN201811448571.6A)
Authority
CN
China
Prior art keywords
identification
layer
commodity
feature layer
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811448571.6A
Other languages
Chinese (zh)
Other versions
CN111241893A (en)
Inventor
金炫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811448571.6A
Publication of CN111241893A
Application granted
Publication of CN111241893B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/018 Certifying business or products
    • G06Q30/0185 Product, service or business identity fraud

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an identification recognition method, device and system. The method comprises: performing feature extraction on a commodity picture to be recognized to obtain an original feature layer reflecting the features of the commodity picture; performing convolution and splicing operations between convolution kernels of different sizes and the original feature layer, according to the shortcut of the original feature layer, to obtain a fusion feature layer of the commodity picture; connecting the fusion feature layers across layers to obtain an identification prediction feature layer of the commodity picture; and performing identification recognition on the commodity picture according to the identification prediction feature layer, so as to determine whether the commodity corresponding to the commodity picture is of a preset type. The method solves the prior-art problem of low identification recognition accuracy for commodity pictures.

Description

Identification recognition method, device and system
Technical Field
The present application relates to the field of data mining, and in particular, to a method, an apparatus, and a system for identification recognition.
Background
E-commerce platforms offer consumers a massive number of commodities and make shopping convenient. However, quickly detecting counterfeit goods among such a huge volume of commodities, so as to protect consumers from loss and intellectual property rights from infringement, is an urgent problem.
The prior art identifies counterfeit goods from the identifications in commodity pictures in two ways: retrieval based on local features, and target detection. Retrieval based on local features makes category expansion convenient and fast, but describes identification features poorly; target detection suffers from problems such as small targets, category expansion, and the performance of the model's feed-forward network.
Disclosure of Invention
The application provides an identification recognition method, device and system to solve the prior-art problem of low accuracy in recognizing identifications in commodity pictures.
The application provides an identification recognition method, which comprises the following steps:
extracting features of commodity pictures to be identified to obtain an original feature layer reflecting the features of the commodity pictures;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
Optionally, the extracting features of the commodity picture to be identified to obtain an original feature layer reflecting features of the commodity picture includes:
acquiring commodity pictures to be identified;
labeling the identification information of the commodity picture in the commodity picture to obtain a labeled commodity picture;
and extracting features of the marked commodity picture by using the identification prediction neural network to obtain an original feature layer reflecting the features of the commodity picture.
Optionally, the performing convolution operation and splicing operation with the original feature layer by using convolution kernels with different sizes according to the shortcut of the original feature layer to obtain a fusion feature layer of the commodity picture includes:
performing convolution operation on a plurality of convolution kernels with different sizes and the original feature layer respectively to obtain convolution feature data corresponding to the convolution kernels with different sizes;
performing splicing operation on the convolution characteristic data to obtain splicing characteristic data of the original characteristic layer;
and carrying out addition operation on the spliced characteristic data and the shortcut of the original characteristic layer to obtain a fusion characteristic layer of the commodity picture.
Optionally, the cross-layer connection of the fused feature layer to obtain the identification prediction feature layer of the commodity picture includes:
According to the first fusion feature layer, a first identification prediction feature layer corresponding to the first fusion feature layer is obtained;
upsampling is carried out on the first identification prediction feature layer to obtain upsampled data corresponding to the first identification prediction feature layer;
and adding a second fusion feature layer adjacent to the first fusion feature layer with the up-sampling data to obtain a second identification prediction feature layer corresponding to the second fusion feature layer.
Optionally, the identifying the commodity picture according to the identifying prediction feature layer includes:
performing suppression on the detection boxes that the identification prediction neural network outputs from the identification prediction feature layer, to obtain the likelihood that each region in the commodity picture is an identification;
according to the possibility, obtaining an identification candidate region in the commodity picture;
acquiring the characteristic code data of the identification of the commodity picture according to the identification candidate region;
and carrying out identification and recognition on the commodity picture according to the feature code data.
Optionally, the obtaining, according to the identification candidate region, the feature code data of the identification of the commodity picture includes:
Performing feature extraction on the identification candidate region by using an identification recognition neural network to obtain identification feature data of the identification candidate region;
clustering operation is carried out on the identification characteristic data, and clustering information of the commodity pictures is obtained;
and carrying out feature extraction and product quantization on the clustering information to obtain the feature code data of the identification of the commodity picture.
Optionally, the feature extraction is performed on the identification candidate region by using an identification recognition network to obtain identification feature data of the identification candidate region, including:
and carrying out feature extraction by utilizing the deformable convolution layer in the identification recognition neural network to obtain identification feature data of the identification candidate region.
Optionally, the identification recognition method further includes:
determining an anchor refine loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is used as a normalized exponential function for judging whether a target area output by the identification prediction neural network contains an identification;
and training the identification prediction neural network by using the anchor refine loss function.
Optionally, the identification recognition method further includes:
determining a detection loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is used as a normalized exponential function for judging whether the output of the identification prediction neural network is an identification;
and training the identification recognition neural network by using the detection loss function.
The application provides an identification recognition device, including:
the original feature layer obtaining unit is used for carrying out feature extraction on the commodity picture to be identified to obtain an original feature layer reflecting the features of the commodity picture;
the fusion feature layer obtaining unit is used for respectively carrying out convolution operation and splicing operation with the original feature layer by using convolution kernels with different sizes according to shortcuts of the original feature layer to obtain the fusion feature layer of the commodity picture;
the identification prediction feature layer obtaining unit is used for performing cross-layer connection on the fusion feature layer to obtain an identification prediction feature layer of the commodity picture;
and the identification and recognition unit is used for carrying out identification and recognition on the commodity picture according to the identification and prediction characteristic layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
Optionally, the original feature layer obtaining unit is specifically configured to:
acquiring commodity pictures to be identified;
labeling the identification information of the commodity picture in the commodity picture to obtain a labeled commodity picture;
and extracting features of the marked commodity picture by using the identification prediction neural network to obtain an original feature layer reflecting the features of the commodity picture.
Optionally, the fusion feature layer obtaining unit is specifically configured to:
performing convolution operation on a plurality of convolution kernels with different sizes and the original feature layer respectively to obtain convolution feature data corresponding to the convolution kernels with different sizes;
performing splicing operation on the convolution characteristic data to obtain splicing characteristic data of the original characteristic layer;
and carrying out addition operation on the spliced characteristic data and the shortcut of the original characteristic layer to obtain a fusion characteristic layer of the commodity picture.
Optionally, the identifying prediction feature layer obtaining unit is specifically configured to:
according to the first fusion feature layer, obtaining a first identification prediction feature layer corresponding to the first fusion feature layer;
upsampling the first identification prediction feature layer to obtain upsampled data corresponding to the first identification prediction feature layer;
and adding a second fusion feature layer adjacent to the first fusion feature layer to the upsampled data to obtain a second identification prediction feature layer corresponding to the second fusion feature layer.
Optionally, the identification unit is specifically configured to:
performing suppression on the detection boxes that the identification prediction neural network outputs from the identification prediction feature layer, to obtain the likelihood that each region in the commodity picture is an identification;
according to the possibility, obtaining an identification candidate region in the commodity picture;
acquiring the characteristic code data of the identification of the commodity picture according to the identification candidate region;
and carrying out identification and recognition on the commodity picture according to the feature code data.
Optionally, the identification unit is further configured to:
performing feature extraction on the identification candidate region by using an identification recognition neural network to obtain identification feature data of the identification candidate region;
performing a clustering operation on the identification feature data to obtain clustering information of the commodity pictures;
and carrying out feature extraction and product quantization on the clustering information to obtain the feature code data of the identification of the commodity picture.
Optionally, the identification unit is further configured to:
and carrying out feature extraction by utilizing the deformable convolution layer in the identification recognition neural network to obtain identification feature data of the identification candidate region.
Optionally, the identification device further includes a first training unit, where the first training unit is configured to:
determining an anchor refine loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is used as a normalized exponential function for judging whether a target area output by the identification prediction neural network contains an identification;
and training the identification prediction neural network by using the anchor refine loss function.
Optionally, the identification device further includes a second training unit, where the second training unit is configured to:
determining a detection loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is used as a normalized exponential function for judging whether the output of the identification prediction neural network is an identification;
and training the identification recognition neural network by using the detection loss function.
The application provides an electronic device, the electronic device includes:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
extracting features of commodity pictures to be identified to obtain an original feature layer reflecting the features of the commodity pictures;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
The present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
extracting features of commodity pictures to be identified to obtain an original feature layer reflecting the features of the commodity pictures;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
Connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
The application provides a method for identifying counterfeit goods, which comprises the following steps:
acquiring a commodity picture of the commodity to be identified;
extracting features of the commodity picture to obtain an original feature layer reflecting the features of the commodity picture;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion characteristic layers in a cross-layer manner to obtain a trademark prediction characteristic layer of the commodity picture after cross-layer connection;
carrying out trademark identification on the commodity picture according to the trademark prediction characteristic layer;
and judging whether the commodity to be identified is a counterfeit commodity or not according to the trademark identification result.
The application provides a counterfeit commodity detection system, which comprises a counterfeit commodity information query unit;
the counterfeit commodity information query unit is used for performing feature extraction on a commodity picture of the commodity to be identified to obtain an original feature layer reflecting the features of the commodity picture; performing convolution and splicing operations between convolution kernels of different sizes and the original feature layer, according to the shortcut of the original feature layer, to obtain a fusion feature layer of the commodity picture; connecting the fusion feature layers across layers to obtain a trademark prediction feature layer of the commodity picture; performing trademark recognition on the commodity picture according to the trademark prediction feature layer; and judging, according to the trademark recognition result, whether the commodity to be identified is a counterfeit commodity.
The application provides a method for identifying a target pattern in a picture, which comprises the following steps:
extracting features of a picture to be identified to obtain an original feature layer reflecting the features of the picture;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the picture is obtained;
connecting the fusion feature layers in a cross-layer manner to obtain a target pattern prediction feature layer of the picture;
and according to the target pattern prediction feature layer, target pattern recognition is carried out on the picture.
The application provides a method for obtaining a target pattern prediction feature layer in a picture, which comprises the following steps:
extracting features of a picture to be identified to obtain an original feature layer reflecting the features of the picture;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the picture is obtained;
and connecting the fusion feature layers in a cross-layer manner to obtain a target pattern prediction feature layer of the picture.
Compared with the prior art, the application has the following advantages:
According to the method provided by the application, convolution kernels with different sizes are used for carrying out convolution operation and splicing operation with the original feature layer respectively according to shortcuts of the original feature layer, so that a fusion feature layer of the commodity picture is obtained; and connecting the fusion feature layers in a cross-layer manner to obtain the identification prediction feature layer of the commodity picture, thereby solving the problem of low identification recognition accuracy aiming at the commodity picture in the prior art.
Drawings
FIG. 1 is a flow chart of a first embodiment of the present application;
FIG. 2 is a schematic diagram of obtaining an identified predicted feature layer according to a first embodiment of the present application;
FIG. 3 is a schematic illustration of a deformable convolution according to a first embodiment of the present application;
FIG. 4 is a clustering diagram of identification feature data according to a first embodiment of the present application;
FIG. 5 is a schematic diagram of an application system according to a first embodiment of the present application;
fig. 6 is a flow chart of a second embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application may, however, be embodied in many other ways than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific embodiments disclosed below.
The first embodiment of the application provides an identification recognition method. In this embodiment, the identification may be a trademark, a geographical indication, or the like; a trademark is used as the example in the detailed description below. The identification in this embodiment is in fact not limited to trademarks and may be extended to geographical indications, third-party certification marks, and the like. Referring to FIG. 1, a flowchart of the first embodiment of the present application is shown. The identification recognition method of the first embodiment is described in detail below with reference to FIG. 1. The method comprises the following steps:
step S101: and extracting features of the commodity picture to be identified, and obtaining an original feature layer reflecting the features of the commodity picture.
This step is used to perform feature extraction on the commodity picture to be recognized and obtain an original feature layer reflecting the features of the commodity picture.
In this embodiment, the feature extraction is performed on the commodity picture to be identified to obtain an original feature layer reflecting the features of the commodity picture, and the method includes the following steps:
and acquiring a commodity picture to be identified, wherein the commodity picture can be a commodity picture containing trademark information.
And marking trademark information of the commodity picture in the commodity picture to obtain the marked commodity picture.
For example, trademarks in different commodity pictures are annotated, the annotation content being the position of the trademark on the commodity picture and the attribute of the trademark, such as: (100, 100, 200, ABC), where the numbers represent the coordinate position of the trademark on the picture and ABC is the literal text of the trademark.
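The annotation format above can be sketched as a small data structure. This is an illustrative sketch, not the patent's implementation; the patent's example lists three coordinate numbers, while a rectangular box normally needs four, so the sketch assumes an (x1, y1, x2, y2, text) record.

```python
from dataclasses import dataclass

@dataclass
class TrademarkLabel:
    """One annotated trademark on a commodity picture (hypothetical layout)."""
    x1: int    # left coordinate of the mark on the picture
    y1: int    # top coordinate
    x2: int    # right coordinate
    y2: int    # bottom coordinate
    text: str  # literal text of the trademark, e.g. "ABC"

def parse_label(record):
    """Parse an (x1, y1, x2, y2, text) tuple into a TrademarkLabel."""
    x1, y1, x2, y2, text = record
    return TrademarkLabel(int(x1), int(y1), int(x2), int(y2), str(text))

label = parse_label((100, 100, 200, 200, "ABC"))
```

Such records would be fed to the trademark prediction neural network as training targets.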
And extracting features of the marked commodity picture by utilizing a trademark prediction neural network to obtain an original feature layer reflecting the features of the commodity picture.
The trademark prediction neural network may adopt VGG-16, ResNet-50, ResNet-101, ResNet-152, Inception V1-V3, ResNeXt-152, or other neural networks. The trademark prediction neural network performs feature extraction on a region of the commodity picture to judge whether the region is a target area that may contain a trademark, and then performs feature extraction on that target area to judge whether it is a trademark.
In this embodiment, the training of the trademark prediction neural network may include the following steps:
determining an anchor refine loss function of the trademark prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is used as a normalized exponential function for judging whether a target area output by the trademark prediction neural network contains a trademark;
and training the trademark prediction neural network by using the anchor refine loss function.
The cross entropy loss is denoted Loss(cls) and the smooth L1 loss is denoted Loss(smoothL1); their sum is the anchor refine loss, computed as follows.
Anchor refine Loss=Loss(cls)+Loss(smoothL1)
The normalized exponential function, i.e. the softmax function, is used here to determine whether the target area output by the trademark prediction neural network contains a trademark.
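As a sketch, the anchor refine loss above can be written out directly: a softmax cross entropy term Loss(cls) plus a smooth L1 term Loss(smoothL1). The smooth L1 definition used here is the standard one from object detection (quadratic below a distance of 1, linear above); the patent does not spell out its exact form, so treat that as an assumption.

```python
import math

def softmax_cross_entropy(logits, target):
    """Loss(cls): softmax over the class logits (mark / no mark),
    then negative log-likelihood of the target class."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[target] / sum(exps))

def smooth_l1(pred, gt):
    """Loss(smoothL1): 0.5*d^2 for |d| < 1, |d| - 0.5 otherwise,
    summed over the box regression offsets."""
    loss = 0.0
    for p, g in zip(pred, gt):
        d = abs(p - g)
        loss += 0.5 * d * d if d < 1.0 else d - 0.5
    return loss

def anchor_refine_loss(cls_logits, cls_target, box_pred, box_gt):
    # Anchor refine Loss = Loss(cls) + Loss(smoothL1)
    return softmax_cross_entropy(cls_logits, cls_target) + smooth_l1(box_pred, box_gt)
```

The detection loss described next has the same Loss(cls) + Loss(smoothL1) form, so this sketch covers both.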
In this embodiment, the training of the trademark prediction neural network may further include the following steps:
determining a detection loss function of the trademark prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is used as a normalized exponential function for judging whether the output of the trademark prediction neural network is a trademark;
and training the trademark recognition neural network by using the detection loss function.
The cross entropy loss is denoted Loss(cls) and the smooth L1 loss is denoted Loss(smoothL1); their sum is the detection loss, computed as follows.
Detection loss=Loss(cls)+Loss(smoothL1)
The normalized exponential function, i.e. the softmax function, is used here to determine whether the output of the trademark prediction neural network is a trademark.
Step S102: and according to the shortcuts of the original feature layers, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layers, so that the fusion feature layers of the commodity picture are obtained.
The step is used for carrying out convolution operation and splicing operation with the original feature layer respectively by using convolution kernels with different sizes according to shortcuts of the original feature layer to obtain a fusion feature layer of the commodity picture.
In this embodiment, the performing convolution operation and splicing operation with the original feature layer by using convolution kernels with different sizes according to the shortcut of the original feature layer to obtain a fusion feature layer of the commodity picture includes:
Convolution operations are performed between several convolution kernels of different sizes and the original feature layers to obtain convolution feature data corresponding to each kernel size. For example, feature layers (four to six) are extracted from the output of the trademark prediction neural network and named F1-F4 from largest to smallest; each layer has three branches that are convolved with 1x1, 3x3 and 5x5 kernels respectively, yielding the convolution feature data of each layer.
A splicing operation is performed on the convolution feature data to obtain the spliced feature data of the original feature layer, where the splicing operation is a concat operation.
The spliced feature data and the shortcut of the original feature layer are then added to obtain the fusion feature layers of the commodity picture, where the shortcut is a shortcut connection as in ResNet: after the splicing operation, the spliced data and the shortcut of the original feature layer are added element-wise to obtain R1-R4, the fusion feature layers.
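A minimal numpy sketch of one fusion step, under stated assumptions: single-channel feature maps, illustrative averaging kernels (the real network learns its kernels and uses many channels), and a simple mean as the stand-in for the channel-reducing projection after the concat. It shows the shape of the operation, not the patent's exact layers: three branches convolve the layer with 1x1, 3x3 and 5x5 kernels, the results are concatenated, projected back to one channel, and added element-wise to the shortcut.

```python
import numpy as np

def conv2d_same(x, k):
    """Plain 'same'-padded 2-D convolution of a single-channel map x
    with a square kernel k (a stand-in for the network's conv layers)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def fuse_feature_layer(feat):
    """One fusion step: 1x1 / 3x3 / 5x5 branches, concat, projection,
    then element-wise addition of the shortcut (the input itself)."""
    branches = [conv2d_same(feat, np.ones((s, s)) / (s * s)) for s in (1, 3, 5)]
    stacked = np.stack(branches)      # concat: shape (3, H, W)
    projected = stacked.mean(axis=0)  # fixed-weight stand-in for a 1x1 projection
    return projected + feat           # element-wise shortcut addition

R = fuse_feature_layer(np.ones((4, 4)))
```

Applying this to each of F1-F4 would yield the fusion layers R1-R4 described in the text.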
Step S103: and connecting the fusion feature layers in a cross-layer manner to obtain the identification prediction feature layer of the commodity picture.
The step is used for performing cross-layer connection on the fusion feature layer to obtain the identification prediction feature layer of the commodity picture.
In this embodiment, the cross-layer connection of the fusion feature layer to obtain the trademark prediction feature layer of the commodity picture includes:
According to the first fusion feature layer, a first trademark prediction feature layer corresponding to it is obtained. For example, if the first fusion feature layer is R4, the corresponding first trademark prediction feature layer is P4, obtained as P4 = R4.
Up-sampling is then performed on the first trademark prediction feature layer to obtain up-sampled data corresponding to it; the up-sampling can be implemented with a library function such as upsample.
A second fusion feature layer adjacent to the first fusion feature layer is added to the up-sampled data to obtain the second trademark prediction feature layer corresponding to it. For example, if the second fusion feature layer is R3 and the second trademark prediction feature layer is P3, then P3 = upsample(P4) + R3. Similarly, P2 = upsample(P3) + R2 and P1 = upsample(P2) + R1. Please refer to fig. 2, which is a schematic diagram of obtaining the trademark prediction feature layers. In fig. 2, the leftmost four layers, from largest to smallest, are the four original feature layers F1-F4, and the rightmost four layers, from largest to smallest, are the four trademark prediction feature layers P1-P4.
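The cross-layer connection P4 = R4, P3 = upsample(P4) + R3, and so on, can be sketched as below. Nearest-neighbor 2x upsampling is used here as a stand-in for the library upsample function, and the layer sizes are illustrative:

```python
import numpy as np

def upsample2x(p):
    """Nearest-neighbor 2x upsampling, a stand-in for a library upsample call."""
    return p.repeat(2, axis=0).repeat(2, axis=1)

def top_down_merge(fused):
    """fused = [R1, R2, R3, R4] ordered large to small; returns [P1, P2, P3, P4]."""
    preds = [fused[-1]]                      # P4 = R4
    for r in reversed(fused[:-1]):           # P3 = upsample(P4) + R3, and so on
        preds.append(upsample2x(preds[-1]) + r)
    return list(reversed(preds))             # back to large-to-small order

# toy fused feature layers R1-R4, each half the size of the previous one
R = [np.ones((32, 32)), np.ones((16, 16)), np.ones((8, 8)), np.ones((4, 4))]
P = top_down_merge(R)
```

Each prediction layer matches its fusion layer's spatial size, so the element-wise addition in every step is valid.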
Step S104: and carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
This step performs trademark recognition on the commodity picture according to the trademark prediction feature layer.
In this embodiment, the trademark identification for the commodity picture according to the trademark prediction feature layer includes:
The detection frames output by the trademark prediction neural network are suppressed according to the trademark prediction feature layer, obtaining the likelihood that each region in the commodity picture is a trademark. This may be implemented using NMS (non-maximum suppression) or Soft-NMS. With NMS, the detection frames are sorted by classification score, the highest-scoring frame is selected for output, and frames whose IoU (Intersection over Union) with it exceeds 0.5 are suppressed; here the classification score indicates the likelihood that a region is a trademark. Alternatively, with Soft-NMS, a detection scoring below the highest is not removed but has its score reduced; the score-reduction function may be a linear function or a Gaussian function.
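A plain NMS pass of the kind described above might look like the following sketch. The [x1, y1, x2, y2] box format and the 0.5 threshold are common conventions, assumed here rather than taken from the patent:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Plain non-maximum suppression over [x1, y1, x2, y2] detection boxes."""
    order = np.argsort(scores)[::-1]          # sort detections by classification score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))                   # keep the highest-scoring remaining box
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the kept box with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]       # suppress boxes that overlap too much
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the near-duplicate of the first box is suppressed
```

Soft-NMS would replace the hard `order = rest[iou <= iou_thresh]` drop with a score decay such as `scores[rest] *= np.exp(-iou**2 / sigma)` before re-sorting.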
Trademark candidate regions in the commodity picture are then obtained according to the likelihood, the candidate regions being determined by its magnitude.
And obtaining feature code data of the trademark of the commodity picture according to the trademark candidate area.
First, feature extraction is performed on the trademark candidate region using a trademark recognition neural network, obtaining trademark feature data of the candidate region. The trademark recognition neural network can be one of AlexNet, LeNet, CaffeNet, GoogLeNet, VGG-19, ResNet-50, ResNet-101, ResNet-152, Inception V1-V3 and ResNeXt-152.
Feature extraction is performed using a deformable convolution layer in the trademark recognition neural network to obtain the trademark feature data of the trademark candidate region. For example, a 7-layer CNN may be used, with a deformable convolution layer added at the conv4 layer so that the extracted features focus more on the target area. Please refer to fig. 3, which is a schematic diagram of a deformable convolution.
Second, a clustering operation is performed on the trademark feature data to obtain clustering information for the commodity pictures. Because the trademark feature data is massive, it is clustered for convenience of use. For example, 100,000 items of trademark feature data may be divided into 20 categories of 5,000 each, after which lookup becomes very fast. Please refer to fig. 4, which is a schematic diagram of clustering the trademark feature data. The clustering operation includes performing unsupervised clustering on dimension-reduced features using SNR, inspecting the specific feature categories, pairing samples across different categories, and optimizing the feature space with a triplet loss to retrain the feature network.
Finally, feature extraction and product quantization are performed on the clustering information to obtain the feature code data of the trademark of the commodity picture. After feature extraction, feature vectors of different trademarks are obtained. Product Quantization is a retrieval algorithm used to improve search speed.
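As an illustration of product quantization (toy sizes, not the patent's configuration): a feature vector is split into subvectors, and each subvector is replaced by the index of its nearest centroid in a per-subspace codebook, so that the resulting short code can be compared quickly during search.

```python
import numpy as np

def pq_encode(x, codebooks):
    """Encode vector x: split into len(codebooks) subvectors, store nearest-centroid indices."""
    m = len(codebooks)
    subs = np.split(x, m)
    return [int(np.argmin(np.linalg.norm(cb - s, axis=1)))
            for cb, s in zip(codebooks, subs)]

def pq_decode(codes, codebooks):
    """Reconstruct an approximate vector from its short PQ code."""
    return np.concatenate([cb[c] for cb, c in zip(codebooks, codes)])

rng = np.random.default_rng(0)
# 2 subspaces, 4 centroids each, subvector dimension 2 (toy sizes)
codebooks = [rng.random((4, 2)) for _ in range(2)]
x = pq_decode([1, 3], codebooks)   # a vector that sits exactly on two centroids
codes = pq_encode(x, codebooks)    # recovers the centroid indices
```

In practice the codebooks are learned by running k-means separately in each subspace over the clustered trademark features, and distances are computed against the codes rather than the full vectors.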
Trademark recognition is then performed on the commodity picture according to the feature code data: after the feature code data is obtained, it is used to search a database, obtaining the corresponding trademark recognition result.
In this embodiment, a trademark library of commodities can be established using the feature code data, with records of the form trademark name : trademark standard Chinese name : trademark standard English name : trademark ID; each query input by the user is matched against the library and the matched query result is output.
For an application example of the technical solution provided in the present application, please refer to fig. 5, which is a schematic diagram of an application system employing the trademark recognition method provided in the present application.
In the above embodiment, a method for identifying a mark is provided, and correspondingly, the application also provides a device for identifying a mark. In this embodiment, the identifier may be a trademark, a geographical mark, or the like. In this embodiment, a trademark is used as an example for the detailed description. In fact, the identifier in the present embodiment is not limited to a trademark, but may be extended to a geographic mark, a third party authentication identifier, and the like. Referring to fig. 6, a flowchart of an embodiment of an identification recognition device of the present application is shown. Since this embodiment, i.e. the second embodiment, is substantially similar to the method embodiment, the description is relatively simple, and reference should be made to the description of the method embodiment for relevant points. The device embodiments described below are merely illustrative.
An identification recognition device of the present embodiment includes:
an original feature layer obtaining unit 601, configured to perform feature extraction for a commodity picture to be identified, to obtain an original feature layer reflecting features of the commodity picture;
a fused feature layer obtaining unit 602, configured to perform convolution operation and splicing operation with the original feature layer respectively by using convolution kernels with different sizes according to shortcuts of the original feature layer, to obtain a fused feature layer of the commodity picture;
an identifier prediction feature layer obtaining unit 603, configured to cross-layer connect the fusion feature layers to obtain an identifier prediction feature layer of the commodity picture;
the identification identifying unit 604 is configured to identify the commodity picture according to the identification prediction feature layer, so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
In this embodiment, the original feature layer obtaining unit is specifically configured to:
acquiring commodity pictures to be identified;
labeling the identification information of the commodity picture in the commodity picture to obtain a labeled commodity picture;
and extracting features of the marked commodity picture by using the identification prediction neural network to obtain an original feature layer reflecting the features of the commodity picture.
In this embodiment, the fusion feature layer obtaining unit is specifically configured to:
performing convolution operation on a plurality of convolution kernels with different sizes and the original feature layer respectively to obtain convolution feature data corresponding to the convolution kernels with different sizes;
performing splicing operation on the convolution characteristic data to obtain splicing characteristic data of the original characteristic layer;
and carrying out addition operation on the spliced characteristic data and the shortcut of the original characteristic layer to obtain a fusion characteristic layer of the commodity picture.
In this embodiment, the identifier prediction feature layer obtaining unit is specifically configured to:
according to the first fusion feature layer, a first identification prediction feature layer corresponding to the first fusion feature layer is obtained:
upsampling is carried out on the first identification prediction feature layer to obtain upsampled data corresponding to the first identification prediction feature layer;
and adding a second fusion feature layer adjacent to the first fusion feature layer with the up-sampling data to obtain a second identification prediction feature layer corresponding to the second fusion feature layer.
In this embodiment, the identifier identifying unit is specifically configured to:
suppressing the detection frames output by the identification prediction neural network according to the identification prediction feature layer, to obtain the possibility that each area in the commodity picture is an identification;
According to the possibility, obtaining an identification candidate region in the commodity picture;
acquiring the characteristic code data of the identification of the commodity picture according to the identification candidate region;
and carrying out identification and recognition on the commodity picture according to the feature code data.
In this embodiment, the identifier identifying unit is further configured to:
performing feature extraction on the identification candidate region by using an identification recognition neural network to obtain identification feature data of the identification candidate region;
clustering operation is carried out on the identification characteristic data, and clustering information of the commodity pictures is obtained;
and carrying out feature extraction and product quantization on the clustering information to obtain the feature code data of the identification of the commodity picture.
In this embodiment, the identifier identifying unit is further configured to:
and carrying out feature extraction by utilizing the deformable convolution layer in the identification recognition neural network to obtain identification feature data of the identification candidate region.
In this embodiment, the identification device further includes a first training unit, where the first training unit is configured to:
determining an anchor refine loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is applied to a normalized exponential (softmax) output and is used for judging whether a target area output by the identification prediction neural network contains an identification;
and training the identification prediction neural network by using the anchor refine loss function.
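The combined loss of classification and box regression terms, Loss(cls) + Loss(smoothL1), can be written out as a small NumPy sketch; the logits and box offsets used here are hypothetical inputs, not values from the patent.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross entropy over softmax (normalized exponential) class scores."""
    z = logits - logits.max()                    # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def smooth_l1(pred, target):
    """Smooth L1 regression loss over box offsets."""
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

def anchor_refine_loss(logits, label, pred_box, target_box):
    # total loss = Loss(cls) + Loss(smoothL1)
    return softmax_cross_entropy(logits, label) + smooth_l1(pred_box, target_box)

loss = anchor_refine_loss(np.array([2.0, 0.0]), 0,        # identification vs. background
                          np.array([0.1, 0.2, 0.0, 0.0]), # predicted box offsets
                          np.array([0.0, 0.0, 0.0, 0.0])) # target box offsets
```

The classification term decides whether a target area contains an identification; the smooth L1 term refines the anchor's box coordinates.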
In this embodiment, the identification device further includes a second training unit, where the second training unit is configured to:
determining a detection loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is applied to a normalized exponential (softmax) output and is used for judging whether the output of the identification prediction neural network is an identification;
and training the identification recognition neural network by utilizing the detection loss function.
A third embodiment of the present application provides an electronic device, including:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
extracting features of commodity pictures to be identified to obtain an original feature layer reflecting the features of the commodity pictures;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
And carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
In this embodiment, the identifier may be a trademark, a geographical mark, or the like. In this embodiment, a trademark is used as an example for the detailed description. In fact, the identifier in the present embodiment is not limited to a trademark, but may be extended to a geographic mark, a third party authentication identifier, and the like.
A fourth embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
extracting features of commodity pictures to be identified to obtain an original feature layer reflecting the features of the commodity pictures;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
In this embodiment, the identifier may be a trademark, a geographical mark, or the like. In this embodiment, a trademark is used as an example for the detailed description. In fact, the identifier in the present embodiment is not limited to a trademark, but may be extended to a geographic mark, a third party authentication identifier, and the like.
A fifth embodiment of the present application provides a method for identifying counterfeit goods, including:
acquiring commodity pictures of commodities to be identified;
extracting features of the commodity picture to obtain an original feature layer reflecting the features of the commodity picture;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion characteristic layers in a cross-layer manner to obtain a trademark prediction characteristic layer of the commodity picture;
carrying out trademark identification on the commodity picture according to the trademark prediction characteristic layer;
and judging whether the commodity to be identified is a counterfeit commodity or not according to the trademark identification result.
By adopting this method for identifying counterfeit commodities, the trademark contained in the commodity picture is recognized, which improves the accuracy of counterfeit identification. Since this embodiment, i.e., the fifth embodiment, is very similar to the first embodiment, it will not be described in detail here. For related information, please refer to the first embodiment.
A sixth embodiment of the present application provides a counterfeit commodity detection system, including a counterfeit commodity information inquiry unit;
the fake commodity information inquiry unit is used for extracting characteristics of commodity pictures of commodities to be identified and obtaining an original characteristic layer reflecting the characteristics of the commodity pictures; according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained; connecting the fusion characteristic layers in a cross-layer manner to obtain a trademark prediction characteristic layer of the commodity picture; carrying out trademark identification on the commodity picture according to the trademark prediction characteristic layer; and judging whether the commodity to be identified is a counterfeit commodity or not according to the trademark identification result.
A seventh embodiment of the present application provides a method for identifying a target pattern in a picture, including:
extracting features of a picture to be identified to obtain an original feature layer reflecting the features of the picture;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the picture is obtained;
Connecting the fusion feature layers in a cross-layer manner to obtain a target pattern prediction feature layer of the picture;
and according to the target pattern prediction feature layer, target pattern recognition is carried out on the picture.
The first embodiment of the present application provides a method for recognizing a trademark in a commodity picture; the method can also be applied to other, non-trademark fields, such as recognizing a place-of-origin mark in a picture, and the seventh embodiment of the present application generalizes the trademark case accordingly. Since this embodiment, i.e., the seventh embodiment, is very similar to the first embodiment, it will not be described in detail here. For related information, please refer to the first embodiment.
An eighth embodiment of the present application provides a method for obtaining a target pattern prediction feature layer in a picture, including:
extracting features of a picture to be identified to obtain an original feature layer reflecting the features of the picture;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the picture is obtained;
and connecting the fusion feature layers in a cross-layer manner to obtain a target pattern prediction feature layer of the picture.
Since this embodiment, i.e., the eighth embodiment, is very similar to the first embodiment, it will not be described in detail here. For related information, please refer to the first embodiment.
While the preferred embodiment has been described, it is not intended to limit the invention thereto, and any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present invention, so that the scope of the present invention shall be defined by the claims of the present application.
In one typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (24)

1. A method of identifying a logo, comprising:
extracting features of commodity pictures to be identified to obtain an original feature layer reflecting the features of the commodity pictures;
according to the shortcuts of the original feature layer, convolution operation and splicing operation are respectively carried out on convolution kernels with different sizes and the original feature layer, so that a fusion feature layer of the commodity picture is obtained;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and carrying out identification recognition on the commodity picture according to the identification prediction feature layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
2. The identification recognition method according to claim 1, wherein the feature extraction is performed for the commodity picture to be recognized to obtain an original feature layer reflecting features of the commodity picture, and the method comprises:
acquiring commodity pictures to be identified;
labeling the identification information of the commodity picture in the commodity picture to obtain a labeled commodity picture;
and extracting features of the marked commodity picture by using the identification prediction neural network to obtain an original feature layer reflecting the features of the commodity picture.
3. The identification method according to claim 1, wherein the step of performing convolution operation and splicing operation with the original feature layer by using convolution kernels of different sizes according to shortcuts of the original feature layer to obtain a fused feature layer of the commodity picture includes:
performing convolution operation on a plurality of convolution kernels with different sizes and the original feature layer respectively to obtain convolution feature data corresponding to the convolution kernels with different sizes;
performing splicing operation on the convolution characteristic data to obtain splicing characteristic data of the original characteristic layer;
and carrying out addition operation on the spliced characteristic data and the shortcut of the original characteristic layer to obtain a fusion characteristic layer of the commodity picture.
4. The identification recognition method according to claim 1, wherein the step of cross-layer connecting the fusion feature layer to obtain the identification prediction feature layer of the commodity picture includes:
according to the first fusion feature layer, a first identification prediction feature layer corresponding to the first fusion feature layer is obtained;
upsampling is carried out on the first identification prediction feature layer to obtain upsampled data corresponding to the first identification prediction feature layer;
and adding a second fusion feature layer adjacent to the first fusion feature layer with the up-sampling data to obtain a second identification prediction feature layer corresponding to the second fusion feature layer.
5. The identification recognition method according to claim 2, wherein the performing identification recognition on the commodity picture according to the identification prediction feature layer includes:
suppressing the detection frames output by the identification prediction neural network according to the identification prediction feature layer, to obtain the possibility that each area in the commodity picture is an identification;
according to the possibility, obtaining an identification candidate region in the commodity picture;
acquiring the characteristic code data of the identification of the commodity picture according to the identification candidate region;
And carrying out identification and recognition on the commodity picture according to the feature code data.
6. The identification recognition method according to claim 5, wherein the obtaining the feature code data of the identification of the commodity picture according to the identification candidate region includes:
performing feature extraction on the identification candidate region by using an identification recognition neural network to obtain identification feature data of the identification candidate region;
clustering operation is carried out on the identification characteristic data, and clustering information of the commodity pictures is obtained;
and carrying out feature extraction and product quantization on the clustering information to obtain the feature code data of the identification of the commodity picture.
7. The identification recognition method according to claim 6, wherein the performing feature extraction on the identification candidate region by using an identification recognition neural network to obtain identification feature data of the identification candidate region comprises:
and carrying out feature extraction by utilizing the deformable convolution layer in the identification recognition neural network to obtain identification feature data of the identification candidate region.
8. The identification recognition method according to claim 2, further comprising:
determining an anchor refine loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is applied to a normalized exponential (softmax) output and is used for judging whether a target area output by the identification prediction neural network contains an identification; the cross entropy loss function is a Cross Entropy Loss function, denoted cls; the smooth L1 loss function is denoted smoothL1; the anchor refine loss function is calculated by the formula: anchor refine loss = Loss(cls) + Loss(smoothL1);
and training the identification prediction neural network by using the anchor refine loss function.
9. The identification recognition method according to claim 2, further comprising:
determining a detection loss function of the identification prediction neural network according to a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function is applied to a normalized exponential (softmax) output and is used for judging whether the output of the identification prediction neural network is an identification;
and training the identification recognition neural network by utilizing the detection loss function.
10. An identification recognition device, characterized by comprising:
the original feature layer obtaining unit is used for carrying out feature extraction on the commodity picture to be identified to obtain an original feature layer reflecting the features of the commodity picture;
the fusion feature layer obtaining unit is used for respectively carrying out convolution operation and splicing operation with the original feature layer by using convolution kernels with different sizes according to shortcuts of the original feature layer to obtain the fusion feature layer of the commodity picture;
the identification prediction feature layer obtaining unit is used for performing cross-layer connection on the fusion feature layer to obtain an identification prediction feature layer of the commodity picture;
And the identification and recognition unit is used for carrying out identification and recognition on the commodity picture according to the identification and prediction characteristic layer so as to determine whether the commodity corresponding to the commodity picture is a preset type commodity.
11. The identification device according to claim 10, wherein the raw feature layer obtaining unit is specifically configured to:
acquiring commodity pictures to be identified;
labeling the identification information of the commodity picture in the commodity picture to obtain a labeled commodity picture;
and extracting features of the marked commodity picture by using the identification prediction neural network to obtain an original feature layer reflecting the features of the commodity picture.
12. The identification device according to claim 10, wherein the fusion feature layer obtaining unit is specifically configured to:
performing convolution operation on a plurality of convolution kernels with different sizes and the original feature layer respectively to obtain convolution feature data corresponding to the convolution kernels with different sizes;
performing splicing operation on the convolution characteristic data to obtain splicing characteristic data of the original characteristic layer;
and carrying out addition operation on the spliced characteristic data and the shortcut of the original characteristic layer to obtain a fusion characteristic layer of the commodity picture.
13. The identification recognition device according to claim 10, wherein the identification prediction feature layer obtaining unit is specifically configured to:
according to the first fusion feature layer, a first identification prediction feature layer corresponding to the first fusion feature layer is obtained;
upsampling is carried out on the first identification prediction feature layer to obtain upsampled data corresponding to the first identification prediction feature layer;
and adding a second fusion feature layer adjacent to the first fusion feature layer with the up-sampling data to obtain a second identification prediction feature layer corresponding to the second fusion feature layer.
14. The identification device according to claim 11, wherein the identification unit is specifically configured to:
suppressing the detection frames output by the identification prediction neural network according to the identification prediction feature layer, to obtain the possibility that each area in the commodity picture is an identification;
according to the possibility, obtaining an identification candidate region in the commodity picture;
acquiring the characteristic code data of the identification of the commodity picture according to the identification candidate region;
and carrying out identification and recognition on the commodity picture according to the feature code data.
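Suppressing overlapping detection boxes so that only the most likely identification regions survive is commonly implemented as non-maximum suppression; whether the patent uses exactly this variant is an assumption. A self-contained sketch (the boxes, scores, and threshold are illustrative):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring boxes, discarding heavily overlapping ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        order = order[1:][[iou(boxes[i], boxes[j]) < iou_thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
candidates = non_max_suppression(boxes, scores)
```

Here the second box overlaps the first heavily and is suppressed, while the disjoint third box is kept.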
15. The identification recognition device according to claim 14, wherein the identification recognition unit is further configured to:
perform feature extraction on the identification candidate regions using an identification recognition neural network to obtain identification feature data of the identification candidate regions;
perform a clustering operation on the identification feature data to obtain clustering information of the commodity picture;
and perform feature extraction and product quantization on the clustering information to obtain the feature code data of the identification of the commodity picture.
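Product quantization encodes a feature vector by splitting it into sub-vectors and replacing each sub-vector with the index of its nearest codebook centroid, yielding a compact code. A sketch with randomly generated, purely illustrative codebooks (real codebooks would be learned, e.g. by k-means over training features):

```python
import numpy as np

def product_quantize(vec, codebooks):
    """Split vec into sub-vectors; encode each by its nearest codebook centroid."""
    m = len(codebooks)            # number of sub-spaces
    subs = np.split(vec, m)
    codes = []
    for sub, book in zip(subs, codebooks):
        dists = np.linalg.norm(book - sub, axis=1)  # distance to each centroid
        codes.append(int(np.argmin(dists)))
    return codes

rng = np.random.default_rng(0)
feature = rng.normal(size=8)                             # hypothetical feature vector
codebooks = [rng.normal(size=(4, 4)) for _ in range(2)]  # 2 sub-spaces, 4 centroids each
code = product_quantize(feature, codebooks)
```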
16. The identification recognition device according to claim 15, wherein the identification recognition unit is further configured to:
perform feature extraction using the deformable convolution layer in the identification recognition neural network to obtain the identification feature data of the identification candidate regions.
17. The identification recognition device according to claim 11, further comprising a first training unit configured to:
determine an anchor refinement loss function of the identification prediction neural network from a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function, denoted cls, acts on a normalized exponential (softmax) output and is used to judge whether a target region output by the identification prediction neural network contains an identification; the smoothing loss function is the smoothL1 function; and the anchor refinement loss function, anchor refine loss, is calculated as: Anchor Refine Loss = Loss(cls) + Loss(smoothL1);
and train the identification prediction neural network using the anchor refinement loss function.
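The claimed loss — a cross entropy term over a normalized exponential (softmax) output plus a smoothL1 term, summed as Anchor Refine Loss = Loss(cls) + Loss(smoothL1) — can be computed as below. The logits and box residuals are made-up values for illustration:

```python
import numpy as np

def softmax(z):
    """Normalized exponential function."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, label):
    """Softmax followed by negative log-likelihood of the true label."""
    return -np.log(softmax(logits)[label])

def smooth_l1(x):
    """smoothL1: quadratic near zero, linear elsewhere (summed over elements)."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5).sum()

# Hypothetical network outputs for one anchor: class logits and box regression residuals.
cls_logits = np.array([0.2, 2.0])          # [no identification, identification]
box_residual = np.array([0.1, -0.3, 1.5, 0.0])

anchor_refine_loss = cross_entropy(cls_logits, 1) + smooth_l1(box_residual)
```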
18. The identification recognition device according to claim 11, further comprising a second training unit configured to:
determine a detection loss function of the identification prediction neural network from a cross entropy loss function and a smooth L1 loss function, wherein the cross entropy loss function acts on a normalized exponential (softmax) output and is used to judge whether the output of the identification prediction neural network is an identification;
and train the identification recognition neural network using the detection loss function.
19. An electronic device, comprising:
a processor;
a memory for storing a program which, when read and executed by the processor, performs the following operations:
extracting features of a commodity picture to be identified to obtain an original feature layer reflecting the features of the commodity picture;
according to the shortcut of the original feature layer, performing convolution and concatenation operations with convolution kernels of different sizes on the original feature layer, respectively, to obtain a fusion feature layer of the commodity picture;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and performing identification recognition on the commodity picture according to the identification prediction feature layer.
20. A computer readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the following steps:
extracting features of a commodity picture to be identified to obtain an original feature layer reflecting the features of the commodity picture;
according to the shortcut of the original feature layer, performing convolution and concatenation operations with convolution kernels of different sizes on the original feature layer, respectively, to obtain a fusion feature layer of the commodity picture;
connecting the fusion feature layers in a cross-layer manner to obtain an identification prediction feature layer of the commodity picture;
and performing identification recognition on the commodity picture according to the identification prediction feature layer, so as to determine whether the commodity corresponding to the commodity picture is of a preset commodity type.
21. A method for identifying counterfeit goods, comprising:
acquiring a commodity picture of a commodity to be identified;
extracting features of the commodity picture to obtain an original feature layer reflecting the features of the commodity picture;
according to the shortcut of the original feature layer, performing convolution and concatenation operations with convolution kernels of different sizes on the original feature layer, respectively, to obtain a fusion feature layer of the commodity picture;
connecting the fusion feature layers in a cross-layer manner to obtain a trademark prediction feature layer of the commodity picture;
performing trademark recognition on the commodity picture according to the trademark prediction feature layer;
and judging whether the commodity to be identified is a counterfeit commodity according to the trademark recognition result.
22. A counterfeit commodity detection system, characterized by comprising a counterfeit commodity information query unit;
the counterfeit commodity information query unit is configured to: extract features of a commodity picture of a commodity to be identified to obtain an original feature layer reflecting the features of the commodity picture; according to the shortcut of the original feature layer, perform convolution and concatenation operations with convolution kernels of different sizes on the original feature layer, respectively, to obtain a fusion feature layer of the commodity picture; connect the fusion feature layers in a cross-layer manner to obtain a trademark prediction feature layer of the commodity picture; perform trademark recognition on the commodity picture according to the trademark prediction feature layer; and judge whether the commodity to be identified is a counterfeit commodity according to the trademark recognition result.
23. A method for identifying a target pattern in a picture, characterized by comprising the following steps:
extracting features of a picture to be identified to obtain an original feature layer reflecting the features of the picture;
according to the shortcut of the original feature layer, performing convolution and concatenation operations with convolution kernels of different sizes on the original feature layer, respectively, to obtain a fusion feature layer of the picture;
connecting the fusion feature layers in a cross-layer manner to obtain a target pattern prediction feature layer of the picture;
and performing target pattern recognition on the picture according to the target pattern prediction feature layer.
24. A method for obtaining a target pattern prediction feature layer of a picture, characterized by comprising the following steps:
extracting features of a picture to be identified to obtain an original feature layer reflecting the features of the picture;
according to the shortcut of the original feature layer, performing convolution and concatenation operations with convolution kernels of different sizes on the original feature layer, respectively, to obtain a fusion feature layer of the picture;
and connecting the fusion feature layers in a cross-layer manner to obtain a target pattern prediction feature layer of the picture.
CN201811448571.6A 2018-11-29 2018-11-29 Identification recognition method, device and system Active CN111241893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811448571.6A CN111241893B (en) 2018-11-29 2018-11-29 Identification recognition method, device and system


Publications (2)

Publication Number Publication Date
CN111241893A CN111241893A (en) 2020-06-05
CN111241893B true CN111241893B (en) 2023-06-16

Family

ID=70874005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811448571.6A Active CN111241893B (en) 2018-11-29 2018-11-29 Identification recognition method, device and system

Country Status (1)

Country Link
CN (1) CN111241893B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508072B (en) * 2020-11-30 2024-04-26 云南省烟草质量监督检测站 Cigarette true and false identification method, device and equipment based on residual convolution neural network
CN113903026A (en) * 2021-10-09 2022-01-07 数贸科技(北京)有限公司 Commodity picture identification method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110057919A (en) * 2009-11-25 2011-06-01 엘지전자 주식회사 Managing multimedia contents using general objects
CN106485268A (en) * 2016-09-27 2017-03-08 东软集团股份有限公司 A kind of image-recognizing method and device
CN107729818A (en) * 2017-09-21 2018-02-23 北京航空航天大学 A kind of multiple features fusion vehicle recognition methods again based on deep learning
CN107862287A (en) * 2017-11-08 2018-03-30 吉林大学 A kind of front zonule object identification and vehicle early warning method
CN108171260A (en) * 2017-12-15 2018-06-15 百度在线网络技术(北京)有限公司 A kind of image identification method and system
CN108460403A (en) * 2018-01-23 2018-08-28 上海交通大学 The object detection method and system of multi-scale feature fusion in a kind of image
CN108520273A (en) * 2018-03-26 2018-09-11 天津大学 A kind of quick detection recognition method of dense small item based on target detection


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zeng Zhi et al., "Commodity image classification based on multi-feature fusion and deep learning", Computer Engineering and Design, 2017, Vol. 38, No. 11, pp. 3093-3098. *
Wang Huiling et al., "Research progress of object detection technology based on deep convolutional neural networks", Computer Science, 2018, Vol. 45, No. 9, pp. 11-19. *

Also Published As

Publication number Publication date
CN111241893A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
US20190188729A1 (en) System and method for detecting counterfeit product based on deep learning
CN110390054B (en) Interest point recall method, device, server and storage medium
CN110968654B (en) Address category determining method, equipment and system for text data
CN106610972A (en) Query rewriting method and apparatus
CN110866930B (en) Semantic segmentation auxiliary labeling method and device
US9141883B1 (en) Method, hard negative proposer, and classifier for supporting to collect hard negative images using a similarity map
CN104424302B (en) A kind of matching process and device of homogeneous data object
KR20010053788A (en) System for content-based image retrieval and method using for same
CN111291765A (en) Method and device for determining similar pictures
US20180307399A1 (en) Dynamic Thumbnails
CN111241893B (en) Identification recognition method, device and system
CN109934218A (en) A kind of recognition methods and device for logistics single image
CN111859002A (en) Method and device for generating interest point name, electronic equipment and medium
CN113887821A (en) Method and device for risk prediction
CN107577660B (en) Category information identification method and device and server
CN108076439B (en) Method and device for pushing messages based on wireless access point
CN112445926A (en) Image retrieval method and device
CN117743665A (en) Illegal network station identification method, illegal network station identification device, illegal network station identification equipment and storage medium
Angeli et al. Making paper labels smart for augmented wine recognition
CN113569873B (en) Image processing method, device and equipment
CN115168575A (en) Subject supplement method applied to audit field and related equipment
Tian et al. Semantic region proposals for adaptive license plate detection in open environment
CN111597368A (en) Data processing method and device
CN113139121A (en) Query method, model training method, device, equipment and storage medium
CN111445375A (en) Watermark embedding scheme and data processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant