CN112560697A - Jian cup identification method and system based on local features and storage medium - Google Patents

Jian cup identification method and system based on local features and storage medium

Info

Publication number
CN112560697A
CN112560697A (application CN202011502341.0A)
Authority
CN
China
Prior art keywords
Jian cup
feature points
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011502341.0A
Other languages
Chinese (zh)
Inventor
Tian Hui (田辉)
Li Weihai (李卫海)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei High Dimensional Data Technology Co ltd
Original Assignee
Hefei High Dimensional Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei High Dimensional Data Technology Co., Ltd.
Priority to CN202011502341.0A
Publication of CN112560697A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/48 - Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a Jian cup (Jianzhan, 建盏) identification method, system and storage medium based on local features, comprising the following steps: taking photos of the front and the back of a Jian cup as template photos; cropping out the cup region in each template photo with a circle-cropping algorithm, scaling the cropped region to a set size, extracting SIFT feature points, and storing the feature points in a template database; then acquiring a photo of the Jian cup to be identified, extracting its SIFT feature points, and comparing them with the feature points of the template cups in the database to determine authenticity. By adopting a specific feature extraction and feature matching scheme, matching can be completed even if the photo of the cup is partially reflective or the cup in the photo is somewhat offset, so the method is highly robust.

Description

Jian cup identification method and system based on local features and storage medium
Technical Field
The invention relates to the technical field of authenticity identification, in particular to a Jian cup identification method, system and storage medium based on local features.
Background
A Jian cup (Jianzhan) is a type of tea ware fired in the Jian kiln and has high collection value; in recent years its commercial value has grown as that value has become widely recognized. At the same time, many imitation cups have appeared on the market, so tracing and anti-counterfeiting of Jian cups has become an important problem. At present, identification of a Jian cup usually depends on expert appraisal, which is costly. This project provides manufacturers and users of Jian cups with an efficient and reliable solution for identifying imitations; at present no automated Jian cup identification and tracing scheme exists on the market.
In the prior art, experts are required for identification, the cost is high, and the degree of automation is insufficient.
Disclosure of Invention
The invention provides a Jian cup identification method, system and storage medium based on local features: local feature points are extracted from a photo of the Jian cup to serve as the cup's fingerprint and are matched against the local features of template photos to identify authenticity.
In order to achieve the purpose, the invention adopts the following technical scheme:
a cup building identification method based on local features comprises the following steps:
taking pictures of the front and the back of the building cup as template pictures;
cutting out parts of the calendars in the template photo by using a circle cutting algorithm, zooming the cut calendars to a set size, then extracting SIFT feature points, and storing and recording the feature points in a template database on the Internet;
and acquiring the photo of the calendula to be identified, extracting corresponding SIFT feature points, and comparing the SIFT feature points with the feature points of the template calendula in the database to distinguish the authenticity.
Further, the circle-cropping algorithm comprises the following steps:
given the coordinates of four points P1, P2, P3, P4 on the circle, fit the center coordinates (a, b) of the circle, assuming the radius r is known;
for each point Pi, the parameters (a, b) consistent with the constraint of Pi and r are distributed on a circle, and the intersection formed by the constraints of all Pi is the parameter (a, b) to be solved.
Further, the circle-cropping algorithm further comprises the following steps:
the Hough parameter space is discretized into a two-dimensional grid; for each Pi, every cell that its constraint circle passes through has its vote count increased by one, and the value (a, b) corresponding to the cell with the highest number of votes is the solved parameter.
Further, the specific steps of taking photos of the front and the back of the Jian cup as template photos comprise:
during feature acquisition, several photos of the same Jian cup are taken at different angles; each angled photo is independently matched against the front photo, mismatched points are eliminated, and the keypoints matched most often across angles are retained, by voting, as the features of the original cup.
Further, acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database specifically comprises:
matching the feature points of the cup to be identified against the collected feature points of the original cup, regarding the matching process as a retrieval task, and calculating the recall and precision;
the recall R is the ratio of the number of successfully matched feature points to the number of feature points of the original cup;
the precision P is the ratio of the number of successfully matched feature points to the number of feature points of the cup to be identified;
F is then calculated as the F1 score, F = 2/(1/P + 1/R), and matching is considered successful when F exceeds 0.85.
Further, in the back photo img after circle-cropping preprocessing, the foot of the cup is at the very center and the rim of the cup mouth is at the outermost edge of the circle;
by binarizing img, the boundary between the glazed and unglazed parts is found, and the annular glazed region of the back image, called the "glaze ring", is cut out;
the parts of the glaze ring near the center and near the edge are then removed, keeping only the middle band of the glaze layer, from which SIFT features are extracted as the SIFT features of the back image.
In another aspect, the invention provides a Jian cup identification system based on local features, comprising the following units:
a template photo acquisition unit, used for taking photos of the front and the back of the Jian cup as template photos;
a template processing unit, used for cropping out the cup region in each template photo with the circle-cropping algorithm, scaling the cropped region to a set size, then extracting SIFT feature points, and storing the feature points in a template database on the Internet;
and an identification unit, used for acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database to determine authenticity.
In a third aspect, the present invention also discloses a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above method.
According to the above technical scheme, the novel automatic Jian cup identification method of the invention extracts features of the glaze pattern in a photo of the cup as the cup's unique "fingerprint". Before the cups are sold, the manufacturer photographs every cup and extracts its features to build a template library. When it is necessary to verify whether a cup is genuine, features are extracted from a photo of the cup and matched against the features in the template library; if they match, the cup is genuine.
By adopting a specific feature extraction and feature matching scheme, matching can be completed even if the photo of the cup is partially reflective or the cup in the photo is somewhat offset, so the method is highly robust.
The product is a Jian cup identification method: before a batch of cups is put on the market, a texture-feature archive photo is taken of each cup; the consumer then uses a smartphone to take one or more inspection photos of the purchased cup and sends them to the database of the computer recognition system for comparison and authenticity identification.
Compared with the prior art, the scheme has the following advantages: 1. automation: the only manual step in the whole process is photographing; 2. convenience: the consumer need not go to great lengths to send the cup to an expert for appraisal; identification can be completed remotely simply by taking a photo and uploading it.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of the use of the present invention;
fig. 3 is a flow chart of the use of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
As shown in fig. 1, the Jian cup identification method based on local features in this embodiment comprises the following steps:
taking photos of the front and the back of the Jian cup as template photos;
cropping out the cup region in each template photo with a circle-cropping algorithm, scaling the cropped region to a set size, then extracting SIFT feature points, and storing the feature points in a template database on the Internet;
and acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database to determine authenticity.
Specifically:
the circle-cutting algorithm comprises the following steps:
given the coordinates of four points, P1, P2, P3, P4, the center coordinates (a, b) of the circle are fitted to, assuming r is known;
for each point Pi, the parameters (a, b) under the constraint of Pi and r are distributed on a circle, and the intersection formed by the constraints of all Pi is the parameter (a, b) to be solved.
The circle-cutting algorithm further comprises the following steps:
combining all Pi, the intersection formed by the constraints is the solved parameter (a, b), then the Hough parameter space is discretized into a two-dimensional grid, if a circle passes through the cell, the number of votes of the cell grid is increased by one, and the value (a, b) corresponding to the grid with the highest number of votes is selected, namely the solved parameter.
The specific steps of taking photos of the front and the back of the Jian cup as template photos comprise:
during feature acquisition, several photos of the same Jian cup are taken at different angles; each angled photo is independently matched against the front photo, mismatched points are eliminated, and the keypoints matched most often across angles are retained, by voting, as the features of the original cup.
Acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database specifically comprises:
matching the feature points of the cup to be identified against the collected feature points of the original cup, regarding the matching process as a retrieval task, and calculating the recall and precision;
the recall R is the ratio of the number of successfully matched feature points to the number of feature points of the original cup;
the precision P is the ratio of the number of successfully matched feature points to the number of feature points of the cup to be identified;
F is then calculated as the F1 score, F = 2/(1/P + 1/R), and matching is considered successful when F exceeds 0.85.
The following is a detailed description:
Because of problems such as varying illumination and shadows when the user takes a photo, preprocessing is needed to improve the matching effect.
1. Circle-cropping algorithm: first, the circular rim of the cup in the photo is detected with the circle-cropping algorithm so that useless information such as the background is removed; at the same time, the detected cup region is scaled and stretched to a specified size and shape.
2. The Jian cup is circular, and its pattern appears rotated under different shooting angles, orientations and illumination, so the rotation-invariant SIFT algorithm is used as the means of feature extraction;
3. Feature extraction from the back image of the cup: a feature extraction scheme optimized for the glaze is used, considering that a cup inevitably suffers some wear and impact during storage, producing flaws that are not obvious on the parts protected by the glaze layer. Because of the firing process, the glaze on the back tends to be thick in the middle and thin at the edges, and the foot of the cup is never protected by a glaze film.
To reduce the interference of these flaws with identification, the invention processes the back photos specially. In the back photo img after circle-cropping preprocessing, the foot of the cup is at the very center and the rim of the cup mouth is at the outermost edge of the circle. There are clear edges between the unglazed and glazed parts, so binarizing img effectively separates the boundary between them; the annular glazed region of the back image, called the "glaze ring", is thus cut out. The parts of the glaze ring near the center and near the edge are then removed, keeping only the middle band of the glaze layer, from which SIFT features are extracted as the SIFT features of the back image.
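One way to sketch the "glaze ring" extraction in NumPy. The mean-intensity threshold, the 0.5 ring-fraction rule and the 15% margins are illustrative assumptions; the patent only specifies binarization and trimming the inner and outer parts of the ring:

```python
import numpy as np

def extract_glaze_ring(img_gray, margin=0.15):
    """Given a circle-cropped, centred grayscale back-side photo,
    binarize to separate glazed (darker) from unglazed (lighter) areas,
    estimate the glazed annulus, and keep only its middle band."""
    h, w = img_gray.shape
    cy, cx = h / 2.0, w / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    # Simple global binarization; a real pipeline might use Otsu.
    glazed = img_gray < img_gray.mean()
    # Fraction of glazed pixels at each integer radius.
    r_int = radius.astype(int)
    max_r = min(h, w) // 2
    frac = np.array([glazed[r_int == r].mean() if (r_int == r).any() else 0.0
                     for r in range(max_r)])
    ring = np.where(frac > 0.5)[0]
    if ring.size == 0:
        return None
    r_in, r_out = ring.min(), ring.max()
    # Trim the margins near the centre and near the rim; keep the middle band.
    band_lo = r_in + (r_out - r_in) * margin
    band_hi = r_out - (r_out - r_in) * margin
    mask = (radius >= band_lo) & (radius <= band_hi)
    return np.where(mask, img_gray, 0)
```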
4. Cup matching algorithm: a feature point screening and matching algorithm specially optimized for texture features is used.
In SIFT feature point matching, nearest-neighbor matching is generally adopted: for two images img1 and img2 and a SIFT feature point point1 on img1, the SIFT keypoint on img2 closest to point1 is found; repeating this for every feature point of img1 yields the matching points of img1 in img2.
Since this method can produce many mismatched points, in practice the two keypoints on img2 closest to point1 are found: the nearest, point21, at distance dis1, and the second nearest, point22, at distance dis2. If dis1/dis2 is less than 0.8, the pair is regarded as a correct match; otherwise it is discarded as a mismatch.
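The ratio test just described, sketched in pure NumPy; in a real pipeline this would typically be done with cv2.BFMatcher.knnMatch, and the names here are illustrative:

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Lowe-style ratio test: for each descriptor in desc1, find its two
    nearest neighbours in desc2 and keep the match only when
    dis1/dis2 < ratio, discarding ambiguous (likely mismatched) pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```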
The invention uses a special keypoint matching algorithm in the identification task. First, when collecting the features of an original cup, several photos of the same cup are taken at different angles; each angled photo is independently matched against the front photo, mismatched points are eliminated, and the keypoints matched most often across angles are retained, by voting, as the features of the original cup.
During identification, the feature points of the cup to be identified are matched against the collected feature points of the original cup; the matching process is regarded as a retrieval task and the recall and precision are calculated. The recall R is the ratio of the number of successfully matched feature points to the number of feature points of the original cup; the precision P is the ratio of the number of successfully matched feature points to the number of feature points of the cup to be identified. F is then calculated as the F1 score, F = 2/(1/P + 1/R), and matching is considered successful when F exceeds 0.85.
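The acceptance rule above can be sketched as follows; the harmonic mean F of precision and recall is the F1 score, and 0.85 is the patent's threshold:

```python
def match_decision(n_matched, n_template, n_query, threshold=0.85):
    """Recall R = matched / template points, precision P = matched /
    query points, F = their harmonic mean (the F1 score); the cup is
    accepted as genuine when F exceeds the threshold."""
    if n_matched == 0:
        return 0.0, False
    recall = n_matched / n_template
    precision = n_matched / n_query
    f_score = 2.0 / (1.0 / precision + 1.0 / recall)
    return f_score, f_score > threshold
```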
Description of the principles of the invention:
SIFT algorithm implementation
The SIFT algorithm can be roughly divided into the following steps:
1. detecting an extreme value of the scale space;
2. positioning key points;
3. orientation;
4. a keypoint descriptor;
1. extremum detection in scale space
In the scale-space extremum detection stage, the interest points, i.e., the keypoints in the SIFT framework, are detected. The image is convolved with Gaussian filters at different scales, and the differences of successively Gaussian-blurred images are then taken to find keypoints.
A keypoint is a maximum or minimum of the Difference of Gaussians (DoG) across scales. That is, the DoG image D(x, y, σ) is given by:
D(x, y, σ) = L(x, y, k_i·σ) − L(x, y, k_j·σ)
where L(x, y, kσ) is the convolution of the original image I(x, y) with the Gaussian blur G(x, y, kσ) at scale kσ:
L(x, y, kσ) = G(x, y, kσ) * I(x, y)
and G(x, y, kσ) is the scale-variable Gaussian function:
G(x, y, kσ) = 1/(2π(kσ)²) · exp(−(x² + y²)/(2(kσ)²))
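As an illustrative sketch (not part of the patent), the DoG of the formulas above can be computed with SciPy's Gaussian filter; the σ and k values are arbitrary examples:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=2 ** 0.5):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma), where L is
    the image convolved with a Gaussian of the given scale."""
    wide = gaussian_filter(image.astype(float), k * sigma)
    narrow = gaussian_filter(image.astype(float), sigma)
    return wide - narrow
```

In a full SIFT octave this is repeated for a ladder of k powers, and the image is halved between octaves.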
Once the DoG images are obtained, their maxima and minima can be found as keypoint candidates. To determine a keypoint, each pixel in a DoG image is compared with its eight neighbours in the same scale and with the nine pixels at the corresponding position in each of the two adjacent scales of the same octave, twenty-six pixels in total; if the pixel is the maximum or the minimum of these twenty-six neighbours, it is taken as a keypoint.
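The 26-neighbour test can be sketched in NumPy as follows; `dog_stack` is assumed to be a (scale, y, x) array of three adjacent DoG layers (an illustrative helper, not the patent's code):

```python
import numpy as np

def is_extremum(dog_stack, s, y, x):
    """Check whether pixel (y, x) in DoG layer s is a maximum or minimum
    of its 26 neighbours: the 3x3x3 cube across the layer below, the
    same layer and the layer above, excluding the pixel itself."""
    cube = dog_stack[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    centre = dog_stack[s, y, x]
    neighbours = np.delete(cube.ravel(), 13)  # index 13 is the centre
    return centre > neighbours.max() or centre < neighbours.min()
```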
2. Key point localization
Too many keypoint candidates may be found across the scale space, some of which are poorly localized or susceptible to noise. The next step of the SIFT algorithm uses the pixels near each keypoint, its scale, and its principal curvatures to refine its location, eliminating keypoints that lie on edges or are easily disturbed by noise.
3. Orientation of orientation
In orientation assignment, the distribution of gradient directions of the pixels neighbouring a keypoint is used to assign it a dominant direction, so that the keypoint descriptor can be expressed relative to this direction and thereby gain rotation invariance.
4. Key point descriptor
After the positions and scales of the keypoints are found and directions are assigned to them, invariance to translation, scaling and rotation is ensured. In addition, a descriptor vector is built for each keypoint so that it remains invariant under different lighting and viewing angles and can be easily distinguished from other keypoints.
To make the descriptor invariant under different lighting, it is normalized to a 128-dimensional unit vector. An eight-direction histogram is built in each 4×4 sub-region; in the 16×16 region around the keypoint there are 4×4 such sub-regions, and the gradient magnitude and direction of each pixel are computed and added to its sub-region's histogram, generating 16 × 8 = 128 dimensions in total. To reduce the influence of non-linear brightness changes, vector entries larger than 0.2 are clipped to 0.2; finally the renormalized vector is multiplied by 256 and stored as 8-bit unsigned numbers, effectively reducing storage space.
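A minimal NumPy sketch of this normalize-clip-renormalize-quantize step; the function name is illustrative, and the ×256 quantization follows the description above (values are clipped into the 8-bit range):

```python
import numpy as np

def finalize_descriptor(hist_128):
    """Normalize a raw 128-D orientation histogram the way the text
    describes: unit-normalize, clip entries above 0.2 to damp
    non-linear lighting changes, renormalize, then quantize to
    8-bit unsigned integers by multiplying by 256."""
    v = hist_128 / np.linalg.norm(hist_128)
    v = np.minimum(v, 0.2)        # clip dominant bins at 0.2
    v = v / np.linalg.norm(v)     # renormalize after clipping
    return np.clip(np.round(v * 256), 0, 255).astype(np.uint8)
```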
SIFT Properties
Translation invariance: SIFT is a local feature; only samples of a rectangular area near the keypoint are extracted, so similar features are obtained wherever the object moves. Also, because extraction is performed over a grid of cells, the features barely change even if the keypoint shifts slightly, somewhat like pooling in HOG or CNNs.
Rotation invariance: before the gradient histograms of the grid are computed, the region is rotated to the dominant direction, giving a degree of rotation invariance.
Illumination invariance: the feature vector is normalized when computed, and normalized again after clipping at the 0.2 threshold, cancelling part of the illumination's influence.
Scale invariance: the range over which the feature is computed is determined by the scale obtained from the earlier LoG (approximated by DoG) computation, so similar features are obtained at different scales.
Hough circle detection
The Hough circle transform maps a circle in the two-dimensional image space to a point in a three-dimensional parameter space determined by the circle's radius and the coordinates of its center. The process resembles voting: the points on the circumference are the voters, the circles they determine are the candidates, and after traversal the candidate circle with the highest number of votes, through which most points pass, is elected as the detected circle. (In theory, the circles determined by any three points on the same circumference correspond to the same point in the three-dimensional parameter space after the Hough transform.)
In a two-dimensional cartesian coordinate system, the analytical equation for a circle is expressed as follows:
(x − a)² + (y − b)² = r²
where (x, y) is any point on the circle, (a, b) is the center, and r is the radius. Given any point (x, y) on the circle, the set of parameters (a, b, r) consistent with it forms an inverted conical surface with apex (x, y, 0) in the three-dimensional space; this space is the Hough parameter space.
If r is known,
i.e., given a point (x, y) on the circle and the circle's radius r, the Hough parameter space degenerates to a circle in the plane: the parameters (a, b) are distributed on the circle centered at (x, y) with radius r.
An example is as follows:
given the coordinates of four points P1, P2, P3, P4, fit the center coordinates (a, b) of the circle (assuming here that r is known).
For each point Pi, the parameters (a, b) consistent with the constraints of Pi and r are distributed on a circle, and the intersection formed by combining the constraints of all Pi is the required parameter (a, b).
The Hough parameter space is discretized into a two-dimensional grid; every cell that a constraint circle passes through has its vote count increased by one, and the value (a, b) corresponding to the cell with the highest number of votes is the required parameter.
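The voting scheme above can be sketched in NumPy; the grid size, coordinate range and one-degree sampling step are illustrative choices, not values from the patent:

```python
import numpy as np

def hough_centre(points, r, grid=64, lo=0.0, hi=10.0):
    """Vote for the circle centre (a, b) given edge points and a known
    radius r: each point votes along its own circle of candidate
    centres in a discretized (a, b) grid, and the best cell wins."""
    acc = np.zeros((grid, grid), int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for (x, y) in points:
        # Candidate centres lie on a circle of radius r around the point.
        a = x - r * np.cos(thetas)
        b = y - r * np.sin(thetas)
        ia = ((a - lo) / (hi - lo) * grid).astype(int)
        ib = ((b - lo) / (hi - lo) * grid).astype(int)
        ok = (ia >= 0) & (ia < grid) & (ib >= 0) & (ib < grid)
        # One vote per crossed cell per point (deduplicate theta samples).
        for cell in set(zip(ia[ok], ib[ok])):
            acc[cell] += 1
    best = np.unravel_index(acc.argmax(), acc.shape)
    cell = (hi - lo) / grid
    return lo + (best[0] + 0.5) * cell, lo + (best[1] + 0.5) * cell
```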
In application, the steps of the invention are as follows:
1. Before a batch of Jian cups is put on the market, at least two template photos (the front and the back of the cup) are taken of each cup with a professional camera;
2. The cup region in each template photo is cropped out with the circle-cropping algorithm, the cropped region is scaled to a uniform size, SIFT feature points are extracted, and the feature points are stored in a template database on the Internet;
3. To verify authenticity, the consumer photographs the purchased cup head-on with a smartphone (the photo must contain the whole cup) and sends the inspection photo to the computer recognition system's database on the Internet through existing communication tools on the smartphone such as SMS, MMS, WeChat, Yixin, QQ or an app. The advantage is that consumers do not need to download a dedicated anti-counterfeiting app in advance, so verification is convenient, which can greatly increase the rate of anti-counterfeiting queries and thus the anti-counterfeiting effect;
4. The computer recognition system first extracts SIFT features from the inspection photo as in step 2 and compares them with the feature points of the template photos in the database. If the features extracted from the inspection photo are consistent with those of some archived photo, the system returns a "genuine" conclusion to the consumer's smartphone; if they do not match the features of any archived photo, it returns a "counterfeit" conclusion.
In conclusion, the product is a Jian cup identification method: texture-feature archive photos are taken of each cup before a batch of cups is put on the market; the consumer uses a smartphone to take one or more inspection photos of the purchased cup, and the photos are sent to the database of the computer recognition system for comparison and authenticity identification.
The invention extracts the features of the glaze pattern in a photo of the cup as the cup's unique "fingerprint". Before the cups are sold, the manufacturer photographs every cup and extracts its features to build a template library. When it is necessary to verify whether a cup is genuine, features are extracted from a photo of the cup and matched against the features in the template library; if they match, the cup is genuine. By adopting a specific feature extraction and feature matching scheme, matching can be completed even if the photo of the cup is partially reflective or the cup in the photo is somewhat offset, so the method is highly robust.
Meanwhile, the invention also discloses a Jian cup identification system based on local features, comprising the following units:
a template photo acquisition unit, used for taking photos of the front and the back of the Jian cup as template photos;
a template processing unit, used for cropping out the cup region in each template photo with the circle-cropping algorithm, scaling the cropped region to a set size, then extracting SIFT feature points, and storing the feature points in a template database on the Internet;
and an identification unit, used for acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database to determine authenticity.
In another aspect, the invention also discloses a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method described above.
The invention also discloses a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method described above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A Jian cup identification method based on local features, characterized by comprising the following steps:
photographing the front and back of the Jian cup as template photos;
cropping out the Jian cup region from the template photo using a circle-cropping algorithm, scaling the cropped cup image to a set size, then extracting SIFT feature points, and storing the feature points in a template database;
and acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database to determine authenticity.
2. The local-feature-based Jian cup identification method according to claim 1, wherein the circle-cropping algorithm comprises the following steps:
given the coordinates of four points P1, P2, P3, P4, and assuming the radius r is known, fitting the center coordinates (a, b) of the circle;
for each point Pi, the parameters (a, b) satisfying the constraints of Pi and r lie on a circle, and the intersection formed by the constraints of all Pi gives the parameters (a, b) to be solved.
3. The local-feature-based Jian cup identification method according to claim 2, wherein the circle-cropping algorithm further comprises the following steps:
combining all Pi, the intersection formed by their constraints gives the parameters (a, b) to be solved; the Hough parameter space is discretized into a two-dimensional grid, the vote count of a cell is incremented by one whenever a circle passes through it, and the (a, b) value corresponding to the cell with the highest vote count is taken as the solved parameters.
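The Hough voting above can be sketched in pure Python, assuming the radius r is known as stated in claim 2. The function name, grid step, and angular sampling density are illustrative assumptions:

```python
import math
from collections import Counter

def hough_circle_center(points, r, grid_step=1.0, n_angles=360):
    """For each point Pi, the candidate centers (a, b) lie on the circle
    of radius r around Pi; vote in a discretized (a, b) grid and return
    the cell with the most votes."""
    votes = Counter()
    for x, y in points:
        cells = set()  # one vote per point per cell
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            a = round((x + r * math.cos(t)) / grid_step)
            b = round((y + r * math.sin(t)) / grid_step)
            cells.add((a, b))
        for cell in cells:
            votes[cell] += 1
    (a, b), _ = votes.most_common(1)[0]
    return a * grid_step, b * grid_step

# four points on a circle of radius 5 centered at (10, 10)
pts = [(15, 10), (10, 15), (5, 10), (10, 5)]
center = hough_circle_center(pts, 5)  # (10.0, 10.0)
```

Only the true center lies on all four constraint circles, so its cell collects the most votes even if individual points are slightly noisy.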
4. The local-feature-based Jian cup identification method according to claim 1, wherein photographing the front and back of the Jian cup as template photos specifically comprises:
during feature acquisition, taking several photos of the same Jian cup from different angles, matching each angled photo independently against the front photo and eliminating mismatched points, and retaining by voting the keypoints matched most often as the features of the original cup.
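The voting rule described above, keeping the front-photo keypoints that survive matching in the most angled photos, might be sketched as follows; the function name and the vote threshold are assumptions:

```python
from collections import Counter

def stable_keypoints(front_ids, per_angle_matches, min_votes=2):
    """Keep front-photo keypoints successfully matched (after mismatch
    elimination) in at least min_votes of the angled photos."""
    votes = Counter()
    for matched_ids in per_angle_matches:
        votes.update(set(matched_ids))  # each angled photo votes once per keypoint
    return [k for k in front_ids if votes[k] >= min_votes]

# keypoint ids matched in each of three angled photos
kept = stable_keypoints([0, 1, 2, 3], [[0, 1], [0, 2], [0, 1]])  # [0, 1]
```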
5. The local-feature-based Jian cup identification method according to claim 1, wherein:
acquiring a photo of the Jian cup to be identified, extracting the corresponding SIFT feature points, and comparing them with the feature points of the template cups in the database to determine authenticity specifically comprises:
matching the feature points of the Jian cup to be identified against the collected original cup feature points, treating the feature-point matching process as a retrieval task, and computing recall and precision;
the recall is the ratio R of the number of successfully matched feature points to the number of original cup feature points;
the precision is the ratio P of the number of successfully matched feature points to the number of feature points of the cup to be identified;
F = 2/(1/P + 1/R) is then calculated, and the match is considered successful when F exceeds 0.85.
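A minimal sketch of this decision rule, reading F as the harmonic mean F1 = 2/(1/P + 1/R), the only reading under which the 0.85 threshold is reachable (1/(1/P + 1/R) never exceeds 0.5); the function name and toy counts are assumptions:

```python
def match_decision(n_matched, n_template, n_query, threshold=0.85):
    """Recall R: matched points / template cup feature points.
    Precision P: matched points / feature points of the cup under test.
    The match succeeds when the harmonic mean F1 exceeds threshold."""
    if n_matched == 0:
        return 0.0, False
    r = n_matched / n_template
    p = n_matched / n_query
    f1 = 2 / (1 / p + 1 / r)  # equivalently 2*p*r / (p + r)
    return f1, f1 > threshold

f1, ok = match_decision(90, 100, 100)  # f1 = 0.9, so the match succeeds
```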
6. The local-feature-based Jian cup identification method according to claim 1, wherein:
for a back photo img preprocessed by circle cropping, the cup bottom lies at the exact center and the rim of the cup mouth lies at the outermost edge of the circle;
img is binarized to find the boundary between the glazed and unglazed parts, so that the annular glazed region of the back image, called the "glaze ring", is cropped out;
and the portions of the glaze ring close to the center and to the edge are removed, keeping only the middle of the glaze layer, from which SIFT features are then extracted as the SIFT features of the back photo.
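The glaze-ring cropping amounts to binarization combined with an annulus mask. A pure-Python sketch on a synthetic grayscale grid follows; the function name, radii, and binarization threshold are assumptions:

```python
import math

def glaze_ring(img, center, r_inner, r_outer, thresh=128):
    """Binarize the grayscale back image and keep only pixels inside the
    annulus between r_inner and r_outer (the middle of the glaze layer),
    discarding areas near the center and near the rim."""
    cx, cy = center
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = math.hypot(x - cx, y - cy)
            if r_inner < d < r_outer and img[y][x] >= thresh:
                out[y][x] = 255  # glazed pixel inside the ring
    return out

img = [[200] * 7 for _ in range(7)]   # synthetic all-glazed back image
ring = glaze_ring(img, (3, 3), 1, 3)  # keeps only the annulus around (3, 3)
```

SIFT features would then be extracted only from the surviving annular region.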
7. A Jian cup identification system based on local features, characterized by comprising the following units:
a template photo acquisition unit, configured to photograph the front and back of the Jian cup as template photos;
a template processing unit, configured to crop out the Jian cup region from the template photo using a circle-cropping algorithm, scale the cropped cup image to a set size, then extract SIFT feature points, and store the feature points in a template database on the Internet;
and an identification unit, configured to acquire a photo of the Jian cup to be identified, extract the corresponding SIFT feature points, and compare them with the feature points of the template cups in the database to determine authenticity.
8. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1-3.
CN202011502341.0A 2020-12-17 2020-12-17 Cup building identification method and system based on local features and storage medium Pending CN112560697A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011502341.0A CN112560697A (en) 2020-12-17 2020-12-17 Cup building identification method and system based on local features and storage medium


Publications (1)

Publication Number Publication Date
CN112560697A true CN112560697A (en) 2021-03-26

Family

ID=75063463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011502341.0A Pending CN112560697A (en) 2020-12-17 2020-12-17 Cup building identification method and system based on local features and storage medium

Country Status (1)

Country Link
CN (1) CN112560697A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484671A (en) * 2014-11-06 2015-04-01 吉林大学 Target retrieval system applied to moving platform
CN106056121A (en) * 2016-05-27 2016-10-26 河北工业大学 Satellite assembly workpiece fast-identification method based on SIFT image feature matching
CN108288012A (en) * 2017-01-09 2018-07-17 北京艺鉴通科技有限公司 A kind of art work realized based on mobile phone is put on record verification method and its system
CN109212377A (en) * 2018-09-27 2019-01-15 国网山东省电力公司电力科学研究院 A kind of high-tension line obstacle recognition method, device, crusing robot
WO2020082577A1 (en) * 2018-10-26 2020-04-30 平安科技(深圳)有限公司 Seal anti-counterfeiting verification method, device, and computer readable storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG, DAN: "Semantics-Based Entity Retrieval in Dataspaces", 31 October 2019 *
ZHAO, XIAOCHUAN: "Creative Design of Robot Technology", 31 January 2013 *

Similar Documents

Publication Publication Date Title
US10510152B2 (en) Systems, methods, and devices for image matching and object recognition in images using textures
US9508151B2 (en) Systems, methods, and devices for image matching and object recognition in images using image regions
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN108491498B (en) Bayonet image target searching method based on multi-feature detection
CN110546651B (en) Method, system and computer readable medium for identifying objects
CN108288012B (en) Artwork filing verification method and system based on mobile phone
Bi et al. Multi-scale feature extraction and adaptive matching for copy-move forgery detection
CN102209975A (en) Method for acquiring region-of-interest and/or cognitive information from eye image
CN108171127A (en) A kind of invoice automatic identifying method based on deep learning
CN104182973A (en) Image copying and pasting detection method based on circular description operator CSIFT (Colored scale invariant feature transform)
Liu et al. Copy move forgery detection based on keypoint and patch match
CN113011426A (en) Method and device for identifying certificate
Maaten et al. Computer vision and machine learning for archaeology
Emam et al. A robust detection algorithm for image Copy-Move forgery in smooth regions
CN110309831B (en) Non-intelligent water meter reading method based on machine vision
Rey-Otero et al. Comparing feature detectors: A bias in the repeatability criteria
CN112560697A (en) Cup building identification method and system based on local features and storage medium
CN116503622A (en) Data acquisition and reading method based on computer vision image
Agarwal et al. The advent of deep learning-based image forgery detection techniques
CN115994996A (en) Collation apparatus, storage medium, and collation method
Akoum et al. Image Forgery Analyse and Detection
Rodrigues 3D pose estimation for bin-picking: A data-driven approach using multi-light images
CN116485887B (en) Unsupervised 3D carton detection method and system
RU2778906C1 (en) Method for automatically recognizing scenes and objects in an image
CN115994906B (en) Material image positioning method and device based on contour position index

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 230088 21 / F, building A1, phase I, Zhongan chuanggu Science Park, No. 900, Wangjiang West Road, high tech Zone, Hefei, Anhui

Applicant after: HEFEI HIGH DIMENSIONAL DATA TECHNOLOGY Co.,Ltd.

Address before: 230088 Block C, j12 Innovation Industrial Park, 2800 innovation Avenue, high tech Zone, Hefei City, Anhui Province

Applicant before: HEFEI HIGH DIMENSIONAL DATA TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210326
