CN109299654B - Artwork single piece tracing identification method - Google Patents

Publication number: CN109299654B (application CN201810894545.XA; other version CN109299654A)
Authority: CN (China)
Legal status: Active
Other languages: Chinese (zh)
Inventor: 王仰池
Assignee (original and current): Jiangxi Chifang Artwork Assessment Identification Co ltd
Prior art keywords: layer, image, view, field image, sub

Classifications

    • G06V 20/693: acquisition (microscopic objects, e.g. biological cells or cellular parts)
    • G06V 20/695: preprocessing, e.g. image segmentation (microscopic objects)
    • G06V 20/698: matching; classification (microscopic objects)
    • G06V 10/44: local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V 10/56: extraction of image or video features relating to colour
    • G06T 7/62: analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/90: determination of colour characteristics
    • G06T 2207/10061: microscopic image from scanning electron microscope (image acquisition modality)
    • G06T 2207/20016: hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an artwork uniqueness identification method, specifically a single-piece traceability identification method for judging whether an artwork is the original piece. It comprises the following steps: (1) selecting a total view field at a certain position on the original artwork; (2) magnifying the total view field with a microscope and extracting the total view field image; (3) determining the total view field on the artwork to be determined, it being unknown whether that artwork is the original; the total view field position on the original artwork is the same as the total view field position on the artwork to be determined; (4) scanning the total view field on the artwork to be determined and, during scanning, comparing and analyzing the comparison image acquired by the microscope against the stored total view field image by image recognition technology; if the comparison between the comparison image and the standard image succeeds, the artwork to be determined is the original artwork; otherwise it is a non-original.

Description

Artwork single piece tracing identification method
Technical Field
The invention relates to the technical field of artwork identification, and in particular to a single-piece traceability identification method for verifying the uniqueness of an artwork, that is, for judging whether an artwork is the original piece.
Background
Artworks include calligraphy and paintings, metal wares, wooden wares, ceramics and other objects, and can carry extremely high value; pieces handed down from antiquity in particular are prized by many. After an artwork has changed hands many times, however, the final holder has no effective way to judge whether the piece held is the original artwork.
With the development of science, imaging technology plays an important role in many fields. Detection methods and display means have become more accurate, more intuitive and more complete, allowing people to observe the tissue of objects and understand the structure of materials; this development is the result of the combination of physics, mathematics, electronics, computer science and other disciplines. Optical tomography, for example, is a new non-contact, non-invasive way of imaging sections of object tissue and microscopic structure.
Face recognition technology is based on the facial features of a person: the identity features contained in each face are extracted from an input face image or video stream and compared with known faces, so that the identity behind each face is recognized.
Image recognition is an important field of artificial intelligence. It is based on the principal features of an image: every image has its own features (for example, the letter A has a point, P has a loop, and the centre of Y has an acute angle), and through the operation of a computer program an image can be recognized and analyzed by means of these features.
In conclusion, if artwork identification is supported by modern science in this way, a powerful theoretical basis can be provided for identification conclusions.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, meet practical needs, and provide a single-piece traceability identification method for artworks.
To realize this aim, the technical scheme adopted by the invention is as follows:
A single-piece traceability identification method for artworks, characterized in that it comprises the following steps:
(1) selecting a total view field at a certain position on the original artwork;
(2) magnifying the total view field with a microscope and extracting the total view field image;
(3) determining a total view field on the artwork to be determined, it being unknown whether that artwork is the original artwork; the total view field position on the original artwork corresponds exactly to the total view field position on the artwork to be determined;
(4) scanning the total view field on the artwork to be determined and, during scanning, comparing and analyzing the comparison image acquired by the microscope against the total view field image extracted from the original artwork by image recognition technology; if the comparison succeeds, the artwork to be determined is the original artwork; otherwise it is not the original.
The total view field image in the step (2) is formed by overlapping at least two view field images distributed from top to bottom, and the obtaining method comprises the following steps:
a. determining a focus by a microscope;
b. extracting a first layer view field image;
c. downwards scanning, and extracting a second layer of view field image, wherein the second layer of view field image is positioned right below the first layer of view field image;
d. repeating the step c, and sequentially extracting the (N + 1) th layer of view field image, wherein N is a natural number more than or equal to 2, and the (N + 1) th layer of view field image is positioned right below the (N) th layer of view field image;
e. and sequentially arranging the extracted first-layer view field image to the (N + 1) th-layer view field image from top to bottom according to the depth during extraction, and then combining the images to obtain a total view field image.
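The arrangement and combination in step e can be sketched as follows; a minimal illustration assuming each extracted layer view field image is available as a 2-D array of equal size (array shapes and values are invented for the example, not taken from the patent):

```python
import numpy as np

def combine_layers(layers):
    """Stack layer view field images (shallowest first) into one total view field volume.

    `layers` is a list of 2-D arrays already sorted by extraction depth;
    the result has shape (n_layers, H, W), preserving the top-to-bottom order.
    """
    if len(layers) < 2:
        raise ValueError("the total view field image needs at least two layers")
    shape = layers[0].shape
    if any(l.shape != shape for l in layers):
        raise ValueError("all layer view field images must have the same size")
    return np.stack(layers, axis=0)

# Illustrative 5-layer stack of 4x6 images (constant values stand in for pixel data).
layers = [np.full((4, 6), k, dtype=np.uint8) for k in range(5)]
total = combine_layers(layers)
print(total.shape)  # (5, 4, 6)
```

Each depth slice of `total` is one layer view field image, so a vertical column through the volume corresponds to one sub-image of the kind the comparison step later uses.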
Each layer view field image is formed by combining several sub view field images; the method for acquiring each layer view field image in steps b, c and d is as follows:
I. constructing the first layer view field image, and extracting the 1st sub view field image at the center of the first layer view field image;
II. taking the 1st sub view field image as the center, extracting the 2nd sub view field image at the edge of the 1st sub view field image; the 2nd sub view field image overlaps the edge of the 1st, and the two are combined into the 1st sub view field image unit;
III. extracting the 3rd sub view field image at the edge of the 1st sub view field image unit, overlapping it with that edge, and combining the two into the 2nd sub view field image unit;
IV. repeating step III to construct the Mth sub view field image unit, until the area of the Mth sub view field image unit is the same as that of the first layer view field image, wherein M is a natural number greater than 2;
V. moving the lens downwards, and constructing a second layer view field image below the first layer view field image;
VI. repeating steps I to V to complete the extraction of the second layer view field image;
VII. repeating step VI, sequentially completing the extraction of the (N+1)th layer view field image.
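A rough sketch of the edge-wise combination above, under the simplifying assumption that each sub view field image's placement on the layer canvas is already known. Overlapping pixels are simply overwritten here, whereas an actual stitcher would register and blend the overlap; all sizes and positions are illustrative, not from the patent:

```python
import numpy as np

def stitch_layer(tiles, positions, canvas_shape):
    """Place sub view field tiles onto a layer canvas at given (row, col) offsets.

    Overlapping regions are overwritten by the later tile; a real system
    would align the tiles using the overlap and blend the seam instead.
    """
    canvas = np.zeros(canvas_shape, dtype=float)
    for tile, (r, c) in zip(tiles, positions):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] = tile
    return canvas

# Three 4x6 tiles placed with a 2-pixel horizontal overlap (standing in for
# the embodiment's 0.1 mm overlap between neighbouring sub view field images).
tiles = [np.ones((4, 6)) * k for k in (1, 2, 3)]
positions = [(0, 0), (0, 4), (0, 8)]
layer = stitch_layer(tiles, positions, (4, 14))
print(layer.shape)  # (4, 14)
```

Repeating this for each depth yields the first to (N+1)th layer view field images that step e then stacks.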
The step (4) is specifically as follows:
① extracting a first-layer sub view field image from the first layer view field, by the same method as step I above;
② scanning downwards, and extracting a second-layer sub view field image from the second layer view field, directly below the first-layer sub view field image;
③ repeating step ②, sequentially extracting the third-layer, fourth-layer, ..., (N+1)th-layer sub view field images below the second-layer sub view field image;
④ superposing and combining the extracted first-layer to (N+1)th-layer sub view field images according to their vertical positions at extraction, to form a comparison image;
⑤ comparing and analyzing the comparison image against the total view field image by image recognition technology.
The beneficial effects of the invention are as follows:
The invention takes as its basis the unique microscopic characteristics of a single, individual artwork, with electron-microscope magnification, image recognition and face recognition technologies as the main technical supports. A certain part of the artwork is magnified and an image extracted; the extracted image then serves as the reference in subsequent discrimination. By microscopic comparison against the initially extracted and retained image, it can be judged at the microscopic scale whether an artwork under examination and the initially imaged artwork are the same piece. The method thus places modern scientific technology in the service of artwork identification and provides reliable, powerful support for it.
Drawings
FIG. 1 is a schematic diagram of a single-layer view field image construction method according to the present invention;
FIG. 2 is a schematic diagram of a total field of view image construction method in the method of the present invention;
FIG. 3 is a schematic diagram of a method for combining five sub-field images extracted from an artwork to be determined into a comparison image according to the present invention;
FIG. 4 is an image of a certain part of the blue and white porcelain obtained by the method (an image obtained by combining the first to fifth layer view field images):
FIG. 4-01 is the first layer view field image corresponding to FIG. 4;
FIG. 4-02 is the second layer view field image corresponding to FIG. 4;
FIG. 4-03 is the third layer view field image corresponding to FIG. 4;
FIG. 4-04 is the fourth layer view field image corresponding to FIG. 4;
FIG. 4-05 is the fifth layer view field image corresponding to FIG. 4;
FIG. 5 is an image of another part of the blue and white porcelain obtained by the method (an image obtained by combining the first to fifth layer view field images):
FIG. 5-01 is the first layer view field image corresponding to FIG. 5;
FIG. 5-02 is the second layer view field image corresponding to FIG. 5;
FIG. 5-03 is the third layer view field image corresponding to FIG. 5;
FIG. 5-04 is the fourth layer view field image corresponding to FIG. 5;
FIG. 5-05 is the fifth layer view field image corresponding to FIG. 5;
FIG. 6 is an image of a certain part of the woodware obtained by the method (an image obtained by combining the first to fifth layer view field images):
FIG. 6-01 is the first layer view field image corresponding to FIG. 6;
FIG. 6-02 is the second layer view field image corresponding to FIG. 6;
FIG. 6-03 is the third layer view field image corresponding to FIG. 6;
FIG. 6-04 is the fourth layer view field image corresponding to FIG. 6;
FIG. 6-05 is the fifth layer view field image corresponding to FIG. 6;
FIG. 7 is an image of a certain portion of the jade article obtained by the method (an image obtained by combining the first to fifth layer view field images):
FIG. 7-01 is the first layer view field image corresponding to FIG. 7;
FIG. 7-02 is the second layer view field image corresponding to FIG. 7;
FIG. 7-03 is the third layer view field image corresponding to FIG. 7;
FIG. 7-04 is the fourth layer view field image corresponding to FIG. 7;
FIG. 7-05 is the fifth layer view field image corresponding to FIG. 7;
FIG. 8 is an image of a certain portion of the painting obtained by the method (an image obtained by combining the first to fifth layer view field images):
FIG. 8-01 is the first layer view field image corresponding to FIG. 8;
FIG. 8-02 is the second layer view field image corresponding to FIG. 8;
FIG. 8-03 is the third layer view field image corresponding to FIG. 8;
FIG. 8-04 is the fourth layer view field image corresponding to FIG. 8;
FIG. 8-05 is the fifth layer view field image corresponding to FIG. 8;
FIG. 9 shows a plurality of Ming dynasty blue and white porcelain piece specimens.
Detailed Description
The invention is further illustrated with reference to the following figures and examples:
example (b): an artwork single-piece tracing identification method is shown in fig. 1 to 3, and comprises the following steps:
step 1-pick a total field of view of 2mm by 2mm at a certain position of the original artwork.
And Step2, carrying out image extraction on the total field of view after the total field of view is magnified by 700-1200 times by using an electron microscope to obtain a total field of view image, wherein the total field of view image consists of at least 2 layers of field of view images, the field of view images of each layer are arranged from top to bottom according to the depth of the position where the total field of view image is extracted, and each layer of field of view image consists of a plurality of sub-field of view images.
For ease of understanding and explanation, the present embodiment is described with the following data as an example:
the total view field image is composed of five view field images distributed from top to bottom, each layer of view field image is composed of 96 sub view field images, wherein the areas of the total view field image and each layer of view field image are both 2mm by 2mm, and the area of each sub view field image in each layer of view field image is 0.2mm by 0.3 mm.
The total view field image acquisition method comprises the following steps:
1. Determine on the original artwork (e.g. a Ming dynasty blue and white porcelain) the selected position of the total view field (e.g. a pattern at a certain position on the bottle body), i.e. determine the total view field, and determine the focus through the electron microscope.
2. Extract the first to fifth layer view field images using a tomography technique, specifically as follows:
(1) extracting a first layer view field image at the uppermost layer; the method comprises the following specific steps:
I. constructing the first layer view field image, and shooting the 1st sub view field image of 0.2 mm × 0.3 mm at the center of the first layer view field image;
II. taking the 1st sub view field image as the center, shooting the 2nd sub view field image of 0.2 mm × 0.3 mm at its edge; after shooting, the 1st and 2nd sub view field images are overlapped and combined, the 2nd overlapping the edge of the 1st with an overlap width of 0.1 mm, the two combining into the 1st sub view field image unit;
III. shooting the 3rd sub view field image at the edge of the 1st sub view field image unit, overlapping it with that edge with an overlap width of 0.1 mm, and combining the two into the 2nd sub view field image unit;
IV. repeating step III until the 95th sub view field image unit is constructed; the 95th sub view field image unit is formed by combining 96 sub view field images and its area is the same as that of the first layer view field image, so the first layer view field image is thereby extracted;
(2) moving the lens downwards by 0.1 mm and repeating steps I to IV, constructing the second layer view field below the first layer view field image and completing the extraction of the second layer view field image; the interval between the first and second layer view field images is 0.1 mm;
(3) repeating step (2), sequentially completing the extraction of the third, fourth and fifth layer view field images.
3. Arrange the first, second, third, fourth and fifth layer view field images from top to bottom according to the depth at which they were extracted (keeping the left-right and front-back positions of the five layers unchanged), then combine them into one total image: the total view field image. The total view field image is composed of 96 sub-images, each single sub-image being formed by superposing five sub view field images distributed in sequence from top to bottom (one located on each of the first to fifth layer view fields).
Step 3: save the total view field image as the standard image.
After the total view field image of the original artwork has been extracted, the original artwork can be returned to its holder, while the total view field image is kept permanently as the basis for subsequent tracing judgments. If someone later holds an artwork to be determined whose appearance matches the original artwork, but cannot judge whether it is the original, the judgment can be made by this method.
When judging:
and Step4, determining the total view field position on the artwork to be determined, wherein whether the artwork to be determined is the original artwork or not, at the moment, the total view field position on the artwork to be determined is correspondingly the same as the total view field position on the original artwork, and the selected total view field area is not lower than the total view field area on the original artwork.
Step 5: within the total view field range selected on the artwork to be determined, proceed according to the method of Step 2:
① first, extract a first-layer sub view field image from the first layer view field;
② scan downwards, extracting a second-layer sub view field image from the second layer view field, directly below the first-layer sub view field image;
③ repeat step ②, sequentially extracting the third-layer, fourth-layer and fifth-layer sub view field images below the second-layer sub view field image;
④ superpose and combine the extracted first-, second-, third-, fourth- and fifth-layer sub view field images according to their vertical positions at extraction to form a comparison image; the area of the comparison image is 0.2 mm × 0.3 mm, as shown in FIG. 3.
In the steps above, the areas of the first- to fifth-layer sub view field images are the same as the areas of the sub view field images in Step 2, i.e. 0.2 mm × 0.3 mm.
The comparison image is then sent to a cloud computing centre at the back end (prior art), where it is compared and analyzed against the total view field image by image recognition; during comparison, the comparison image is compared with the 96 sub-images in the total view field image acquired from the original artwork, using face recognition or image recognition technology (prior art). The comparison and analysis give:
a. If the comparison image shares the same features as one of the 96 sub-images in the total view field image extracted from the original artwork, the comparison is successful: the artwork to be determined is the original artwork, i.e. the two are the same object. If higher confidence is required, several further comparison images can be extracted from the total view field of the artwork to be determined and compared one by one with the 96 sub-images acquired from the original artwork; if the two artworks are the same object, every comparison image acquired from the artwork to be determined will compare successfully.
b. If the artwork to be determined is not the original, none of the comparison images within the total view field range selected on it will compare successfully with any of the 96 sub-images in the total view field image extracted from the original artwork, and the artwork to be determined can be judged a non-original.
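One way the a/b decision could be sketched: score the comparison stack against each of the 96 stored sub-image stacks with normalized cross-correlation, and count a score above a threshold as a successful comparison. The scoring function, the 0.95 threshold and all data below are invented for illustration; the patent specifies only "comparison by image recognition or face recognition technology", not a particular metric:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally-sized image stacks."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def is_original(comparison, stored_subimages, threshold=0.95):
    """Step (4) decision: the artwork is judged the original if the comparison
    image matches ANY stored sub-image (threshold is an illustrative choice)."""
    return any(ncc(comparison, s) >= threshold for s in stored_subimages)

rng = np.random.default_rng(0)
stored = [rng.random((5, 8, 12)) for _ in range(96)]        # 96 five-layer sub-image stacks
probe_true = stored[42] + rng.normal(0, 0.01, (5, 8, 12))   # re-imaged original, slight noise
probe_fake = rng.random((5, 8, 12))                         # a different artwork
print(is_original(probe_true, stored), is_original(probe_fake, stored))  # True False
```

The re-imaged original correlates almost perfectly with one stored sub-image, while an unrelated stack stays near zero against all 96, matching outcomes a and b above.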
The total view field images, layer view field images and sub view field images described herein are all magnified images taken by an electron microscope.
By this method, the features recorded in several layer view field images can be combined into a total feature by superposition analysis. A specific location or area having a certain set of attribute values, i.e. meeting specified conditions, can then be found, which suits the method to a specific use. For example, different object types can be superposed: the gradients of ceramic surfaces, the unevenness of oil-painting surfaces, the arrangement of cells and cell nuclei on woodware surfaces, and patterns such as the growth lines of jade. The total view field images and the first to fifth layer view field images extracted in this embodiment by the method of the present invention may be seen respectively in FIG. 4 and FIGS. 4-01 to 4-05, FIG. 5 and FIGS. 5-01 to 5-05, FIG. 6 and FIGS. 6-01 to 6-05, FIG. 7 and FIGS. 7-01 to 7-05, FIG. 8 and FIGS. 8-01 to 8-05, and FIG. 9.
Taking FIG. 4 and FIGS. 4-01 to 4-05 as an example, these show images extracted by the method of the invention from a Ming dynasty blue and white porcelain. Ming blue and white porcelain mainly used the Hetang blue produced in Leping county, Jiangxi (also called Pingdeng blue): the pigment is finely refined and low in impurities, and develops a soft, elegant colour, blue with a hint of grey, stable and calm, with an ink-wash painting quality; set against the white, warm body and glaze and the fine decoration, the effect is refined and unworldly. However, because the enamel is thick and the blue and white colour light, the decoration often appears veiled, hiding and reappearing as if behind cloud and mist; this tone is most prominent in the blue and white pattern of the foot. A finished blue and white piece has a pure white, fine blank and a compact, beautiful body, in some cases resembling a stripped blank. The glaze is white and unctuous, the enamel distinctive, like congealed fat and semi-transparent; seen against the light, the body faintly shows a light flesh-red. This glaze colour is an important characteristic of Ming blue and white porcelain. FIG. 4 is the total view field image obtained by combining the five layer view field images, and the five pictures in FIGS. 4-01 to 4-05 are the corresponding first to fifth layer view field images.
Referring to FIG. 4-01: the bright spots marked b1 to b8 are shapes unique to the particles inside this porcelain (the Ming blue and white piece); such particles also occur in other shapes (round, square, oval, columnar, etc.), and the positions of b1 to b8 are fixed. The specific positions and shapes in the figure result from the production process of the time, the production method, long-term oxidation and other specific factors; the shapes and positions at b1 to b8 therefore do not change over time but are permanently fixed, fired into the porcelain as natural features. These features are obtained at roughly a thousandfold magnification under an electron microscope. A counterfeiter can make a product identical to the artwork in appearance, but cannot reproduce these features at the microscopic scale, and even artworks produced in the same batch do not share microscopically identical features. Like human DNA, an object carries unique features at the internal microscopic scale; by recognizing and comparing features at the same position at that scale through image recognition or face recognition technology, it can be determined whether the artwork to be determined is the original artwork, enabling single-piece tracing.
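As a hedged illustration of comparing such fixed bright-spot features, the sketch below locates spot centroids by a simple threshold and flood fill, then checks whether two images share the same spot layout. The threshold, tolerance and toy images are invented; a real system would run a proper blob detector on electron-microscope images:

```python
import numpy as np

def bright_spot_centroids(img, thresh=0.5):
    """Return sorted centroids of 4-connected bright regions above `thresh`.

    A tiny stand-in for locating fixed particle features like b1..b8.
    """
    mask = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                current += 1
                stack = [(r, c)]
                while stack:  # flood-fill one connected bright region
                    y, x = stack.pop()
                    if (0 <= y < img.shape[0] and 0 <= x < img.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return sorted(tuple(np.mean(np.argwhere(labels == k), axis=0))
                  for k in range(1, current + 1))

def same_feature_layout(img_a, img_b, tol=1.0):
    """Two pieces share the 'fingerprint' if every spot sits at the same place."""
    ca, cb = bright_spot_centroids(img_a), bright_spot_centroids(img_b)
    return len(ca) == len(cb) and all(
        abs(pa[0] - pb[0]) <= tol and abs(pa[1] - pb[1]) <= tol
        for pa, pb in zip(ca, cb))

img = np.zeros((10, 10))
img[2, 2] = img[2, 3] = 1.0        # spot 1
img[7, 6] = 1.0                    # spot 2
shifted = np.roll(img, 3, axis=1)  # same spots, displaced positions
print(same_feature_layout(img, img), same_feature_layout(img, shifted))  # True False
```

Because the spot positions are treated as permanent, a displaced layout fails the check even though the spots themselves look identical, mirroring the argument that position and shape together form the fingerprint.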
Fig. 5, 5-01 to 5-05 are images of another part on the blue and white porcelain obtained by the method.
Fig. 6, fig. 6-01 to fig. 6-05 are images of a certain part on the wooden ware obtained by the method.
Fig. 7, 7-01 to 7-05 are images of another portion of the jade article obtained by the method.
Fig. 8, fig. 8-01 to fig. 8-05 are images of a certain part on the painting obtained by the method.
The embodiments of the present invention are disclosed as the preferred embodiments, but not limited thereto, and those skilled in the art can easily understand the spirit of the present invention and make various extensions and changes without departing from the spirit of the present invention.

Claims (1)

1. An artwork single-piece tracing identification method, characterized in that it comprises the following steps:
(1) selecting a total view field at a certain position of the original artwork;
(2) magnifying the total view field with a microscope and extracting the total view field image;
(3) determining a total view field on the artwork to be determined, it being unknown whether that artwork is the original artwork; the total view field position on the original artwork corresponds exactly to the total view field position on the artwork to be determined;
(4) carrying out image scanning on a total field of view on the artwork to be determined, carrying out comparison analysis on a comparison image acquired by a microscope and a total field of view image extracted from the original artwork by using an image recognition technology during scanning, determining that the artwork to be determined is the original artwork if the comparison between the comparison image and the total field of view image extracted from the original artwork is successful, and otherwise, determining that the artwork to be determined is the non-original artwork;
the total view field image in step (2) is formed by superposing at least two view field images distributed from top to bottom, and is obtained as follows:
a. determining a focus with the microscope;
b. extracting a first-layer view field image;
c. scanning downwards and extracting a second-layer view field image, the second-layer view field image lying directly below the first-layer view field image;
d. repeating step c to extract, in sequence, view field images down to the (N+1)th layer, where N is a natural number greater than or equal to 2 and the (N+1)th-layer view field image lies directly below the Nth-layer view field image;
e. arranging the extracted first-layer to (N+1)th-layer view field images from top to bottom according to their depth at extraction, and combining them into the total view field image;
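Steps a–e amount to acquiring a depth stack: one image per focal layer, kept in acquisition order. A minimal sketch follows; `scan_layer` is a hypothetical callback standing in for the microscope's per-depth capture, and the total view field image is represented as a 3-D array (layer, row, column).

```python
import numpy as np

def acquire_total_field(scan_layer, num_layers: int) -> np.ndarray:
    """Steps a-e: after focusing, extract one view field image per depth
    layer (steps b-d) and stack them top layer first into a 3-D volume
    (step e)."""
    layers = [scan_layer(depth) for depth in range(num_layers)]
    return np.stack(layers, axis=0)
```

Keeping the layers as a stack (rather than flattening them) preserves the top-to-bottom order that step (4) later relies on.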
each layer's view field image is formed by combining a plurality of sub-view-field images; in steps b, c and d, each layer's view field image is acquired as follows:
I. constructing the first-layer view field image: extracting the 1st sub-view-field image at the centre of the first-layer view field image;
II. with the 1st sub-view-field image as the centre, extracting a 2nd sub-view-field image at its edge; the 2nd sub-view-field image overlaps the edge of the 1st sub-view-field image, and the two are combined into a 1st sub-view-field image unit;
III. extracting a 3rd sub-view-field image at the edge of the 1st sub-view-field image unit; the 3rd sub-view-field image overlaps the edge of the 1st sub-view-field image unit, and the two are combined into a 2nd sub-view-field image unit;
IV. repeating step III to construct the Mth sub-view-field image unit, until the area of the Mth sub-view-field image unit equals the area of the first-layer view field image, where M is a natural number greater than 2;
V. moving the lens downwards and constructing a second-layer view field image below the first-layer view field image;
VI. repeating steps I to V to complete the extraction of the second-layer view field image;
VII. repeating step VI to complete, in sequence, the extraction of the third-layer, fourth-layer, … (N+1)th-layer view field images;
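The growth rule of steps I–IV — each new sub-view-field image overlaps the edge of the unit built so far — is a mosaic-stitching pattern. The sketch below shows it in one dimension only (a single row of tiles growing rightwards), with a simple averaging blend in the overlap region; the real method grows outwards from the centre in two dimensions, and the tile width and overlap used here are hypothetical.

```python
import numpy as np

def stitch_row(tiles: list, overlap: int) -> np.ndarray:
    """Steps I-IV in miniature: start from the first sub-view-field image
    and append each new tile so that it overlaps the edge of the unit built
    so far, until the mosaic reaches the required width."""
    unit = tiles[0]  # the 1st sub-view-field image
    for tile in tiles[1:]:
        # blend the shared columns, then extend the unit with the new tile
        merged = (unit[:, -overlap:] + tile[:, :overlap]) / 2.0
        unit = np.concatenate(
            [unit[:, :-overlap], merged, tile[:, overlap:]], axis=1)
    return unit
```

With tiles of width w and overlap o, k tiles yield a strip of width w + (k − 1)(w − o), which is how the unit grows until its area matches the layer image.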
step (4) is specifically as follows:
① extracting, by the same method as step I, a first-layer sub-view-field image from the first layer of the view field;
② scanning downwards, and extracting from the second layer a second-layer sub-view-field image lying directly below the first-layer sub-view-field image;
③ repeating step ②, extracting in sequence the third-layer, fourth-layer, … (N+1)th-layer sub-view-field images below the second-layer sub-view-field image;
④ superposing and combining the extracted first-layer to (N+1)th-layer sub-view-field images according to their vertical positions at extraction, to form the comparison image;
⑤ comparing and analysing the comparison image and the total view field image by image recognition technology.
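Step (4) can be sketched as comparing a column of sub-view-field images (one per layer, all at the same lateral position) against the corresponding sub-volume of the stored total view field image, layer by layer. This is an illustrative sketch only: the similarity measure (per-layer normalized cross-correlation) and the acceptance threshold are assumptions, and the two stacks are assumed to be pre-aligned.

```python
import numpy as np

def verify(total_field: np.ndarray, comparison: np.ndarray,
           y: int, x: int, threshold: float = 0.9) -> bool:
    """Step (4): the comparison image is a stack of one sub-view-field image
    per layer, taken at position (y, x); compare it layer by layer against
    the corresponding sub-volume of the stored total view field image."""
    h, w = comparison.shape[1:]
    sub = total_field[:, y:y + h, x:x + w]
    for ref_layer, cand_layer in zip(sub, comparison):
        a = ref_layer - ref_layer.mean()
        b = cand_layer - cand_layer.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        score = (a * b).sum() / denom if denom else 0.0
        if score < threshold:
            return False  # any mismatched layer rejects the artwork
    return True
```

Requiring every layer to match mirrors the claim's logic: a counterfeit might imitate the surface layer, but not the full depth-ordered stack of internal features.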
CN201810894545.XA 2018-08-05 2018-08-05 Artwork single piece tracing identification method Active CN109299654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810894545.XA CN109299654B (en) 2018-08-05 2018-08-05 Artwork single piece tracing identification method

Publications (2)

Publication Number Publication Date
CN109299654A CN109299654A (en) 2019-02-01
CN109299654B true CN109299654B (en) 2021-09-28

Family

ID=65168066

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157242A (en) * 2016-05-26 2016-11-23 朱建宗 The method that calligraphy and painting transaction identifier docks with calligraphy and painting material object
CN106251340A (en) * 2016-07-24 2016-12-21 朱建宗 A kind of feature pattern data calculate the method for comparison micro image
CN107505340A (en) * 2017-07-27 2017-12-22 中国科学院高能物理研究所 A kind of ceramics authentication method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166839A (en) * 2014-07-18 2014-11-26 刘宝旭 Method and system for calligraphy and painting authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant