CN111291675A - Hyperspectral ancient painting detection and identification method based on deep learning
- Publication number
- CN111291675A (application number CN202010080017.8A)
- Authority
- CN
- China
- Prior art keywords
- hyperspectral
- data
- ancient painting
- painting
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A40/00—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention provides a hyperspectral ancient painting detection and identification method based on deep learning, which comprises the following steps: collecting hyperspectral data of ancient paintings and constructing a hyperspectral ancient painting data set; performing data expansion on the hyperspectral ancient painting data set; performing mixed-pixel decomposition with a de-pseudo-projection matching unmixing algorithm; constructing a deep-learning-based multi-feature extraction model and extracting the spectral and spatial information of the ancient painting hyperspectral data; constructing a multi-information multi-scale feature fusion detection and recognition model; and randomly selecting test samples from the hyperspectral ancient painting data set to form a new data set and verify the detection and recognition model. The method detects and identifies ancient paintings by exploiting the rich information contained in hyperspectral images and the speed, accuracy and efficiency of deep-learning-based neural network target detection; it is fast and efficient and overcomes the lack of spectral information in ordinary painting image processing.
Description
Technical Field
The invention relates to the technical field of hyperspectral image processing, in particular to a hyperspectral ancient painting detection and identification method based on deep learning.
Background
Because of their special role in cultural transmission, ancient paintings are essential to the appreciation and study of painting art. Chinese culture is profound, and its paintings cover a wide range of content, carry a large amount of information, and exist in enormous numbers; paintings from the same period span many genres, and those genres evolve across different dynasties. Current appreciation and examination of painting images still relies largely on manual annotation for analysis and processing, so accurately recognizing the age of a painting remains challenging. In recent years the application of hyperspectral technology to cultural relics and antiquities has grown rapidly. Hyperspectral images contain rich spectral and spatial feature information, and as a data cube carrying massive amounts of information, hyperspectral data offer great value for research on the detection of ancient paintings. Paintings from different ages differ in the pigments used and in drawing style because of the eras in which their authors lived. Detecting and identifying hyperspectral ancient paintings, and extracting feature information from the art images for detection and recognition, can both satisfy the demand for art and cultural research and assist in the restoration of ancient painting relics.
Detecting figures and other content in paintings requires joint support from hyperspectral image recognition, art theory, computer vision, feature extraction, pattern recognition, artificial intelligence and related fields; this interdisciplinarity brings technical innovation, but it also makes research on dating ancient paintings very challenging. Painting practice often differs in pigment usage from one era to another, and creators were influenced by the culture of their time and by the particular expressive qualities of their line work, so the content, style and expressive feeling of paintings also differ across ages. The cultural relics of each dynasty, especially paintings, carry the political symbolism of that dynasty, yet the changes in style and content between neighboring dynasties are subtle: for example, the murals of the middle Tang and the late Tang are similar in both style and content, so the era of the two dynasties' paintings is difficult to distinguish by eye alone. At present there is little research on hyperspectral detection and identification of ancient paintings, and as part of the human pursuit of the spiritual world, painting art deserves deep, multi-angle study.
Disclosure of Invention
The invention aims to provide a hyperspectral ancient painting detection and identification method based on deep learning that solves the technical problems of detecting and identifying the age, authenticity, content expression and other attributes of ancient paintings.
To solve the above technical problem, an embodiment of the present invention provides the following solutions:
a hyperspectral ancient painting detection and identification method based on deep learning comprises the following steps:
s1, collecting hyperspectral data of the ancient painting, and constructing a hyperspectral ancient painting data set;
s2, performing data expansion on the hyperspectral ancient painting data set;
s3, performing mixed pixel decomposition by using a pseudo-projection-removing matching unmixing algorithm;
s4, constructing a multi-feature extraction model based on deep learning, and extracting hyperspectral spectral information and spatial information of the ancient painting;
s5, constructing a multi-information multi-scale feature fusion detection recognition model;
s6, randomly selecting a test sample in the hyperspectral ancient painting data set to form a new data set, and verifying the detection and identification model.
Preferably, the step S1 includes:
constructing a hyperspectral ancient painting data set from existing public hyperspectral ancient painting data and from hyperspectral data of ancient paintings acquired with hyperspectral imaging equipment, wherein the data set comprises hyperspectral data of figure paintings, landscape paintings, and animal and flower paintings from different ages;
marking sample data in the hyperspectral ancient painting dataset, and dividing the sample data into training samples and test samples;
and simultaneously establishing a target end member spectrum library.
Preferably, the step S2 includes:
the acquired hyperspectral painting data are expanded and augmented by sampling: new samples are generated by randomly cropping the original hyperspectral painting data while retaining 70%-85% of the original area.
Preferably, the step S3 includes:
respectively extracting hyperspectral data and a target end member of a spectrum library;
performing minimum noise separation transformation on the hyperspectral data and the target end member;
performing matched filtering on the hyperspectral data and the target end member to obtain an abundance image of a possible target end member;
and establishing a high-dimensional convex geometric model of the hyperspectral data, eliminating false-positive results, and finally obtaining a target distribution map.
Preferably, in step S4, the step of extracting the spectral information of the ancient painting hyperspectral image includes:
the spectral information is mapped to the spatial dimension of the image by a spectral-angle transformation: each one-dimensional spectral vector is converted into a two-dimensional grayscale image in which locations with large spectral differences receive high gray values and locations with small spectral differences receive low gray values, thereby realizing feature extraction of the spectral information.
Preferably, in step S4, the step of extracting the spatial information of the ancient painting hyperspectral image includes:
performing principal component analysis on the hyperspectral image and extracting the spatial information of the hyperspectral data.
Preferably, the step S5 includes:
importing the spectral information and spatial information of the ancient painting hyperspectral data as input into the multi-information multi-scale feature fusion detection and recognition model, wherein the model is realized as follows:
using a deep residual network as the backbone feature extraction network and adding several convolutional layers after it, the convolutional layers gradually reducing the feature-map size and fusing feature maps of several scales together;
inputting the fused features into a fully connected layer and outputting a vector probability matrix, determining during training whether the ground-truth labels match the predicted labels, filtering out the best predictions by non-maximum suppression, and finally realizing multi-scale detection and recognition.
Preferably, in step S3, the matched filtering is implemented by performing a contrast balance between the image variance and the target background and projecting the average-corrected target spectrum onto the generalized inverse matrix of the covariance data to obtain the matching projection vector PV, where Ts_mnf denotes the target spectrum converted into MNF space and Ds_mnf denotes the pixel-mean spectrum of the hyperspectral data converted into MNF space; a projection score in the range 0 to 1 then assigns a PVI value from the zero spectrum to the target spectrum:
PVI = PV * Dmnf
where Dmnf is the MNF-transformed data set. The projection is realized by using the covariance data to find contrast vectors orthogonal to a finite subspace; the projection vector PV is obtained by matched projection, the image variance between background-target separation and output is balanced, and the known target information is located even when the background is unknown and mixed pixels are present.
Preferably, in the step S3, the elimination of false positive result is realized by:
common false-positive results in the projection are identified and rejected directly with a high-dimensional convex geometric model of the mixed spectra; by establishing this high-dimensional convex geometric model of the hyperspectral image, part of the false detections are eliminated and a target distribution map is finally obtained.
Preferably, the implementation process of converting the spectrum angle into the gray scale image is as follows:
the spectral angular distances between a sample pixel and its 8 neighboring pixels are calculated and used as coordinate values, so that the spectral dimension is mapped to a new spatial dimension; the Euclidean distance between the sample point and the origin of this 8-dimensional space is then calculated, the resulting value is converted into a gray value and assigned to the current pixel, and the same operation is performed on every pixel in the hyperspectral image to finally obtain a grayscale image.
The scheme of the invention at least comprises the following beneficial effects:
According to the scheme, ancient paintings are detected and identified by exploiting the rich information contained in hyperspectral images and the speed, accuracy and efficiency of deep-learning-based neural network target detection. The scheme overcomes the heavy workload and human-induced errors of traditional painting feature extraction and detection, is fast and efficient, and remedies the lack of spectral information in ordinary painting image processing.
Drawings
FIG. 1 is a flowchart of a hyperspectral ancient painting detection and identification method based on deep learning according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of a hyperspectral ancient painting detection and identification process in an embodiment of the invention;
FIG. 3 is an overall schematic diagram of a hyperspectral ancient painting detection and identification process in an embodiment of the invention;
FIG. 4 is a flow chart of a de-pseudo projection matching unmixing algorithm in an embodiment of the present invention;
FIG. 5 is a schematic diagram of extraction of spectral information features of hyperspectral data in an embodiment of the invention;
FIG. 6 is a schematic diagram of extraction of spatial information features of hyperspectral data in an embodiment of the invention;
FIG. 7 is a block diagram of a multi-information multi-scale feature fusion detection recognition model according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a hyperspectral ancient painting detection and identification method based on deep learning, and as shown in figure 1, the method comprises the following steps:
s1, collecting hyperspectral data of the ancient painting, and constructing a hyperspectral ancient painting data set;
s2, performing data expansion on the hyperspectral ancient painting data set;
s3, performing mixed pixel decomposition by using a pseudo-projection-removing matching unmixing algorithm;
s4, constructing a multi-feature extraction model based on deep learning, and extracting hyperspectral spectral information and spatial information of the ancient painting;
s5, constructing a multi-information multi-scale feature fusion detection recognition model;
s6, randomly selecting a test sample in the hyperspectral ancient painting data set to form a new data set, and verifying the detection and identification model.
The method of the invention detects and identifies ancient paintings by exploiting the rich information contained in hyperspectral images and the speed, accuracy and efficiency of deep-learning-based neural network target detection; it overcomes the heavy workload and human-induced errors of traditional painting feature extraction and detection, is fast and efficient, and remedies the lack of spectral information in ordinary painting image processing.
As a specific embodiment of the method of the present invention, as shown in fig. 2 and 3, the hyperspectral ancient painting detection and identification process is as follows: acquire hyperspectral data, construct a hyperspectral ancient painting data set and expand the data; perform mixed-pixel decomposition with the de-pseudo-projection matching unmixing algorithm, which resolves the mixed-pixel problem caused by pigment mixing, reduces the data volume and removes redundant information; construct a multi-feature extraction module and extract the spectral and spatial information of the ancient painting pigments; and construct the multi-information multi-scale feature fusion detection and recognition model, which can extract features from several kinds of hyperspectral information and effectively identify the age, authenticity, content and other attributes of the ancient painting.
Further, step S1 includes:
constructing a hyperspectral ancient painting data set from existing public hyperspectral ancient painting data and from hyperspectral data of ancient paintings in the 400-2500 nm band acquired with hyperspectral imaging equipment, wherein the data set comprises hyperspectral data of figure paintings, landscape paintings, and animal and flower paintings from different ages;
marking sample data in the hyperspectral ancient painting dataset, and dividing the sample data into training samples and test samples;
and simultaneously establishing a target end member spectrum library.
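A minimal sketch of organizing the labeled samples and dividing them into training and test samples is given below; the 80/20 split fraction, the fixed seed and the list-based storage are assumptions for illustration, not values given by the invention.

```python
import numpy as np

def split_samples(sample_ids, labels, test_fraction=0.2, seed=0):
    """Divide labeled hyperspectral painting samples into training and test sets.

    sample_ids: list of identifiers of labeled samples; labels: matching list of
    age/content labels. Split fraction and seed are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(sample_ids))
    n_test = int(round(test_fraction * len(sample_ids)))
    test_idx, train_idx = order[:n_test], order[n_test:]
    train = [(sample_ids[i], labels[i]) for i in train_idx]
    test = [(sample_ids[i], labels[i]) for i in test_idx]
    return train, test
```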
Further, if the amount of acquired hyperspectral ancient painting data is small, the self-built data set can be expanded, and step S2 includes:
the acquired hyperspectral painting data are expanded and augmented by sampling: new samples are generated by randomly cropping the original hyperspectral painting data while retaining 70%-85% of the original area.
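A minimal sketch of this expansion step follows, assuming each painting is held as a NumPy array of shape (height, width, bands); the number of crops per painting and the random generator are illustrative choices, not values specified by the invention.

```python
import numpy as np

def random_crop_expand(cube, keep_min=0.70, keep_max=0.85, n_crops=4, rng=None):
    """Expand a hyperspectral painting cube (H, W, bands) by random cropping.

    Each crop retains between keep_min and keep_max of the original area,
    mirroring the 70%-85% retention described above.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = cube.shape
    crops = []
    for _ in range(n_crops):
        keep = rng.uniform(keep_min, keep_max)   # fraction of area to keep
        scale = np.sqrt(keep)                    # side-length scaling factor
        ch, cw = int(round(h * scale)), int(round(w * scale))
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        crops.append(cube[top:top + ch, left:left + cw, :])
    return crops
```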
A large body of literature shows that ancient pigments are of relatively few types and fall into three broad classes: mineral pigments, animal pigments and plant pigments. Therefore, the most widely recognized international standard spectral libraries, such as the United States Geological Survey (USGS) spectral library, the Jet Propulsion Laboratory (JPL) spectral library and the Italian IFCA spectral library, are selected, the different target end-member spectra are obtained from them, and a target end-member spectral library is established.
Further, step S3 includes:
respectively extracting hyperspectral data and a target end member of a spectrum library;
performing minimum noise separation transformation on the hyperspectral data and the target end member;
performing matched filtering on the hyperspectral data and the target end member to obtain an abundance image of a possible target end member;
and establishing a high-dimensional convex geometric model of the hyperspectral data, eliminating false-positive results, and finally obtaining a target distribution map.
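The sketch below illustrates one common way to realize the minimum noise separation (MNF) transformation named in the step list above, assuming the data are a NumPy cube of shape (height, width, bands); estimating the noise covariance from neighboring-pixel differences is an assumption, since the invention does not fix a particular noise model.

```python
import numpy as np

def mnf_transform(cube, n_components=10):
    """Minimal MNF (minimum noise separation) sketch for a (H, W, B) cube."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)

    # Noise covariance estimated from horizontal shift differences (assumption).
    diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b).astype(np.float64)
    cov_noise = np.cov(diff, rowvar=False) / 2.0
    cov_signal = np.cov(x, rowvar=False)

    # Generalized eigenproblem: cov_signal v = lambda * cov_noise v.
    evals, evecs = np.linalg.eig(np.linalg.solve(cov_noise, cov_signal))
    order = np.argsort(evals.real)[::-1]
    basis = evecs[:, order[:n_components]].real

    # Project pixels onto the leading MNF components.
    return (x @ basis).reshape(h, w, n_components), basis
```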
The matched filtering is implemented by performing a contrast balance between the image variance and the target background and projecting the average-corrected target spectrum onto the generalized inverse matrix of the covariance data to obtain the matching projection vector PV, where Ts_mnf denotes the target spectrum converted into MNF space and Ds_mnf denotes the pixel-mean spectrum of the hyperspectral data converted into MNF space; a projection score in the range 0 to 1 then assigns a PVI value from the zero spectrum to the target spectrum:
PVI = PV * Dmnf
where Dmnf is the MNF-transformed data set. The projection is realized by using the covariance data to find contrast vectors orthogonal to a finite subspace; the projection vector PV is obtained by matched projection, the image variance between background-target separation and output is balanced, and the known target information is located even when the background is unknown and mixed pixels are present.
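Because the patent's exact projection expression is reproduced here only in words, the sketch below uses the classical matched-filter projection in MNF space as a stand-in for PV and derives a per-pixel PVI score from it; clipping the scores to the 0-1 range mirrors the score range described above, and both choices are assumptions rather than the invention's definitive formula.

```python
import numpy as np

def matched_filter_scores(mnf_pixels, target_mnf):
    """Hedged sketch of the matched-projection score PVI = PV * Dmnf.

    mnf_pixels: (N, C) MNF-transformed pixels; target_mnf: (C,) target end
    member in MNF space. The projection vector below is the classical matched
    filter, used here as an assumption.
    """
    mean = mnf_pixels.mean(axis=0)
    cov = np.cov(mnf_pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)            # generalized inverse of the covariance
    t = target_mnf - mean
    pv = cov_inv @ t / (t @ cov_inv @ t)     # matching projection vector PV
    scores = (mnf_pixels - mean) @ pv        # per-pixel projection score
    return np.clip(scores, 0.0, 1.0)         # ~0 for background, ~1 for target
```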
Because the pigments used for the target objects and for the environmental background in typical ancient painting content differ, the painted target content and the background can be separated: in the resulting gray image the target appears brighter and the background darker. In addition, the problem of mis-separation caused by highly similar pigments can be mitigated.
The false-positive results are eliminated as follows: common false positives in the projection are identified and rejected directly with a high-dimensional convex geometric model of the mixed spectra; by establishing this high-dimensional convex geometric model of the hyperspectral image, part of the false detections are eliminated and the target distribution map is finally obtained.
FIG. 4 is a flow chart of the de-pseudo-projection matching unmixing algorithm in an embodiment of the present invention. The hyperspectral data and the target end members are first subjected to a minimum noise separation (MNF) transformation, and matched filtering is then performed on the transformed data and end members with a contrast balance between the image variance and the target background. Using the covariance data to find contrast vectors orthogonal to a finite subspace, the average-corrected target spectrum is projected onto the generalized inverse matrix of the covariance data, and a PVI score from the zero spectrum to the target spectrum is assigned according to the projection score range 0 to 1. The projection optimally balances the image variance between background-target separation and output, locates the known target information even when the background is unknown and mixed pixels are present, and yields an abundance image of the possible target end members, realizing abundance estimation and background suppression.
To eliminate the large number of false-positive values that may exist in the projection result, a high-dimensional convex geometric model of the mixed spectra is used to directly identify and reject the common false positives in the projection; by establishing this high-dimensional convex geometric model of the hyperspectral image, part of the false detections are eliminated and the target distribution map is finally obtained.
Further, in step S4, the step of extracting the spectral information of the ancient painting hyperspectral image includes:
the spectral information is mapped to the spatial dimension of the image by a spectral-angle transformation: each one-dimensional spectral vector is converted into a two-dimensional grayscale image in which locations with large spectral differences receive high gray values and locations with small spectral differences receive low gray values, thereby realizing feature extraction of the spectral information.
The implementation process of converting the spectrum angle into the gray level image comprises the following steps:
the spectral angular distances between a sample pixel and its 8 neighboring pixels are calculated and used as coordinate values, so that the spectral dimension is mapped to a new spatial dimension; the Euclidean distance between the sample point and the origin of this 8-dimensional space is then calculated, the resulting value is converted into a gray value and assigned to the current pixel, and the same operation is performed on every pixel in the hyperspectral image to finally obtain a grayscale image.
FIG. 5 is a schematic diagram of the extraction of spectral information features from hyperspectral data in an embodiment of the invention. Let the current pixel be x_{i,j}; its 8 neighboring pixels are x_{i-1,j-1}, x_{i-1,j}, x_{i-1,j+1}, x_{i,j-1}, x_{i,j+1}, x_{i+1,j-1}, x_{i+1,j} and x_{i+1,j+1}. The spectral angle between the pixel and each of its 8 neighbors is calculated, and the 8 spectral angles are mapped as coordinate values onto the axes of an 8-dimensional space, so that the 8 coordinates of the sample point are the spectral angles between the current pixel and its 8 surrounding neighbors. The Euclidean distance from the sample point to the origin of this 8-dimensional space represents the overall similarity between the pixel x_{i,j} and its neighbors. If the spectral difference between a pixel and its neighbors is large, the spectral angles and hence the 8 coordinate values are large, and the sample point lies far from the origin; conversely, if the spectral difference between the current pixel and its neighbors is small, the 8 coordinate values are small and the sample point lies close to the origin. The distance value is converted into a gray value and assigned to the current pixel, and the same operation is performed on every pixel in the hyperspectral image to finally obtain a grayscale image.
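A sketch of this spectral-angle-to-grayscale conversion is given below for a NumPy cube of shape (height, width, bands); clamping neighbor indices at the image border and scaling the distances to 8-bit gray values are implementation assumptions not fixed by the description.

```python
import numpy as np

def spectral_angle_grayscale(cube):
    """Convert a hyperspectral cube into a grayscale image via spectral angles.

    For every pixel x_{i,j}, the spectral angles to its 8 neighbors are taken
    as coordinates of a point in an 8-dimensional space; the Euclidean distance
    of that point from the origin becomes the pixel's gray value.
    """
    h, w, _ = cube.shape
    gray = np.zeros((h, w))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    def angle(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        return np.arccos(np.clip(a @ b / denom, -1.0, 1.0))

    for i in range(h):
        for j in range(w):
            # Neighbor indices are clamped at the border (assumption).
            angles = [angle(cube[i, j], cube[min(max(i + di, 0), h - 1),
                                              min(max(j + dj, 0), w - 1)])
                      for di, dj in offsets]
            gray[i, j] = np.linalg.norm(angles)   # distance to the 8-D origin

    # Rescale to 8-bit gray values (normalization choice is an assumption).
    return np.uint8(255 * gray / (gray.max() + 1e-12))
```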
Further, in step S4, the step of extracting the spatial information of the ancient painting hyperspectral image includes:
principal component analysis is performed on the hyperspectral image to extract the spatial information of the hyperspectral data.
FIG. 6 is a schematic diagram of extraction of spatial information features of hyperspectral data in an embodiment of the invention.
First, principal component analysis (PCA) is performed on the hyperspectral painting image to extract the spatial information of the target, such as the form, color and texture of the painted content; features are then extracted from the spatial differences between the content of different paintings.
To obtain the spatial information of the pixels in the hyperspectral image data, PCA is first used to reduce the dimensionality, which greatly alleviates the low detection efficiency caused by the high dimensionality of hyperspectral data, retains the useful spatial information and discards the useless information. Target pixels are extracted from each band of the original image, and the M × N (for example, 12 × 12) pixel blocks around each target pixel are then fed into the target detection model as training and test samples.
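The spatial branch described above can be sketched as follows, using scikit-learn's PCA for the dimensionality reduction and the 12 × 12 block size given as an example; the number of retained principal components and the border clamping are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_patches(cube, labels_xy, n_components=30, patch=12):
    """PCA dimensionality reduction followed by patch extraction around targets.

    cube: (H, W, B) hyperspectral array; labels_xy: iterable of (row, col)
    target positions. Patch size 12 follows the example in the text; other
    values here are assumptions.
    """
    h, w, b = cube.shape
    reduced = PCA(n_components=n_components).fit_transform(
        cube.reshape(-1, b)).reshape(h, w, n_components)

    half = patch // 2
    blocks = []
    for r, c in labels_xy:
        # Clamp so the patch stays inside the image (assumption).
        r0, c0 = np.clip(r - half, 0, h - patch), np.clip(c - half, 0, w - patch)
        blocks.append(reduced[r0:r0 + patch, c0:c0 + patch, :])
    return np.stack(blocks)   # (num_samples, patch, patch, n_components)
```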
Further, step S5 includes:
the spectral information and spatial information of the ancient painting hyperspectral data are imported as the input of the multi-information multi-scale feature fusion detection and recognition model, which is realized as follows:
a deep residual network (ResNet-101) is used as the backbone feature extraction network and several convolutional layers are added after it; these convolutional layers gradually reduce the feature-map size, and feature maps of several scales are fused together;
the fused features are input into a fully connected layer and a vector probability matrix is output; during training it is determined whether the ground-truth labels match the predicted labels, the best predictions are filtered out by non-maximum suppression (a sketch follows below), and multi-scale detection and recognition is finally realized.
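The non-maximum suppression step mentioned above can be sketched as the standard greedy box-suppression procedure below; the IoU threshold is an assumed value, since the description does not specify one.

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep only the best, non-overlapping predictions.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidence values.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the best box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-12)
        order = order[1:][iou <= iou_thresh]   # drop boxes overlapping too much
    return keep
```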
FIG. 7 is a block diagram of the multi-information multi-scale feature fusion detection and recognition model according to an embodiment of the present invention. The model is realized as follows: ResNet-101 is used as the backbone feature extraction network, and its 3rd block is fed into the feature fusion layer as its first part. After the input image passes through the ResNet-101 network the feature map becomes 19 × 19 × 512; after a 3 × 3 convolutional layer it becomes 19 × 19 × 1024, forming the second part of the feature fusion layer; after a pooling layer it becomes 10 × 10 × 512, the third part; after another 3 × 3 convolutional layer it becomes 5 × 5 × 256, the fourth part; and finally, after a 1 × 1 convolutional layer and a 3 × 3 convolutional layer, it becomes 1 × 1 × 256, the fifth part of the feature fusion layer. These convolutional layers gradually reduce the feature-map size, and the feature maps of the several scales are fused together by the following method:
A_i = [αX_i, (1-α)Y_i]
where A_i denotes the i-th feature value of the new fused feature, X_i denotes the i-th spectral feature, Y_i denotes the i-th spatial feature, and α is a scale factor in the range 0-1. A weighted feature-splicing scheme is applied during feature fusion; the scale factor is an empirical value that must be tuned in the experiments. The fused features are input into a fully connected layer and a vector probability matrix is output; during training it is determined whether the ground-truth labels match the predicted labels, the best predictions are filtered out by non-maximum suppression, and the goal of multi-scale detection and recognition is finally achieved.
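The weighted splicing A_i = [αX_i, (1-α)Y_i] can be sketched directly as a weighted concatenation of the two branch feature vectors; the default α used below is only a placeholder for the empirical value mentioned above.

```python
import numpy as np

def fuse_features(spectral_feat, spatial_feat, alpha=0.6):
    """Weighted concatenation A = [alpha * X, (1 - alpha) * Y].

    spectral_feat (X) and spatial_feat (Y) are 1-D feature vectors from the two
    branches; alpha in [0, 1] is the empirical scale factor (0.6 is a placeholder).
    """
    x = np.asarray(spectral_feat, dtype=np.float64)
    y = np.asarray(spatial_feat, dtype=np.float64)
    return np.concatenate([alpha * x, (1.0 - alpha) * y])
```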
In conclusion, the method provided by the invention applies convolutional-neural-network-based deep learning to the research and analysis of hyperspectral images of ancient paintings, can effectively identify and detect the age, authenticity and painted content of ancient paintings, and thereby relieves the time-consuming and labor-intensive nature of manual painting appraisal.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. A hyperspectral ancient painting detection and identification method based on deep learning is characterized by comprising the following steps:
s1, collecting hyperspectral data of the ancient painting, and constructing a hyperspectral ancient painting data set;
s2, performing data expansion on the hyperspectral ancient painting data set;
s3, performing mixed pixel decomposition by using a pseudo-projection-removing matching unmixing algorithm;
s4, constructing a multi-feature extraction model based on deep learning, and extracting hyperspectral spectral information and spatial information of the ancient painting;
s5, constructing a multi-information multi-scale feature fusion detection recognition model;
s6, randomly selecting a test sample in the hyperspectral ancient painting data set to form a new data set, and verifying the detection and identification model.
2. The hyperspectral ancient painting detection and identification method according to claim 1, wherein the step S1 comprises:
constructing a hyperspectral ancient painting data set from existing public hyperspectral ancient painting data and from hyperspectral data of ancient paintings acquired with hyperspectral imaging equipment, wherein the data set comprises hyperspectral data of figure paintings, landscape paintings, and animal and flower paintings from different ages;
marking sample data in the hyperspectral ancient painting dataset, and dividing the sample data into training samples and test samples;
and simultaneously establishing a target end member spectrum library.
3. The hyperspectral ancient painting detection and identification method according to claim 2, wherein the step S2 comprises:
the acquired hyperspectral painting data are expanded and augmented by sampling: new samples are generated by randomly cropping the original hyperspectral painting data while retaining 70%-85% of the original area.
4. The hyperspectral ancient painting detection and identification method according to claim 3, wherein the step S3 comprises:
respectively extracting hyperspectral data and a target end member of a spectrum library;
performing minimum noise separation transformation on the hyperspectral data and the target end member;
performing matched filtering on the hyperspectral data and the target end member to obtain an abundance image of a possible target end member;
and establishing a high-dimensional convex geometric model of the hyperspectral data, eliminating false-positive results, and finally obtaining a target distribution map.
5. The hyperspectral ancient painting detection and identification method according to claim 4, wherein in the step S4, the step of extracting the spectral information of the ancient painting hyperspectral image comprises:
the spectral information is mapped to the spatial dimension of the image by a spectral-angle transformation: each one-dimensional spectral vector is converted into a two-dimensional grayscale image in which locations with large spectral differences receive high gray values and locations with small spectral differences receive low gray values, thereby realizing feature extraction of the spectral information.
6. The hyperspectral ancient painting detection and identification method according to claim 5, wherein in the step S4, the step of extracting the spatial information of the ancient painting hyperspectral image comprises:
performing principal component analysis on the hyperspectral image and extracting the spatial information of the hyperspectral data.
7. The hyperspectral ancient painting detection and identification method according to claim 6, wherein the step S5 comprises:
importing the spectral information and spatial information of the ancient painting hyperspectral data as input into the multi-information multi-scale feature fusion detection and recognition model, wherein the model is realized as follows:
using a deep residual network as the backbone feature extraction network and adding several convolutional layers after it, the convolutional layers gradually reducing the feature-map size and fusing feature maps of several scales together;
inputting the fused features into a fully connected layer and outputting a vector probability matrix, determining during training whether the ground-truth labels match the predicted labels, filtering out the best predictions by non-maximum suppression, and finally realizing multi-scale detection and recognition.
8. The hyperspectral ancient painting detection and identification method according to claim 4, wherein in the step S3, the matched filtering is implemented by performing a contrast balance between the image variance and the target background and projecting the average-corrected target spectrum onto the generalized inverse matrix of the covariance data to obtain the matching projection vector PV, where Ts_mnf denotes the target spectrum converted into MNF space and Ds_mnf denotes the pixel-mean spectrum of the hyperspectral data converted into MNF space; a projection score in the range 0 to 1 then assigns a PVI value from the zero spectrum to the target spectrum:
PVI = PV * Dmnf
where Dmnf is the MNF-transformed data set; the projection is realized by using the covariance data to find contrast vectors orthogonal to a finite subspace, obtaining the projection vector PV by matched projection, balancing the image variance between background-target separation and output, and locating the known target information even when the background is unknown and mixed pixels are present.
9. The hyperspectral ancient painting detection and identification method according to claim 4, wherein in the step S3, the implementation process of eliminating false positive results is as follows:
common false-positive results in the projection are identified and rejected directly with a high-dimensional convex geometric model of the mixed spectra; by establishing this high-dimensional convex geometric model of the hyperspectral image, part of the false detections are eliminated and a target distribution map is finally obtained.
10. The hyperspectral ancient painting detection and identification method according to claim 5, wherein the implementation process of converting the spectrum angle into a gray image is as follows:
the spectral angular distances between a sample pixel and its 8 neighboring pixels are calculated and used as coordinate values, so that the spectral dimension is mapped to a new spatial dimension; the Euclidean distance between the sample point and the origin of this 8-dimensional space is then calculated, the resulting value is converted into a gray value and assigned to the current pixel, and the same operation is performed on every pixel in the hyperspectral image to finally obtain a grayscale image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010080017.8A CN111291675B (en) | 2020-02-04 | 2020-02-04 | Deep learning-based hyperspectral ancient painting detection and identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010080017.8A CN111291675B (en) | 2020-02-04 | 2020-02-04 | Deep learning-based hyperspectral ancient painting detection and identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111291675A true CN111291675A (en) | 2020-06-16 |
CN111291675B CN111291675B (en) | 2024-01-26 |
Family
ID=71024408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010080017.8A Active CN111291675B (en) | 2020-02-04 | 2020-02-04 | Deep learning-based hyperspectral ancient painting detection and identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111291675B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060139479A1 (en) * | 2004-12-23 | 2006-06-29 | Dicarlo Jeffrey M | Image processing methods and systems for fine art reproduction |
CN102879099A (en) * | 2012-08-08 | 2013-01-16 | 北京建筑工程学院 | Wall painting information extraction method based on hyperspectral imaging |
CN107843593A (en) * | 2017-10-13 | 2018-03-27 | 上海工程技术大学 | A kind of textile material recognition methods and system based on high light spectrum image-forming technology |
CN108416357A (en) * | 2018-03-01 | 2018-08-17 | 北京建筑大学 | A kind of extracting method of colored drawing class historical relic implicit information |
Non-Patent Citations (4)
Title |
---|
BOARDMAN, JOSEPH W.: "Analysis of Imaging Spectrometer Data Using N-Dimensional Geometry and a Mixture-Tuned Matching Filtering Approach", IEEE * |
Hou Miaole et al.: "Review of research on hyperspectral imaging technology in the analysis of painted cultural relics", Spectroscopy and Spectral Analysis *
Xu Jun et al.: "A hyperspectral image segmentation method based on spectral angle space transformation", Infrared Technology *
Guo Xinlei; Zhang Lifu; Wu Taixia; Zhang Hongming; Luo Xudong: "Extraction of hidden information from ancient paintings using imaging spectroscopy", Journal of Image and Graphics *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329792A (en) * | 2020-10-30 | 2021-02-05 | 中国电子科技集团公司第五十四研究所 | Hyperspectral image target feature extraction method based on spectrum angle |
CN112396066A (en) * | 2020-11-27 | 2021-02-23 | 广东电网有限责任公司肇庆供电局 | Feature extraction method suitable for hyperspectral image |
CN112396066B (en) * | 2020-11-27 | 2024-04-30 | 广东电网有限责任公司肇庆供电局 | Feature extraction method suitable for hyperspectral image |
CN113327218A (en) * | 2021-06-10 | 2021-08-31 | 东华大学 | Hyperspectral and full-color image fusion method based on cascade network |
CN113327218B (en) * | 2021-06-10 | 2023-08-25 | 东华大学 | Hyperspectral and full-color image fusion method based on cascade network |
CN115587298A (en) * | 2021-07-05 | 2023-01-10 | 中国矿业大学(北京) | Historical Jingdezhen blue and white porcelain age discrimination method based on deep learning |
CN114445720A (en) * | 2021-12-06 | 2022-05-06 | 西安电子科技大学 | Hyperspectral anomaly detection method based on spatial-spectral depth synergy |
CN114445720B (en) * | 2021-12-06 | 2023-06-20 | 西安电子科技大学 | Hyperspectral anomaly detection method based on spatial spectrum depth synergy |
CN116612337A (en) * | 2023-07-19 | 2023-08-18 | 中国地质大学(武汉) | Object detection method, device and system based on hyperspectral image and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111291675B (en) | 2024-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111291675A (en) | Hyperspectral ancient painting detection and identification method based on deep learning | |
CN110378196B (en) | Road visual detection method combining laser point cloud data | |
CN108537742B (en) | Remote sensing image panchromatic sharpening method based on generation countermeasure network | |
CN109766858A (en) | Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering | |
CN109993072A (en) | The low resolution pedestrian weight identifying system and method generated based on super resolution image | |
CN110543872B (en) | Unmanned aerial vehicle image building roof extraction method based on full convolution neural network | |
CN111008664B (en) | Hyperspectral sea ice detection method based on space-spectrum combined characteristics | |
CN112906550B (en) | Static gesture recognition method based on watershed transformation | |
CN106650798B (en) | A kind of indoor scene recognition methods of combination deep learning and rarefaction representation | |
CN112307919A (en) | Improved YOLOv 3-based digital information area identification method in document image | |
CN113033385A (en) | Deep learning-based violation building remote sensing identification method and system | |
CN111652273A (en) | Deep learning-based RGB-D image classification method | |
CN111882000A (en) | Network structure and method applied to small sample fine-grained learning | |
CN106407978A (en) | Unconstrained in-video salient object detection method combined with objectness degree | |
Shu et al. | Detecting 3D points of interest using projective neural networks | |
CN117437691A (en) | Real-time multi-person abnormal behavior identification method and system based on lightweight network | |
CN115359304B (en) | Single image feature grouping-oriented causal invariance learning method and system | |
CN107273793A (en) | A kind of feature extracting method for recognition of face | |
CN110766655A (en) | Hyperspectral image significance analysis method based on abundance | |
CN109636838A (en) | A kind of combustion gas Analysis of Potential method and device based on remote sensing image variation detection | |
CN114862883A (en) | Target edge extraction method, image segmentation method and system | |
CN117911814A (en) | Zero sample image processing system and processing method for cross-modal semantic alignment | |
CN111046883B (en) | Intelligent assessment method and system based on ancient coin image | |
CN113705731A (en) | End-to-end image template matching method based on twin network | |
Pornpanomchai et al. | Buddhist amulet recognition system (BARS) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |