WO2023015610A1 - Artificial intelligence-based method and system for authenticating ancient and modern works of art


Info

Publication number
WO2023015610A1
Authority
WO
WIPO (PCT)
Prior art keywords
distribution
image information
sample
samples
classification
Prior art date
Application number
PCT/CN2021/114254
Other languages
English (en)
Chinese (zh)
Inventor
李應樵
马志雄
Original Assignee
万维数码智能有限公司
Priority date
Filing date
Publication date
Application filed by 万维数码智能有限公司
Publication of WO2023015610A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • The invention belongs to the field of identification of ancient and modern works of art, and in particular relates to a method and system for identifying ancient and modern works of art using artificial intelligence.
  • CN107341461A discloses a method and system for identifying the authenticity of artworks with intelligent identification and analysis technology. Artworks are classified into dead artworks and living artworks; a database is established that stores all information about the artworks, and the artist's pictures are intelligently analyzed and stored.
  • When the information of an artwork is entered, the system performs self-learning evolution and carries out target-source matching for the artwork to be identified: if a match exists, the authenticity is compared directly; if it does not exist, the style of the artwork deduced by self-learning is compared, and the final authenticity conclusion is obtained. This addresses the current situation in which the field of art identification cannot be systematized and standardized, by means of intelligent identification and analysis technology and a self-learning evolution system developed on the basis of a database of artwork image information.
  • CN109191145A discloses a method for establishing a database for judging the age of artworks and a method for judging the age of artworks.
  • The method for establishing a database for judging the age of artworks includes the following steps: (1) selecting at least two artwork specimens of the same age, same type, and same style; (2) extracting the total field-of-view image; (3) establishing the database. The method for identifying the age of artworks includes the following steps: I. using the database established above; II. extracting the image of the artwork to be determined; III. analyzing the extracted image of the artwork to be determined against the images saved in the database using image recognition technology; IV. judgment.
  • CN111339974A discloses a method for distinguishing modern ceramics from ancient ceramics: positive samples corresponding to ancient ceramics and negative samples corresponding to imitation antique porcelain are constructed; RGB images are converted to HSV color space to obtain HSV images, and feature descriptors of the HSV images are obtained; the feature descriptors are input into a support vector machine for training to obtain the training parameters of the support vector machine; the RGB images are input into a deep convolutional neural network architecture for training to obtain the network parameters of the convolutional neural network.
  • A deep learning model is determined according to the training parameters of the support vector machine and the network parameters of the convolutional neural network; the grayscale images of the positive samples and negative samples are input into the deep learning model for training to obtain the identification model; the image of the porcelain to be identified is obtained and input into the identification model.
  • According to the output of the identification model, it is determined whether the porcelain to be identified is modern ceramics or ancient ceramics, so as to improve the efficiency of ceramic identification.
  • The object of the present invention is to provide a method and system for appraising ancient and modern artworks using artificial intelligence.
  • One aspect of the present invention provides a method for identifying ancient and modern works of art, including: inputting authentic image information and inputting image information of the artwork to be identified; detecting, by a detector, in-distribution samples and out-of-distribution samples from the image information of the artwork to be identified and the authentic image information; classifying the in-distribution samples; performing fine-grained classification on the classified in-distribution samples and the class image information similar to the artwork to be identified; and outputting the classified in-distribution samples or the fine-grained classified samples, and obtaining the confidence of the image information of the artwork to be identified relative to the authentic image information as the identification conclusion.
  • The step of detecting in-distribution samples and out-of-distribution samples from the image information of the artwork to be identified and the authentic image information through the detector further includes: performing statistical analysis using the maximum softmax probability output by a pre-trained model; statistically finding the distributions of the maximum softmax probability for OOD samples and ID samples; increasing the gap between the two distributions; and selecting an appropriate threshold to judge whether a sample is an out-of-distribution sample or an in-distribution sample.
  • The step of detecting in-distribution samples and out-of-distribution samples from the image information of the artwork to be identified and the authentic image information through a detector further includes: using a model to learn an uncertainty attribute for the input samples, and judging the test data: if the model input is an in-distribution sample, the uncertainty is low; conversely, if the model input is an out-of-distribution sample, the uncertainty is high.
  • The step of detecting in-distribution samples and out-of-distribution samples from the image information of the artwork to be identified and the authentic image information through a detector further includes: using the reconstruction error of a variational autoencoder (VAE) or another measurement to determine whether a sample is in-distribution or out-of-distribution; the latent space of the encoder can learn the salient features (latent vector) of in-distribution data but not of out-of-distribution samples, so out-of-distribution samples produce higher reconstruction errors.
  • The step of detecting in-distribution samples and out-of-distribution samples from the image information of the artwork to be identified and the authentic image information through the detector further includes: using a classifier to classify the extracted features to determine whether a sample is out-of-distribution; some approaches modify the network structure into an (n+1)-way classifier, where n is the number of categories of the original classification task and the (n+1)-th class is the out-of-distribution class; other approaches directly extract features for classification without modifying the network structure.
  • The step of performing fine-grained classification on the classified in-distribution samples and the class image information similar to the artwork to be identified further includes: finding the characteristic region of the image data to be tested; inputting the characteristic region into a convolutional neural network; passing one part of the information of the characteristic region through the fully connected layer and the softmax logistic regression layer for classification; passing another part of the information of the characteristic region through the attention proposal sub-network (APN) to obtain a candidate region; repeating the above classification and APN steps so that the feature region selected by the APN is the most discriminative region; and introducing a loss function to obtain higher accuracy in identifying this class of image information.
  • The step of performing fine-grained classification on the classified in-distribution samples and the class image information similar to the artwork to be identified further includes: inputting the local images (312) and (313) selected from the information (311) into two convolutional neural networks (314, A) and (315, B); multiplying (318) the outputs of convolutional neural network streams (A) and (B) at each location of the image using the outer product and combining them to obtain a bilinear vector (316); and obtaining the prediction through the classification layer (317).
  • The classification layer (317) is a logistic regression or support vector machine classifier.
  • The step of performing fine-grained classification on the classified in-distribution samples and the class image information similar to the artwork to be identified further includes: generating multiple candidate boxes from the information on feature maps of different scales, the coordinates of each candidate box corresponding to pre-designed anchors; and scoring the "information content" of each candidate region, with information-rich regions scoring high.
  • The feature map is then passed through a feature extraction step, a fully connected layer (FC), and a softmax step; the probability that the input region belongs to the target label is judged; and the unnormalized probabilities (logits) extracted from each local region and from the whole image are concatenated into a long vector that outputs the unnormalized probabilities for the 200 classes.
  • The present invention also provides a system for identifying ancient and modern works of art, including: an input module for inputting authentic image information and the image information of the artwork to be identified; a detector that detects in-distribution samples and out-of-distribution samples from the image information of the artwork to be identified and the authentic image information; a sample classification module that classifies the in-distribution samples; a fine-grained classification module that performs fine-grained classification on the classified in-distribution samples and the image information similar to the artwork to be identified; and an output module that outputs the classified in-distribution samples or the fine-grained classified samples and obtains the confidence of the image information of the artwork to be identified relative to the authentic image information as the identification conclusion.
  • Fig. 1 is the flow chart of the ancient and modern works of art appraisal method of the present invention.
  • Figures 2(a)-(d) are flowcharts of the steps of detecting in-distribution and out-of-distribution (OOD) samples in the ancient and modern art identification method of the present invention.
  • Figure 2(a) is a flowchart of a normalized index based embodiment.
  • Fig. 2(b) is a flowchart of an embodiment of uncertainty.
  • Figure 2(c) is a flowchart of an embodiment of a probabilistic generative model.
  • Figure 2(d) is a flowchart of an embodiment of a classification model.
  • Fig. 3 (a) is a flow chart of the implementation of the attention convolutional neural network in the fine-grained classification step in the steps of the ancient and modern art identification method of the present invention.
  • Fig. 3(b) is a schematic diagram of the framework of a recurrent attention convolutional neural network (“RA-CNN”) for an implementation of the fine-grained classification step in the identification method of the present invention.
  • Fig. 3(c) is a schematic diagram of the bilinear vector network structure of another embodiment of the fine-grained classification step in the identification method of the present invention.
  • Fig. 3(d) is a flow chart of the implementation mode of bilinear vector network in the step of fine-grained classification in the steps of the ancient and modern art identification method of the present invention.
  • Fig. 3(e) is a flow chart of an embodiment in which the fine-grained classification step adopts the navigation-teaching-examination network (NTS-Net) classification in the steps of the ancient and modern artwork appraisal method of the present invention.
  • Fig. 4 is a structural diagram of the ancient and modern art identification system of the present invention.
  • Fig. 5 is a computer product diagram of the portable or fixed storage unit of the ancient and modern art identification system of the present invention.
  • Fig. 6(1) is an example of authentic image information involved in an embodiment of the ancient and modern artwork identification method of the present invention.
  • Fig. 6(2) is an example of authentic image information used to train the model involved in an embodiment of the ancient and modern artwork identification method of the present invention.
  • Fig. 6(3) is an example of the image information of the artwork to be authenticated involved in one embodiment of the ancient and modern artwork authentication method of the present invention.
  • Fig. 6 (4) is an example of the classification involved in an implementation of the ancient and modern artwork identification method of the present invention.
  • Fig. 1 shows the flow of the method of the present invention.
  • In step 101, the image information of the artwork to be identified and the authentic image information are input.
  • In step 102, in-distribution samples and out-of-distribution (OOD) samples are detected by a detector from the image information of the artwork to be identified and the authentic image information.
  • In step 103, the in-distribution samples are classified.
  • In step 104, fine-grained classification is performed on the classified in-distribution samples and the class image information similar to the artwork to be identified; in step 105, the classified in-distribution samples or the fine-grained classified samples are output to obtain the identification conclusion.
  • Figures 2(a)-(d) show flowcharts of four embodiments of the step of detecting in-distribution and out-of-distribution (OOD) samples: a softmax-based embodiment (a), an uncertainty embodiment (b), a probabilistic generative model embodiment (c), and a classification model embodiment (d).
  • The image data used for model training and testing are generally assumed to be independent and identically distributed (IID) samples.
  • However, the data received after the model is deployed and launched is often not fully controlled; that is, the model may receive out-of-distribution (OOD) samples, also known as outlier (abnormal) samples.
  • A deep model may treat an out-of-distribution (OOD) sample as some class of in-distribution (ID) sample and give it a high degree of confidence.
  • The confidence described here is a normalized value between 0 and 1. Related tasks all aim to find out-of-distribution samples, but their settings may differ. For example, out-of-distribution (OOD) detection is defined on top of the model's original task: it must not only effectively detect OOD samples but also ensure that the performance of the model on its original task is not affected.
  • For the image data of ancient and modern artworks, the step of detecting in-distribution and out-of-distribution samples can be based on softmax-based, uncertainty (Uncertainty), probabilistic generative model (Generative model), or classification model (Classifier) methods.
  • In step 201, statistical analysis is performed using the maximum softmax probability output by a pre-trained model; in step 202, the distributions of the maximum softmax probability for OOD samples and ID samples are found statistically; in step 203, the gap between the two distributions is increased; and in step 204, an appropriate threshold is selected to determine whether a sample is an out-of-distribution sample or an in-distribution sample.
  • This type of method is simple and effective, requiring neither modification of the classification model's structure nor training of a separate out-of-distribution sample classifier.
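The softmax-thresholding procedure of steps 201-204 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the example logits, the helper names, and the threshold value 0.9 are all assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_softmax_score(logits):
    # Step 201: the maximum softmax probability output by a pre-trained model.
    return softmax(logits).max(axis=-1)

def is_in_distribution(logits, threshold=0.9):
    # Step 204: a sample whose maximum softmax probability exceeds the chosen
    # threshold is judged in-distribution; otherwise out-of-distribution.
    return max_softmax_score(logits) >= threshold

# Illustrative logits: a confident (ID-like) and a flat (OOD-like) output.
id_logits = np.array([8.0, 0.5, 0.1])
ood_logits = np.array([1.1, 1.0, 0.9])
print(is_in_distribution(id_logits))   # confident prediction -> in-distribution
print(is_in_distribution(ood_logits))  # near-uniform prediction -> out-of-distribution
```

Steps 202-203 (comparing the two score distributions and widening their gap, e.g. by temperature scaling) determine how the threshold in step 204 is chosen.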
  • In the uncertainty-based embodiment, a model is used to learn an uncertainty attribute for the input samples.
  • The test data are then judged: if the model input is an in-distribution sample, the uncertainty is low; conversely, if the model input is an out-of-distribution sample, the uncertainty is high.
  • Such methods need to modify the network structure of the model so that it can learn the uncertainty property.
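One common way to realize this uncertainty idea (not specified in the patent, named here as an assumption) is Monte-Carlo dropout: run a stochastic model several times on the same input and treat the spread of the predictions as the uncertainty. The toy one-layer model, the dropout rate, and the example inputs below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w, drop_p=0.5):
    # A toy stochastic layer: dropout kept active at test time (MC dropout).
    mask = rng.random(w.shape) >= drop_p
    return x @ (w * mask) / (1.0 - drop_p)

def predictive_uncertainty(x, w, n_passes=100):
    # Run several stochastic forward passes; the standard deviation of the
    # outputs serves as the learned "uncertainty attribute" of the input.
    preds = np.stack([stochastic_forward(x, w) for _ in range(n_passes)])
    return preds.std(axis=0).mean()

w = np.ones((4, 2))
in_dist = np.array([0.1, 0.1, 0.1, 0.1])     # typical input -> low spread
out_dist = np.array([5.0, -4.0, 6.0, -5.0])  # unusual input -> high spread
assert predictive_uncertainty(in_dist, w) < predictive_uncertainty(out_dist, w)
```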
  • In step 221, the reconstruction error of a variational autoencoder (VAE) or another measurement is used to judge whether a sample is in-distribution or out-of-distribution.
  • The latent space of the encoder can learn the salient features (latent vector) of in-distribution data but not of out-of-distribution samples, so out-of-distribution samples generate higher reconstruction errors.
  • This method focuses only on out-of-distribution detection performance, not on the original task over the in-distribution data.
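The reconstruction-error criterion can be sketched with a linear autoencoder fitted via SVD, standing in for a trained variational autoencoder; the synthetic data, latent dimension, and sample points are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# In-distribution data lies near a low-dimensional subspace of the input space.
basis = rng.normal(size=(2, 8))
train = rng.normal(size=(500, 2)) @ basis  # ID training samples in R^8

# "Encoder": project onto the top principal directions learned from ID data;
# "decoder": project back into the input space.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]  # latent directions capturing the ID data

def reconstruction_error(x):
    z = (x - mean) @ components.T   # encode into the latent space
    x_hat = z @ components + mean   # decode back
    return float(np.linalg.norm(x - x_hat))

id_sample = rng.normal(size=2) @ basis   # lies in the ID subspace -> low error
ood_sample = rng.normal(size=8) * 5.0    # generic point off-subspace -> high error
assert reconstruction_error(id_sample) < reconstruction_error(ood_sample)
```

Thresholding this error then separates in-distribution from out-of-distribution samples, exactly as the VAE criterion above describes.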
  • A classifier is used to classify the extracted features to determine whether a sample is out-of-distribution; in step 232, some approaches modify the network structure into an (n+1)-way classifier, where n is the number of categories of the original classification task and the (n+1)-th class is the out-of-distribution class; in step 233, other approaches directly extract features for classification without modifying the network structure.
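The (n+1)-way variant of step 232 can be sketched as widening an n-way output head with one extra class standing for "out-of-distribution". The random weights, feature dimension, and class count below are placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

n_classes = 5   # n categories of the original classification task
feat_dim = 16

# An (n+1)-way linear head: the extra (last) row scores the OOD class.
w = rng.normal(size=(n_classes + 1, feat_dim))

def classify(features):
    # Argmax over n+1 logits; index n means the sample is out-of-distribution.
    logits = w @ features
    k = int(np.argmax(logits))
    return "out-of-distribution" if k == n_classes else f"class {k}"

pred = classify(rng.normal(size=feat_dim))
assert pred == "out-of-distribution" or pred.startswith("class ")
```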
  • Fig. 3 (a) is a flow chart of the implementation of the attention convolutional neural network in the fine-grained classification step in the steps of the ancient and modern art identification method of the present invention.
  • In step 321, the characteristic region of the image data to be tested is located; in step 322, the characteristic region is input into the convolutional neural network; in step 323, one part of the information of the characteristic region passing through the convolutional neural network enters the fully connected layer and the softmax logistic regression layer for classification; in step 324, another part of the information of the characteristic region passing through the convolutional neural network goes through the attention proposal sub-network (APN) to obtain a candidate region; in step 325, steps 323 and 324 are repeated so that the feature region selected by the APN is the most discriminative region; in step 326, a loss function is introduced to obtain higher accuracy in identifying this type of image information.
  • The "fine-grained" classification step operates below ordinary classification: for a finer division it is necessary to explicitly find the most "discriminative" features in the picture. For ancient and modern works of art, this means finding detailed characteristics, such as the degree to which petals turn up or the nuances of patterns.
  • Fig. 3(b) is a schematic diagram of the framework of a recurrent attention convolutional neural network ("RA-CNN") for an implementation of the fine-grained classification step in the identification method of the present invention.
  • The crop symbol in the figure means that a part of the characteristic region of the identified image information is cut out and enlarged.
  • Rows 301, 302, and 303 each represent an ordinary CNN network.
  • The input ranges from coarse full-size images to finer region attention (from top to bottom).
  • The picture (a1) in the first row 301 is the coarsest, and the picture (a3) in the third row is the finest.
  • After the image information a1 enters b1 (several convolutional layers), it splits into two paths: one path goes to c1 and connects to fully connected (FC) layers and a softmax logistic regression layer for plain classification, and the other path enters d1, the attention proposal sub-network ("Attention Proposal Network", APN), to obtain a candidate region.
  • The feature region is continuously enlarged and refined after two APNs.
  • A ranking loss is introduced that forces the classification confidence (confidence score) over regions a1, a2, a3 to increase (that is, the corresponding probability Pt in the last column of the figure gets higher and higher), which means that the accuracy of identifying the image information becomes higher and higher.
  • The network thus continuously refines the discriminative attention region.
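The ranking loss that forces the confidence of the ever-finer regions a1, a2, a3 to increase can be written as a pairwise hinge over successive scales; the margin value used here is an illustrative assumption.

```python
def ranking_loss(p_scales, margin=0.05):
    # p_scales: true-class probabilities P_t at successive scales (coarse -> fine).
    # Each finer scale is pushed to beat the previous scale by at least `margin`;
    # the loss is zero exactly when confidence strictly increases scale by scale.
    return sum(max(0.0, margin + p_prev - p_next)
               for p_prev, p_next in zip(p_scales, p_scales[1:]))

# Confidence increasing across scales -> zero loss.
assert ranking_loss([0.6, 0.7, 0.85]) == 0.0
# A finer scale that is *less* confident than the coarser one is penalized.
assert ranking_loss([0.7, 0.6, 0.85]) > 0.0
```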
  • Fig. 3(c) is a schematic diagram of the bilinear vector network structure of another embodiment of the fine-grained classification step in the identification method of the present invention.
  • Partial images 312 and 313 selected from the identified class image information 311 are input into two convolutional neural networks 314(A) and 315(B).
  • The outputs of convolutional neural network streams A and B are multiplied 318 by outer product at each position of the image and combined to obtain a bilinear vector 316, which is then passed through a classification layer 317 to obtain a prediction result.
  • fA and fB represent feature extraction functions, that is, convolutional network A and convolutional network B in Figure 3(c); P is a pooling function; and C is a classification function.
  • The feature extraction function f(·) (i.e., the CNN stream) consists of convolutional layers, pooling layers, and activation functions; this part of the network structure can be regarded as a function mapping.
  • The output of the pooling function P is an M×N matrix, and this feature matrix is stretched into a feature vector of size MN.
  • A classification function is used to classify the extracted features; the classification layer 317 is implemented using a logistic regression or support vector machine (SVM) classifier.
  • The CNN can acquire high-level semantic features of fine-grained images and, by iteratively training the convolution parameters in the network model, filter out irrelevant background information in the image.
  • Convolutional neural network stream A and convolutional neural network stream B play complementary roles in the image recognition task: network A can localize the object in the image, and network B extracts features at the object position localized by network A.
  • The two networks thus cooperate to complete class detection and target feature extraction for the input fine-grained image, and better accomplish the fine-grained image recognition task.
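The bilinear combination of the two streams (an outer product at each location, summed over locations, then stretched into the MN-sized vector described above) can be sketched as follows. The feature shapes are illustrative, and the signed square root plus L2 normalization is a commonly used bilinear-CNN post-processing step, not something stated in the patent.

```python
import numpy as np

rng = np.random.default_rng(3)

def bilinear_vector(feat_a, feat_b):
    # feat_a: (L, M) features from stream A; feat_b: (L, N) from stream B,
    # where L is the number of spatial locations.
    # Outer product at each location, summed over locations -> M x N matrix.
    mat = feat_a.T @ feat_b
    # Stretch the M x N feature matrix into an MN-sized vector, then apply
    # signed square root and L2 normalization (common bilinear-CNN practice).
    v = mat.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))
    return v / (np.linalg.norm(v) + 1e-12)

L, M, N = 49, 8, 8             # e.g. a 7x7 feature map, 8 channels per stream
fa = rng.normal(size=(L, M))   # output of CNN stream A
fb = rng.normal(size=(L, N))   # output of CNN stream B
v = bilinear_vector(fa, fb)
assert v.shape == (M * N,)
```

The resulting vector is what the classification layer 317 (logistic regression or SVM) consumes.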
  • Fig. 3(d) is a flow chart of the implementation mode of bilinear vector network in the step of fine-grained classification in the steps of the ancient and modern art identification method of the present invention.
  • The partial images 312 and 313 selected from the identified class image information 311 are input into two convolutional neural networks 314 (A) and 315 (B); in step 332, the outputs of convolutional neural network streams A and B are multiplied 318 using an outer product at each location of the image and combined to obtain a bilinear vector 316; in step 333, the prediction is obtained through the classification layer 317.
  • Fig. 3(e) is a flow chart of an embodiment in which the fine-grained classification step adopts the navigation-teaching-examination network (NTS-Net) classification in the steps of the ancient and modern artwork appraisal method of the present invention.
  • In step 341, multiple candidate boxes are generated on feature maps of different scales from the identified class image information, the coordinates of each candidate box corresponding to pre-designed anchors. In step 342, the "information content" of each candidate region is scored, with information-rich regions scoring high. In step 343, the feature map is passed through a feature extractor, a fully connected layer (FC), and a softmax step. In step 344, the probability that the input region belongs to the target label is judged. In step 345, the unnormalized probabilities (logits) extracted from each local region and from the whole image are concatenated (concat) into a long vector, and the unnormalized probabilities corresponding to the 200 classes are output.
  • The fine-grained classification step in the identification method of the present invention can also adopt the navigation-teaching-examination network (NTS-Net) classification method, which divides the network body into three components: navigation (Navigator), teaching (Teacher), and scrutinizing (Scrutinizer).
  • In the navigation step, multiple candidate boxes are generated on feature maps of different scales, and the coordinates of each candidate box correspond to the pre-designed anchors.
  • The Navigator scores the "information content" of each candidate region; regions with a large amount of information receive higher scores.
  • The teaching step is the commonly used combination of a feature extractor, a fully connected layer (FC), and a softmax step, which judges the probability that the input region belongs to the target label.
  • The scrutinizing step is a fully connected layer whose input concatenates (concat) the unnormalized probabilities (logits) extracted from each local region and from the whole image into one long vector, and which outputs the unnormalized probabilities corresponding to the 200 categories.
  • The specific steps of this NTS method are: 1) The original image of size (448, 448, 3) enters the network; after ResNet-50 extracts features, it becomes a (14, 14, 2048) feature map, which yields a 2048-dimensional feature vector after the global pooling layer and a 200-dimensional unnormalized probability after the global pooling layer and the fully connected layer.
  • 2) A region proposal network (RPN) for generating candidate regions generates anchors according to different sizes and aspect ratios on the three scales (14, 14), (7, 7), and (4, 4), for a total of 1614 anchors.
  • 3) Non-maximum suppression (NMS) is then used to remove redundant candidate regions.
  • Fig. 4 is a structural diagram of the ancient and modern art identification system of the present invention.
  • The server 401 of the ancient and modern artwork appraisal system includes a processor 410, which may be a general-purpose or special-purpose chip (ASIC/eASIC), an FPGA, an NPU, or the like, and a computer program product in the form of a memory 420 or a computer-readable medium.
  • Memory 420 may be electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 420 has a storage space 430 for program codes for performing any method steps in the methods described above.
  • the storage space 430 for program codes may include respective program codes 431 for respectively implementing various steps in the above methods.
  • These program codes can be read by or written into the processor 410.
  • These computer program products comprise program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 5 .
  • Fig. 5 is a computer product diagram of the portable or fixed storage unit of the ancient and modern art identification system of the present invention.
  • the storage unit may have storage segments, storage spaces, etc. arranged similarly to the memory 420 in the server of FIG. 4 .
  • the program code can, for example, be compressed in a suitable form.
  • the storage unit includes computer-readable code 431', i.e. code readable by a processor such as 410; when executed by the server, this code causes the server to perform the steps of the methods described above.
  • Fig. 6(1) is an example of authentic image information involved in an embodiment of the ancient and modern artwork identification method of the present invention.
  • Figure 6(2) is an example of authentic image information used to train the model.
  • Figure 6(3) is an example of the image information of the artwork to be identified.
  • Figure 6(4) is an example of classification.
  • Fig. 6(1) shows an example of image information of an authentic artwork. Taking the multiple pieces of image information obtained from 360 degrees around the authentic artwork, given in Fig. 6(2), as the standard, Fig. 6(3) shows the image information of the artwork to be identified, and Fig. 6(4) shows the resulting classification.
  • references herein to "one embodiment", "an embodiment", or "one or more embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Additionally, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
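The method's detector (step 102) separates in-distribution samples from out-of-distribution samples before classification. A minimal sketch of one common detector of this kind, which thresholds the classifier's maximum softmax probability, is shown below; both the use of softmax confidence and the 0.5 threshold are illustrative assumptions, not mechanisms stated in the specification.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def detect_in_distribution(logits, threshold=0.5):
    """Flag each sample as in-distribution (True) or out-of-distribution (False)
    by thresholding its maximum softmax probability."""
    confidence = softmax(logits).max(axis=-1)
    return confidence >= threshold

# One confident prediction and one near-uniform (low-confidence) prediction:
logits = np.array([[6.0, 1.0, 0.5],    # sharply peaked -> treated as in-distribution
                   [0.4, 0.5, 0.45]])  # near-uniform  -> treated as out-of-distribution
mask = detect_in_distribution(logits, threshold=0.5)
```

Only samples flagged `True` would proceed to the classification and fine-grained classification stages (steps 103 and 104).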

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

Method and system for authenticating ancient and modern works of art using artificial intelligence. The method comprises: inputting image information of an authentic work of art and image information of a work of art to be authenticated (101); detecting, by means of a detector, in-distribution samples and out-of-distribution samples from the image information of the work of art to be authenticated and the image information of the authentic work of art (102); classifying the in-distribution samples (103); performing fine-grained classification on the classified in-distribution samples and the class image information similar to the work of art to be authenticated (104); and outputting the classified in-distribution samples or the fine-grained classified samples (105, 105'), and obtaining the confidence of the image information of the work of art to be authenticated, compared against the image information of the authentic work of art, as the authentication conclusion. Authentication accuracy is thereby improved, and the amount of computation during model training is reduced.
PCT/CN2021/114254 2021-08-10 2021-08-24 Artificial-intelligence-based method and system for authenticating ancient and modern works of art WO2023015610A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110916187.X 2021-08-10
CN202110916187.XA CN115705688A (zh) 2021-08-10 Artificial-intelligence-based method and system for authenticating ancient and modern artworks

Publications (1)

Publication Number Publication Date
WO2023015610A1 true WO2023015610A1 (fr) 2023-02-16

Family

ID=85179636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114254 WO2023015610A1 (fr) 2021-08-24 Artificial-intelligence-based method and system for authenticating ancient and modern works of art

Country Status (2)

Country Link
CN (1) CN115705688A (fr)
WO (1) WO2023015610A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160140438A1 (en) * 2014-11-13 2016-05-19 Nec Laboratories America, Inc. Hyper-class Augmented and Regularized Deep Learning for Fine-grained Image Classification
CN106446874A (zh) * 2016-10-28 2017-02-22 王友炎 一种艺术品真迹鉴定仪及鉴定方法
CN109657527A (zh) * 2017-10-12 2019-04-19 上海友福文化艺术有限公司 一种画作笔触鉴定系统及方法
CN109670365A (zh) * 2017-10-12 2019-04-23 上海友福文化艺术有限公司 一种书法鉴定系统及方法
CN110232445A (zh) * 2019-06-18 2019-09-13 清华大学深圳研究生院 一种基于知识蒸馏的文物真伪鉴定方法
CN111539469A (zh) * 2020-04-20 2020-08-14 东南大学 一种基于视觉自注意力机制的弱监督细粒度图像识别方法
CN111898577A (zh) * 2020-08-10 2020-11-06 腾讯科技(深圳)有限公司 一种图像检测方法、装置、设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN115705688A (zh) 2023-02-17

Similar Documents

Publication Publication Date Title
US11868394B2 (en) Analyzing content of digital images
Zhou et al. Interpretable basis decomposition for visual explanation
Baró et al. Traffic sign recognition using evolutionary adaboost detection and forest-ECOC classification
CN111126482B (zh) 一种基于多分类器级联模型的遥感影像自动分类方法
CN109919252B (zh) 利用少数标注图像生成分类器的方法
JP2020522077A (ja) 画像特徴の取得
CN108492298B (zh) 基于生成对抗网络的多光谱图像变化检测方法
CN108334805B (zh) 检测文档阅读顺序的方法和装置
CN112633382A (zh) 一种基于互近邻的少样本图像分类方法及系统
CN111324765A (zh) 基于深度级联跨模态相关性的细粒度草图图像检索方法
CN112990282B (zh) 一种细粒度小样本图像的分类方法及装置
CN111325237A (zh) 一种基于注意力交互机制的图像识别方法
Soumya et al. Emotion recognition from partially occluded facial images using prototypical networks
Das et al. Determining attention mechanism for visual sentiment analysis of an image using svm classifier in deep learning based architecture
CN113792686A (zh) 基于视觉表征跨传感器不变性的车辆重识别方法
Al-Qudah et al. Synthetic blood smears generation using locality sensitive hashing and deep neural networks
CN116935411A (zh) 一种基于字符分解和重构的部首级古文字识别方法
WO2023015610A1 (fr) Procédé et système à base d'intelligence artificielle d'authentification d'œuvres d'art antérieur et moderne
Nithya et al. A review on automatic image captioning techniques
US20170293863A1 (en) Data analysis system, and control method, program, and recording medium therefor
Pryor et al. Deepfake Detection Analyzing Hybrid Dataset Utilizing CNN and SVM
CN113724261A (zh) 一种基于卷积神经网络的快速图像构图方法
El Barachi et al. A Hybrid Machine Learning Approach for Sentiment Analysis of Partially Occluded Faces
Ballary et al. Deep Learning based Facial Attendance System using Convolutional Neural Network
CN111340111B (zh) 基于小波核极限学习机识别人脸图像集方法

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE