TW202137079A - Image similarity analyzing system - Google Patents


Info

Publication number
TW202137079A
Authority
TW
Taiwan
Prior art keywords
image, representation, graph, module, generate
Application number
TW109109247A
Other languages
Chinese (zh)
Other versions
TWI778341B (en)
Inventor
張智堯
李嘉孟
蘇仁濬
Original Assignee
荷盛崧鉅智財顧問股份有限公司
Application filed by 荷盛崧鉅智財顧問股份有限公司 filed Critical 荷盛崧鉅智財顧問股份有限公司
Priority to TW109109247A priority Critical patent/TWI778341B/en
Publication of TW202137079A publication Critical patent/TW202137079A/en
Application granted granted Critical
Publication of TWI778341B publication Critical patent/TWI778341B/en


Landscapes

  • Image Analysis (AREA)

Abstract

An image similarity analyzing system comprises a trained first deep-learning module, a trained neural-network data processing module, a combination-and-learning unit, and a similarity analyzing unit. The first deep-learning module receives an image to generate an initial image representation. The neural-network data processing module receives the image rule data of the image under a specific image rule to generate an image rule representation. The combination-and-learning unit comprises a combination module and a trained second deep-learning module. The combination module combines the initial image representation and the image rule representation to generate input data. The second deep-learning module receives the input data to generate a final image representation. The similarity analyzing unit compares the final image representation with a reference image representation. In this way, the image rules of the intellectual property field can be incorporated into the analysis, remedying the weaknesses of image data comparison in that field.

Description

Image similarity analyzing system

The present invention relates to an image similarity analyzing system, and in particular to one that uses deep learning to intelligently process image-related intellectual property data.

Amid international technological competition, the development of intellectual property has become a critical part of industrial upgrading. As the knowledge economy sweeps the globe, the importance and value of intellectual property are beyond doubt; at the same time, emerging technologies are gradually reshaping the future direction of intellectual property services.

Traditionally, intellectual property work has required substantial manpower to analyze matters from technical, legal, and commercial perspectives, and thereby to derive strategies and actions that benefit rights holders.

Among these, the image-related parts of intellectual property, such as trademark images, copyrighted images, and industrial design images, are extremely labor-intensive in both prior-art searching and comparison. The results directly affect the scope of rights, approval rates, the likelihood of infringing or being infringed, and the likelihood of invalidating or being invalidated, and can legally and commercially produce significant gains or losses for an enterprise.

It is therefore necessary to apply today's increasingly mature artificial intelligence to address the problems of intellectual property work: heavy manual effort, frequent errors and disputes, and time-consuming, inefficient processes.

Accordingly, the main purpose of the present invention is to provide an image similarity analyzing system that uses deep learning to intelligently process image-related intellectual property data, so as to solve the above problems.

An object of the present invention is to provide an image similarity analyzing system for intellectual property fields that have specific image rules, for analyzing the similarity of an image relative to a reference image. The image similarity analyzing system includes a trained first deep learning module, a trained neural network data processing module, a combination-and-learning unit, and a similarity analyzing unit. The trained first deep learning module receives the image and generates an initial graph representation. The trained neural network data processing module receives the graph specification information of the image under the specific image specification and generates a graph specification representation accordingly. The combination-and-learning unit includes a combination module and a trained second deep learning module. The combination module combines the initial graph representation with the graph specification representation to generate input information. The trained second deep learning module receives the input information and generates a final graph representation. The similarity analyzing unit compares the final graph representation with a reference graph representation of the reference image to determine the similarity between the image and the reference image.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the neural network data processing module generates the graph specification representation using one-hot encoding (One Hot Encode).

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the graph specification information is generated using a graph classification database corresponding to the specific image specification, a knowledge graph library embodying the specific image specification, or a quantified specification rule corresponding to the specific image specification.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the combination-and-learning unit combines the initial graph representation and the graph specification representation into the input information by direct vector concatenation.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the graph specification representation has the same dimensionality as the initial graph representation, and the combination-and-learning unit combines the initial graph representation with the graph specification representation by using the graph specification representation as weights.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the trained first deep learning module and the trained second deep learning module are each at least one selected from the group of convolutional neural networks (Convolutional Neural Network; CNN) consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the trained first deep learning module also receives the reference image to generate an initial reference graph representation; the trained neural network data processing module receives the reference graph specification information of the reference image under the specific image specification and generates a reference graph specification representation accordingly; the combination module combines the initial reference graph representation with the reference graph specification representation to generate reference input information; and the trained second deep learning module receives the reference input information to generate the final reference graph representation.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the final graph representation and the reference graph representation have the same dimensionality.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the similarity analyzing unit compares the final graph representation with the reference graph representation to obtain a geometric distance in a multi-dimensional space, and determines the similarity between the image and the reference image according to that geometric distance.

To achieve at least one of the above or other advantages, an embodiment of the present invention provides an image similarity analyzing system in which the similarity analyzing unit sets at least one threshold and, by comparing the geometric distance with the threshold, determines whether the image is similar or identical to the reference image.
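The distance comparison and thresholding described in the two embodiments above can be sketched as follows. The choice of Euclidean distance and the two concrete threshold values are illustrative assumptions, since the text specifies only "a geometric distance" and "at least one threshold":

```python
import numpy as np

def geometric_distance(b, c):
    """Distance between the final graph representation b and the reference
    graph representation c in multi-dimensional space. Euclidean distance
    is an assumption; the specification says only 'geometric distance'."""
    return float(np.linalg.norm(np.asarray(b, dtype=float) - np.asarray(c, dtype=float)))

def judge(b, c, identical_th=0.1, similar_th=1.0):
    """Two hypothetical thresholds: below the smaller one the images are
    deemed identical, below the larger one similar, otherwise distinct."""
    d = geometric_distance(b, c)
    if d < identical_th:
        return "identical"
    if d < similar_th:
        return "similar"
    return "not similar"
```

Note that b and c must have the same dimensionality, consistent with the embodiment above in which the final and reference graph representations share the same number of dimensions.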

Therefore, the image similarity analyzing system for image-related intellectual property data provided by the present invention can effectively incorporate the image specifications already established in the intellectual property field, solving the problems of heavy manual effort, frequent errors and disputes, and time-consuming, inefficient comparison of image data (such as trademark images, copyrighted images, and design images) in that field.

The above description is only an overview of the technical solution of the present invention. To make the technical means of the present invention clearer, so that it can be implemented in accordance with this specification, and to make the above and other objects, features, and advantages of the present invention more readily apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.

100, 200, 300: graph representation generation system

120: first deep learning module

140: neural network data processing module

160: combination-and-learning unit

I: image

y: initial graph representation

Ir: graph specification information

z: graph specification representation

20: knowledge graph library

162, 320: combination module

164: second deep learning module

a: input information

b: final graph representation

280: training module

282: comparison image generation unit

284: optimization unit

I': comparison image

340: deep learning module

400: image similarity analyzing system

480: similarity analyzing unit

I0: reference image

c: reference graph representation

The accompanying drawings are included to provide a further understanding of the embodiments of the present application and constitute a part of this specification. They illustrate embodiments of the present application and, together with the description, explain its principles. It is apparent that the drawings described below show only some embodiments of the application; those of ordinary skill in the art can derive other drawings from them without creative effort. In the drawings:

Fig. 1 is a schematic diagram of an embodiment of the graph representation generation system of the present invention;

Fig. 2 is a schematic diagram of another embodiment of the graph representation generation system of the present invention;

Fig. 3 is a flowchart of an embodiment of the graph representation generation method of the present invention;

Fig. 4 is a flowchart of another embodiment of the graph representation generation method of the present invention;

Fig. 5 is a schematic diagram of an embodiment of the graph representation intelligence module of the present invention; and

Fig. 6 is a schematic diagram of an embodiment of the image similarity analyzing system of the present invention.

The specific structural and functional details disclosed herein are merely representative and serve the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms and should not be construed as limited to the embodiments set forth herein.

In the description of the present invention, it should be understood that orientation or positional terms such as "center", "lateral", "upper", "lower", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientations or positions shown in the drawings. They are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be understood as limiting the present invention. In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to; thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, unless otherwise specified, "plural" means two or more. The term "comprising" and any variants thereof are intended to cover non-exclusive inclusion.

In the description of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", and "coupled" are to be understood broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; or an internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of these terms in the present invention according to the specific circumstances.

The terminology used herein is for describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" as used herein are intended to include the plural as well. It should also be understood that the terms "comprises" and/or "comprising" as used herein specify the presence of the stated features, integers, steps, operations, units, and/or elements, without precluding the presence or addition of one or more other features, integers, steps, operations, units, elements, and/or combinations thereof.

Fig. 1 is a schematic diagram of an embodiment of the graph representation generation system of the present invention. The graph representation generation system 100 is used in intellectual property fields that have specific image specifications, to convert an image into a domain-adapted graph representation. The image may be one from the intellectual property field, such as a trademark graphic or an image design. The specific image specification may be a trademark-figurative-element classification or an industrial design classification, for example the Vienna Classification (a classification, established by the Vienna Agreement, for trademarks that consist of or contain figurative elements) or the Locarno Classification (an international classification for industrial design registration established by the Locarno Agreement).

As shown in the figure, the graph representation generation system 100 includes a first deep learning module 120, a neural network data processing module 140, and a combination-and-learning unit 160.

The first deep learning module 120 receives the image I and generates an initial graph representation y. In one embodiment, the first deep learning module 120 may be at least one selected from the group of convolutional neural networks consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet.

The neural network data processing module 140 receives the graph specification information Ir of the image I under the specific image specification, and generates a graph specification representation z according to the graph specification information Ir.

In one embodiment, as shown in the figure, a knowledge graph library 20 embodying the specific image specification can be used to analyze the image I and automatically generate the graph specification information Ir, although the invention is not limited to this. In one embodiment, a graph classification database corresponding to the specific image specification can be used to classify the image I and thereby generate the graph specification information Ir; this classification can be performed automatically by a computer or with human assistance. In one embodiment, the specific image specification can be quantified into a quantified specification rule, and the image I can be analyzed with that rule to generate the graph specification information Ir. For example, a reference image can be set and the pixels of the image I compared with those of the reference image; when the proportion of identical pixels exceeds a preset value, the two images are deemed similar and assigned to the same class.
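The pixel-comparison rule in the example above can be sketched as follows. The `min_ratio` threshold of 0.9 and the grayscale-array inputs are illustrative assumptions, since the text says only that the proportion of identical pixels must exceed a preset value:

```python
import numpy as np

def same_class_by_pixel_ratio(img, ref, min_ratio=0.9):
    """Quantified specification rule sketch: two grayscale images are
    deemed similar (same class) when the proportion of identical
    pixels exceeds a preset value (min_ratio, assumed 0.9 here)."""
    img = np.asarray(img)
    ref = np.asarray(ref)
    if img.shape != ref.shape:
        raise ValueError("images must have the same shape")
    ratio = np.mean(img == ref)  # proportion of identical pixels
    return bool(ratio > min_ratio)
```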

In one embodiment, the neural network data processing module 140 may be a simple neural network with a single hidden layer, or another shallow neural network (for example, with fewer than 10 hidden layers), whose number of hidden layers is significantly smaller than that of the first deep learning module 120, so as to reduce cost and simplify the architecture. However, the invention is not limited to this: if the image specification is very complex, then, to improve judgment accuracy, the neural network data processing module 140 may in one embodiment also be a deep learning module with a deep neural network.
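A single-hidden-layer network of the kind described can be sketched as follows. The layer sizes, the ReLU activation, and the random (untrained) weights are illustrative assumptions; only the input size of 29 follows the Vienna-classification example given later in this description:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 29 classification categories in,
# an 8-dimensional graph specification representation z out.
n_in, n_hidden, n_out = 29, 16, 8

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def shallow_net(ir_vector):
    """Single hidden layer: graph specification information -> representation z."""
    h = np.maximum(0.0, ir_vector @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

ir = np.zeros(n_in)
ir[[1, 4]] = 1.0          # image associated with two categories
z = shallow_net(ir)       # 8-dimensional graph specification representation
```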

In one embodiment, the neural network data processing module 140 generates the graph specification representation z using one-hot encoding. The dimensionality of the graph specification representation z output by the neural network data processing module 140 can be adjusted according to user needs and the actual training and operating conditions of the graph representation generation system.

The combination-and-learning unit 160 includes a combination module 162 and a second deep learning module 164. The combination module 162 combines the initial graph representation y with the graph specification representation z to generate input information a. In one embodiment, the combination is performed by direct vector concatenation. The invention is not limited to this, however: in one embodiment, the combination module 162 of the combination-and-learning unit 160 may instead merge the initial graph representation y with the graph specification representation z by using z as weights on y. Direct vector concatenation is not restricted by the dimensionalities of y and z, but yields input information a with more dimensions. Merging with z as weights on y effectively reduces the dimensionality of a, but requires y and z to have the same dimensionality.

The second deep learning module 164 receives the input information a and generates a final graph representation b. In one embodiment, the second deep learning module is at least one selected from the group of convolutional neural networks consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet.

Because the input information a received by the second deep learning module 164 contains the graph specification representation z corresponding to the graph specification information Ir, the final graph representation b generated by the second deep learning module 164 effectively incorporates the image specifications already established in the intellectual property field, making the final graph representation b output by the graph representation generation system 100 closer to actual judgment and analysis results in that field. Furthermore, the present invention uses the relatively simple neural network data processing module 140 to process the image specification, which helps both to reduce cost and to increase computation speed.

The following takes a 2D trademark graphic as an example of this embodiment. Suppose the image I input to the graph representation generation system 100 is a 2D trademark of 16x16 grayscale pixels, and the specific image specification is the trademark-figurative-element classification of the eighth edition of the Vienna Agreement, which has 29 categories. Suppose the graph specification information generated for image I according to this classification is Ir = {Ir1, Ir2, ..., Ir29}, where each of Ir1, Ir2, ..., Ir29 is a binary value 0 or 1 indicating the association of image I with the corresponding category: 1 is filled in if the image belongs to the category, 0 otherwise. In other words, the graph specification information Ir is the classification result of image I under the trademark-figurative-element classification.

As mentioned above, the image I input to the first deep learning module 120 can be expressed as x = {x1, x2, ..., x256}, where x1, x2, ..., x256 represent the gray levels of the individual pixels. The initial graph representation generated by the first deep learning module 120 is y = {y1, y2, ..., ym}, and the graph specification representation generated from the graph specification information Ir by one-hot encoding is z = {z1, z2, ..., zi}. Here m and i denote the dimensionalities of the initial graph representation y and the graph specification representation z, respectively; both can be adjusted according to the user's actual needs.
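The construction of Ir just described can be sketched as follows. The helper name `encode_ir` and the example category indices are hypothetical; note that the vector becomes multi-hot when an image falls under several of the 29 categories:

```python
import numpy as np

N_CATEGORIES = 29  # categories in the Vienna classification example

def encode_ir(category_indices):
    """Encode the graph specification information Ir as a 29-dimensional
    binary vector: 1 for each category the image belongs to, 0 elsewhere."""
    ir = np.zeros(N_CATEGORIES, dtype=int)
    ir[list(category_indices)] = 1
    return ir

# An image classified under categories 3 and 27 (0-based indices 2 and 26):
ir = encode_ir([2, 26])
```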

If the combination module 162 combines the initial graph representation y with the graph specification representation z by direct vector concatenation, the input information is a = {y1, y2, ..., ym, z1, z2, ..., zi}. If the combination module 162 instead merges them by using the graph specification representation z as weights on the initial graph representation y, the dimensionality i of z must equal the dimensionality m of y, and the input information is a = {y1z1, y2z2, ..., ymzm}. The second deep learning module 164 receives the input information a and accordingly generates the final graph representation b = {b1, b2, ..., bn}, where n denotes the dimensionality of b and can be adjusted according to actual needs.
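A minimal sketch of the two combination methods, using small arbitrary vectors for y and z:

```python
import numpy as np

def combine_concat(y, z):
    """Direct vector concatenation: a = {y1..ym, z1..zi}.
    No dimensionality constraint, but a has m + i dimensions."""
    return np.concatenate([y, z])

def combine_weighted(y, z):
    """z used as weights on y: a = {y1*z1, ..., ym*zm}.
    Requires m == i, but keeps a at m dimensions."""
    if y.shape != z.shape:
        raise ValueError("weighted merge requires equal dimensions")
    return y * z

y = np.array([0.5, -1.0, 2.0])
z = np.array([1.0, 0.0, 2.0])
a1 = combine_concat(y, z)    # 6-dimensional input information
a2 = combine_weighted(y, z)  # 3-dimensional input information
```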

The final graph representation b contains the information associating the image I with the trademark-figurative-element classification. Therefore, using the final graph representation b generated by the graph representation generation system 100 of this embodiment as the object of prior-art searching and comparison effectively incorporates the image specifications already established in the intellectual property field, improves the accuracy of judgment, and effectively solves the problems of heavy manual effort, frequent errors and disputes, and time-consuming, inefficient processing of image data in that field.

Fig. 2 is a schematic diagram of another embodiment of the graph representation generation system of the present invention, a graph representation generation system 200. Compared with the graph representation generation system 100 of Fig. 1, the graph representation generation system 200 of this embodiment has an automatic training function: it can directly use the final graph representation b produced by encoding to reversely correct the parameters of the first deep learning module 120, the neural network data processing module 140, and the second deep learning module 164.

As shown in the figure, the graph representation generation system 200 of this embodiment includes, in addition to the first deep learning module 120, the neural network data processing module 140, and the combination-and-learning unit 160, a training module 280. The training module 280 includes a comparison image generation unit 282 and an optimization unit 284. The comparison image generation unit 282 receives the final graph representation b generated by the second deep learning module 164 and, according to the encoding by which the first deep learning module 120, the neural network data processing module 140, and the combination-and-learning unit 160 produced b, decodes b to reconstruct a comparison image I' corresponding to the image I.

The optimization unit 284 receives the comparison image I' and computes a loss function between the comparison image I' and the original image I in order to optimize the first parameters of the first deep-learning module 120, the second parameters of the neural-network data processing module 140, and the third parameters of the second deep-learning module 164. In other words, the optimization unit 284 of the training module 280 corrects the first, second, and third parameters with the goal of reducing the loss function. In one embodiment, the loss function may be the mean square error (MSE) over the gray levels of all corresponding pixels of the comparison image I' and the original image I. In another embodiment, it may be the mean absolute error (MAE) over the same pixels. The invention is not limited to these choices; any loss function suitable for image comparison, such as the Huber loss or the Log-Cosh loss, is applicable.
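As a concrete illustration of the two pixel-wise losses named above, the following is a minimal sketch in pure Python. The 2×2 gray-level images and their values are illustrative assumptions, not data from the patent.

```python
# Hedged sketch: pixel-wise MSE and MAE between an original image I and a
# decoded comparison image I', both flattened to lists of gray levels.

def mse(original, reconstructed):
    """Mean square error over all corresponding pixels."""
    assert len(original) == len(reconstructed)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

def mae(original, reconstructed):
    """Mean absolute error over all corresponding pixels."""
    assert len(original) == len(reconstructed)
    return sum(abs(o - r) for o, r in zip(original, reconstructed)) / len(original)

# Example: a flattened 2x2 gray-level image I and a reconstruction I'
image_i = [10, 20, 30, 40]
image_i_prime = [12, 18, 33, 40]

print(mse(image_i, image_i_prime))  # (4 + 4 + 9 + 0) / 4 = 4.25
print(mae(image_i, image_i_prime))  # (2 + 2 + 3 + 0) / 4 = 1.75
```

Either quantity (or a robust alternative such as the Huber loss) can serve as the scalar the optimization unit drives toward zero.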

Through the operation of the training module 280 described above, the image representation generation system 200 of this embodiment can automatically decode the encoded final image representation b back into the comparison image I' and run the training procedure to optimize the first parameters of the first deep-learning module 120, the second parameters of the neural-network data processing module 140, and the third parameters of the second deep-learning module 164, without manual intervention.

FIG. 3 is a flowchart of an embodiment of the image representation generation method of the present invention. The method is intended for intellectual-property fields governed by a specific image rule and transforms an image into a domain-adapted image representation. It can be carried out with the image representation generation system 100 shown in FIG. 1.

As shown in the figure, the method includes the following steps.

Referring also to FIG. 1: first, in step S120, the image I is provided to a first deep-learning model to produce an initial image representation y. This step may be performed by the first deep-learning module 120 of FIG. 1. In one embodiment, the first deep-learning model is provided by at least one member of the convolutional neural network family consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet.

Next, in step S140, the image rule information Ir of the image I under the specific image rule is provided to a neural-network model to produce an image rule representation z. This step may be performed by the neural-network data processing module 140 of FIG. 1. In one embodiment, this step uses one-hot encoding to produce the image rule representation z from the image rule information Ir.

In one embodiment, the image rule information Ir is produced by analyzing the image I against an image classification database corresponding to the specific image rule. In another embodiment, Ir is produced by analyzing the image I with a knowledge graph library that encodes the specific image rule. In yet another embodiment, Ir is produced by analyzing the image I with quantified rule criteria derived by quantifying the specific image rule.
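The one-hot encoding of step S140 can be sketched as follows. The five-category list is a made-up stand-in; a real figurative-element scheme (for instance a trademark classification such as the Vienna Classification) has far more categories.

```python
# Hedged sketch: one-hot encoding a classification category drawn from an
# image rule scheme into an image rule representation z. CATEGORIES is an
# illustrative assumption, not the patent's actual scheme.

CATEGORIES = ["human", "animal", "plant", "geometric", "text"]

def one_hot(category, categories=CATEGORIES):
    """Return a vector with a single 1 at the category's position."""
    vector = [0] * len(categories)
    vector[categories.index(category)] = 1
    return vector

z = one_hot("geometric")
print(z)  # [0, 0, 0, 1, 0]
```

The dimension of z then equals the number of categories in the scheme, which is one reason the patent later notes that this dimension may be tuned to the system's needs.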

Next, in step S160, the initial image representation y is combined with the image rule representation z to produce input information a. This step may be performed by the combination module 162 of FIG. 1. In one embodiment, this step directly concatenates the vectors of the initial image representation y and the image rule representation z. In another embodiment, when the image rule representation z has the same dimension as the initial image representation y, this step merges them by applying z as a weight to y.
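The two combination strategies named for step S160 can be sketched as follows; the three-element toy vectors are illustrative assumptions.

```python
# Hedged sketch of step S160: (1) direct vector concatenation, and
# (2) element-wise weighting of y by z when their dimensions match.

def combine_concat(y, z):
    """Direct concatenation of the two representation vectors."""
    return y + z

def combine_weighted(y, z):
    """Element-wise weighting; only defined when len(y) == len(z)."""
    assert len(y) == len(z), "weighting assumes equal dimensions"
    return [yi * zi for yi, zi in zip(y, z)]

y = [0.5, 1.0, 2.0]  # initial image representation (toy values)
z = [1.0, 0.0, 2.0]  # image rule representation (toy values)

print(combine_concat(y, z))    # [0.5, 1.0, 2.0, 1.0, 0.0, 2.0]
print(combine_weighted(y, z))  # [0.5, 0.0, 4.0]
```

Concatenation preserves both representations in full, while weighting lets the rule representation suppress or amplify individual components of the image representation; the patent leaves the choice to the embodiment.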

Finally, in step S180, the input information a is provided to a second deep-learning model to produce the final image representation b. This step may be performed by the second deep-learning module 164 of FIG. 1. In one embodiment, the second deep-learning model is provided by at least one member of the convolutional neural network family consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet.

FIG. 4 is a flowchart of another embodiment of the image representation generation method of the present invention. Compared with the method of FIG. 3, the method of this embodiment includes a training step: the encoded final image representation b can be used directly to back-correct the parameters of the first deep-learning model, the neural-network model, and the second deep-learning model. In one embodiment, this method can be carried out with the image representation generation system 200 shown in FIG. 2.

Continuing from step S180 of FIG. 3, as shown in the figure, after the step of producing the final image representation b, this embodiment further includes a comparison-image generation step S192 and a parameter optimization step S194, which can automatically correct the parameters of the first deep-learning model used in step S120, the neural-network model used in step S140, and the second deep-learning model used in step S180.

The comparison-image generation step S192 decodes the final image representation b, according to the encoding by which steps S120 through S180 produced it, to reconstruct a comparison image I' corresponding to the original image I. Referring also to FIG. 2, in one embodiment this step can be performed by the comparison-image generation unit 282 of the training module 280.

The parameter optimization step S194 computes a loss function between the comparison image I' and the original image I and uses it to optimize the parameters of the first deep-learning model used in step S120, the neural-network model used in step S140, and the second deep-learning model used in step S180. In one embodiment, this step can be performed by the optimization unit 284 of the training module 280, which corrects the parameters with the goal of reducing the value of the loss function.

FIG. 5 is a schematic diagram of an embodiment of the image representation intellectual module of the present invention. The image representation intellectual module 300 is intended for intellectual-property fields governed by a specific image rule and transforms an image I into a domain-adapted image representation. The module 300 roughly corresponds to the combination-and-learning unit 160 of FIG. 1.

As shown in the figure, the image representation intellectual module 300 includes a combination module 320 and a deep-learning module 340. The combination module 320 receives the initial image representation y corresponding to the image I and the image rule representation z corresponding to the image I under the specific image rule, and combines y and z to produce input information a. In one embodiment, the image rule representation z is produced by one-hot encoding.

In one embodiment, the combination module 320 combines the initial image representation y with the image rule representation z to produce the input information a by direct vector concatenation, though the invention is not limited to this. In another embodiment, the combination module 320 may instead merge the two by applying the image rule representation z as a weight to the initial image representation y. For details of the initial image representation y and the image rule representation z, see the embodiment of FIG. 1; they are not repeated here.

The deep-learning module 340 receives the input information a produced by the combination module 320 and produces the final image representation b. In one embodiment, the deep-learning module 340 is at least one member of the convolutional neural network family consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet. In one embodiment, the deep-learning module 340 corresponds to the second deep-learning module 164 of FIG. 1, and its training procedure corresponds to that of the second deep-learning module 164 in FIG. 2; neither is repeated here.

The image representation intellectual module 300 may be software, hardware, or a combination of both. In practice, it can be combined with a user's existing deep-learning and neural-network modules to form the image representation generation system 100 shown in FIG. 1 and produce domain-adapted image representations for the user's analysis. For example, the module 300 may be implemented in a general programming language or with other existing programs and stored on a known computer-usable medium; it may be converted into hardware by an integrated-circuit process; or some of its sub-modules may be implemented in software while the rest are converted into hardware by an integrated-circuit process.

FIG. 6 is a schematic diagram of an embodiment of the image similarity analyzing system 400 of the present invention. The image similarity analyzing system 400 is intended for intellectual-property fields governed by a specific image rule and analyzes the similarity of an image I to a reference image I0.

The image similarity analyzing system 400 includes a trained first deep-learning module 120, a trained neural-network data processing module 140, a combination-and-learning unit 160, and a similarity analyzing unit 480. The combination-and-learning unit 160 includes a combination module 162 and a trained second deep-learning module 164.

The trained first deep-learning module 120 receives the image I and produces the initial image representation y. In one embodiment, the trained first deep-learning module 120 is at least one member of the convolutional neural network family consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet. The trained neural-network data processing module 140 receives the image rule information Ir of the image I under the specific image rule and produces the image rule representation z accordingly.

In one embodiment, as shown in the figure, a knowledge graph library 20 that encodes the specific image rule can analyze the image I and automatically produce the image rule information Ir, though the invention is not limited to this. In another embodiment, the image I can be classified against an image classification database corresponding to the specific image rule to produce Ir; this classification may be performed automatically by a computer or with human assistance. In yet another embodiment, the specific image rule may be quantified into quantified rule criteria, which are then used to analyze the image I and produce Ir.

In one embodiment, the trained neural-network data processing module 140 may be a simple neural network with a single hidden layer, or another shallow neural network (for example, with fewer than ten hidden layers) whose hidden-layer count is markedly smaller than that of the trained first deep-learning module 120, which reduces cost and architectural complexity. In one embodiment, the trained neural-network data processing module 140 produces the image rule representation z by one-hot encoding. The dimension of z can be adjusted to the user's needs and to the actual training and operating conditions of the similarity analyzing system 400.

The combination-and-learning unit 160 includes the combination module 162 and the trained second deep-learning module 164. The combination module 162 combines the initial image representation y with the image rule representation z to produce the input information a. In one embodiment, the combination is direct vector concatenation, though the invention is not limited to this; in another embodiment, the combination module 162 may instead apply z as a weight to y. The trained second deep-learning module 164 receives the input information a and produces the final image representation b, which contains the information relating the image I to the specific image rule (that is, the image rule information Ir). In one embodiment, the second deep-learning module is at least one member of the convolutional neural network family consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet.

The first deep-learning module 120, the neural-network data processing module 140, and the second deep-learning module 164 can be trained, for example, with the architecture shown in FIG. 2.

The similarity analyzing unit 480 compares the final image representation b with the reference image representation c of the reference image I0 to determine the similarity between the image I and the reference image I0. In one embodiment, the reference image representation c may be produced by processing the reference image I0 through the trained first deep-learning module 120, the trained neural-network data processing module 140, and the combination-and-learning unit 160. That is, the reference image I0 and the image I can undergo the same processing to produce the reference image representation c and the final image representation b, respectively, for similarity analysis and comparison.

In one embodiment, the reference image representation c and the final image representation b may have the same dimension to facilitate their comparison. The similarity analyzing unit 480 can compare c and b to produce a geometric distance and determine the similarity between the image I and the reference image I0 from that distance. That is, the final image representation b = {b1, b2, ..., bn} and the reference image representation c = {c1, c2, ..., cn} are treated as two points in an n-dimensional representation space, and the geometric distance between these two points in that space is computed to judge the similarity between the image I and the reference image I0.
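The geometric distance between two representations can be sketched as follows. The patent does not fix the metric; Euclidean distance is assumed here, and the 3-dimensional vectors are illustrative.

```python
# Hedged sketch: the geometric distance between the final representation b
# and the reference representation c as two points in n-dimensional space,
# read here as Euclidean distance (an assumption, not fixed by the patent).

import math

def geometric_distance(b, c):
    assert len(b) == len(c), "representations must share a dimension"
    return math.sqrt(sum((bi - ci) ** 2 for bi, ci in zip(b, c)))

b = [1.0, 2.0, 2.0]  # final image representation (toy values)
c = [1.0, 0.0, 2.0]  # reference image representation (toy values)
print(geometric_distance(b, c))  # 2.0
```

A smaller distance means the two images sit closer in the representation space and are therefore judged more similar.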

In one embodiment, in view of the particularities of the intellectual-property field, the similarity analyzing unit 480 may set at least one threshold and, by comparing the geometric distance against the threshold, judge whether the image is similar or identical to the reference image. For example, assuming that all publicly registered trademark images satisfy the similarity analysis, the geometric distances among the final image representations of those registered trademarks can be tallied, and the minimum of those distances set as the threshold. In one embodiment, an identity threshold and a similarity threshold may both be set: when the geometric distance is below the identity threshold, the image I is judged identical to the reference image I0; when it is below the similarity threshold, the image I is judged similar to I0. These thresholds can be set by analyzing the data in an intellectual-property database.
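The two-threshold judgment can be sketched as a simple decision function. The numeric thresholds below are illustrative assumptions; the patent derives real thresholds from registered-trademark data.

```python
# Hedged sketch: classify a geometric distance using an identity threshold
# (below it, images are deemed identical) and a larger similarity threshold
# (below it, merely similar). Threshold values are made up for illustration.

def judge(distance, identity_threshold=0.1, similarity_threshold=1.0):
    if distance < identity_threshold:
        return "identical"
    if distance < similarity_threshold:
        return "similar"
    return "dissimilar"

print(judge(0.05))  # identical
print(judge(0.5))   # similar
print(judge(2.0))   # dissimilar
```

Because the identity threshold is strictly smaller than the similarity threshold, any distance judged "identical" would also satisfy the similarity test, matching the nested reading of the patent's two-threshold embodiment.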

If the image similarity analyzing system 400 is applied to the trademark field, for example, every registered and published trademark image in the trademark database can be fed into the front half of the system 400 (or into the trained image representation generation system 100) to obtain the image representations of those trademark images; these representations then serve as the reference image representations c of this embodiment. A user can thereby run the image similarity analyzing system 400 before filing a trademark application to check whether a similar or identical mark already exists among the registered and published trademarks, assess the likelihood of approval, and decide whether the trademark design needs adjustment.
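The pre-filing check described above amounts to a nearest-neighbor search over precomputed representations. A minimal end-to-end sketch follows; `encode()` is a made-up stand-in for the full pipeline (modules 120, 140, and 160), the mark names and vectors are invented, and the threshold is illustrative.

```python
# Hedged sketch of the pre-filing trademark check: precompute representations
# for registered marks, then report every mark whose geometric distance to the
# query falls below a similarity threshold, nearest first.

import math

def encode(image):
    # Stand-in: the real system runs the image through the trained
    # deep-learning and rule-encoding modules. Here each "image" is
    # already a small representation vector.
    return image

def search(query_image, database, threshold=1.0):
    q = encode(query_image)
    hits = []
    for name, image in database.items():
        d = math.dist(q, encode(image))  # Euclidean distance (Python 3.8+)
        if d < threshold:
            hits.append((name, round(d, 3)))
    return sorted(hits, key=lambda pair: pair[1])

registered = {
    "mark_a": [0.0, 0.0],
    "mark_b": [0.5, 0.5],
    "mark_c": [5.0, 5.0],
}
print(search([0.1, 0.1], registered))  # mark_a and mark_b hit, mark_c does not
```

An empty result list would suggest no conflicting mark among the registered references, while any hit flags a registration that warrants a closer look before filing.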

In summary, the image similarity analyzing system provided by the present invention effectively incorporates the image rules already established in the intellectual-property field, alleviating the labor cost, frequent errors and disputes, and time-consuming inefficiency that burden the comparison of image data (for example, trademark images, copyrighted images, or industrial-design images) in that field.

The foregoing are merely preferred embodiments of the present invention and do not limit the invention in any form. Although the invention has been disclosed above through preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the invention, use the methods and technical content disclosed above to make minor changes or modifications yielding equivalent embodiments; any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the invention remains within the scope of the technical solution of the invention.

400: image similarity analyzing system
120: first deep-learning module
140: neural-network data processing module
160: combination-and-learning unit
I: image
y: initial image representation
Ir: image rule information
z: image rule representation
162: combination module
164: second deep-learning module
a: input information
b: final image representation
480: similarity analyzing unit
I0: reference image
c: reference image representation

Claims (11)

1. An image similarity analyzing system, for use in an intellectual-property field governed by a specific image rule, for analyzing the similarity of an image to a reference image, the image similarity analyzing system comprising: 
a trained first deep-learning module for receiving the image to produce an initial image representation; 
a trained neural-network data processing module for receiving image rule information of the image under the specific image rule and producing an image rule representation according to the image rule information; 
a combination-and-learning unit comprising a combination module and a trained second deep-learning module, the combination module combining the initial image representation and the image rule representation to produce input information, and the trained second deep-learning module receiving the input information to produce a final image representation; and 
a similarity analyzing unit for comparing the final image representation with a reference image representation of the reference image to determine the similarity between the image and the reference image. 
2. The image similarity analyzing system of claim 1, wherein the trained neural-network data processing module produces the image rule representation by one-hot encoding. 
3. The image similarity analyzing system of claim 1, wherein the image rule information is produced with an image classification database corresponding to the specific image rule, a knowledge graph library encoding the specific image rule, or quantified rule criteria corresponding to the specific image rule. 
4. The image similarity analyzing system of claim 1, wherein the combination-and-learning unit combines the initial image representation with the image rule representation to produce the input information by direct vector concatenation. 
5. The image similarity analyzing system of claim 1, wherein the image rule representation has the same dimension as the initial image representation. 
6. The image similarity analyzing system of claim 5, wherein the combination-and-learning unit combines the initial image representation and the image rule representation by applying the image rule representation as a weight. 
7. The image similarity analyzing system of claim 1, wherein the trained first deep-learning module and the trained second deep-learning module are selected from at least one member of the convolutional neural network (CNN) family consisting of LeNet, AlexNet, VGG, GoogLeNet, and ResNet. 
8. The image similarity analyzing system of claim 1, wherein the trained first deep-learning module further receives the reference image to produce an initial reference image representation, the trained neural-network data processing module further receives reference image rule information of the reference image under the specific image rule and produces a reference image rule representation accordingly, the combination module further combines the initial reference image representation and the reference image rule representation to produce reference input information, and the trained second deep-learning module further receives the reference input information to produce the reference image representation. 
9. The image similarity analyzing system of claim 1, wherein the final image representation has the same dimension as the reference image representation. 
10. The image similarity analyzing system of claim 1, wherein the similarity analyzing unit compares the final image representation with the reference image representation to produce a geometric distance in a multi-dimensional space and determines the similarity between the image and the reference image according to the geometric distance. 
11. The image similarity analyzing system of claim 10, wherein the similarity analyzing unit sets at least one threshold and, by comparing the geometric distance against the threshold, judges whether the image is similar to the reference image.
TW109109247A 2020-03-19 2020-03-19 Image similarity analyzing system TWI778341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109109247A TWI778341B (en) 2020-03-19 2020-03-19 Image similarity analyzing system


Publications (2)

Publication Number Publication Date
TW202137079A true TW202137079A (en) 2021-10-01
TWI778341B TWI778341B (en) 2022-09-21

Family

ID=79601051

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109109247A TWI778341B (en) 2020-03-19 2020-03-19 Image similarity analyzing system

Country Status (1)

Country Link
TW (1) TWI778341B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI689875B (en) * 2018-06-29 2020-04-01 由田新技股份有限公司 Defect inspection and classification apparatus and training apparatus using deep learning system

Also Published As

Publication number Publication date
TWI778341B (en) 2022-09-21

CN113496233A (en) Image approximation degree analysis system
Zhang et al. TCDM: Transformational Complexity Based Distortion Metric for Perceptual Point Cloud Quality Assessment
Yan et al. A Transformer-Based Unsupervised Domain Adaptation Method for Skeleton Behavior Recognition

Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent