TW201001194A - Image matching, distortion, integration, and synthesis system, and method thereof - Google Patents

Image matching, distortion, integration, and synthesis system, and method thereof

Info

Publication number
TW201001194A
Authority
TW
Taiwan
Prior art keywords
data
image
module
records
processing device
Prior art date
Application number
TW97122487A
Other languages
Chinese (zh)
Other versions
TWI361983B (en)
Inventor
Yeong-Sung Lin
Original Assignee
Tlj Intertech Inc
Priority date
Filing date
Publication date
Application filed by Tlj Intertech Inc filed Critical Tlj Intertech Inc
Priority to TW97122487A priority Critical patent/TW201001194A/en
Publication of TW201001194A publication Critical patent/TW201001194A/en
Application granted granted Critical
Publication of TWI361983B publication Critical patent/TWI361983B/zh


Abstract

This invention discloses an image matching, distortion, integration, and synthesis system and method. The system receives data sent to a data processing device from an external device and stores the data in a database. According to operating commands and conditions, it retrieves matching first data from the database; according to further search commands and conditions, it retrieves matching second data from the database. It then compares the first and second data according to comparison commands and conditions, and generates matching third data from the second data. Finally, the first, second, and/or third data are output to a display unit for distortion, integration, and synthesis processing, thereby providing value-added data processing services.

Description

TECHNICAL FIELD

The present invention relates to a data processing system, and more particularly to a data processing system and method for image matching, deformation, fusion, and synthesis.

BACKGROUND

With the rapid progress of information technology and the maturing of hardware and software, the use of data processing devices such as computers, notebook computers, personal digital assistants, smart phones, and third-generation (3G) or 3.5G mobile phones for computing, storing, transmitting, and/or managing data has long been an important part of daily life. Moreover, as network infrastructure has become widespread, combining data processing devices with networks allows data to be stored, transmitted, and/or managed from virtually anywhere.

However, along with the great improvements in processing efficiency, storage capacity, and network transmission speed, the types and amounts of data grow day by day. How to locate target data in vast local or remote data pools has become a central topic in the development of data search technology. Furthermore, how to go beyond simple, well-defined searches by combining data search technology with data matching technology, so as to provide value-added search services, has become a new challenge for network service providers and data search providers.

SUMMARY OF THE INVENTION

To overcome the drawbacks of the prior art described above, the present invention provides an image matching, deformation, fusion, and synthesis system applied to a data processing device having a display unit. The system comprises: a data input module for receiving data transmitted from an external device to the data processing device; a database for storing the still and/or moving image data received from the external device through the data input module; a data capture module for retrieving from the database first data, namely still and/or moving images that meet the operating commands and conditions received through the data input module; a data search module for retrieving from the database second data that meet the search commands and conditions entered through the data input module and/or preset search commands and conditions; a data comparison module for comparing the first data with the second data according to the comparison commands and conditions entered through the data input module and/or preset comparison commands and conditions, and for generating from the second data third data that meet those commands and conditions; and a data output module for outputting the first data, the second data, and/or the third data to the display unit.

In one aspect of the present invention, the data comparison module is further used to compare, according to the entered and/or preset comparison commands and conditions, the degree of similarity among plural records of the first data, among plural records of the second data, among plural records of the third data, or between any single or plural records of the first, second, and third data, and the data output module is further used to display the similarity results. Preferably, the comparison conditions are the degree of approximation in size, color, gray scale, color gradient, or position of a specified still image region.

In another aspect, the system further comprises an image fusion module for fusing plural records of the first data, plural records of the second data, plural records of the third data, or any single or plural records of the first, second, and third data with one another, thereby generating fourth data. The degree of fusion among the first, second, and/or third data can be entered through the data input module.

In another aspect, the system further comprises a setting module that provides a setting interface on the display unit of the data processing device, allowing the user to configure the capture, search, comparison, and/or fusion functions applied to the first, second, third, and/or fourth data.

In another aspect, the system is applied to a plurality of data processing devices interconnected through a network or a wireless communication network.

In another aspect, the system further comprises a transmission module for transmitting the captured, searched, compared, and/or fused first, second, third, and/or fourth data from the data processing device to other data processing devices through a network or a wireless communication network, and for receiving first, second, third, and/or fourth data that have been captured, searched, compared, and/or fused by other data processing devices.

In another aspect, the system further comprises an image feature capture module for identifying facial image data in the first, second, third, and/or fourth data, extracting the feature points in the identified facial image data, and locating those feature points; and an image deformation module for deforming the feature points identified by the image feature capture module according to the text, symbols, and/or operating commands transmitted to the data processing device through the data input module.

In another aspect, after the image feature capture module has identified facial image data in the first, second, third, and/or fourth data, the image fusion module can take the facial image data of one of them as base facial image data and, according to a fusion ratio set by the text, symbols, and/or operating commands transmitted through the data input module, fuse the facial image data identified in the other data into the base facial image data at the set ratio.

In another aspect, the system further comprises an image adjustment module. After the image feature capture module has identified the facial image data in the first, second, third, and/or fourth data, or has further extracted and located the feature points in the identified facial image data, the image adjustment module adjusts the facial image data according to the adjustment range, content, ratio, and/or feature points to be adjusted, as set by the text, symbols, and/or operating commands transmitted to the data processing device through the data input module.

In another aspect, when the image feature capture module identifies plural facial image data in the first, second, third, and/or fourth data and extracts and locates the feature points of each, the image fusion module can fuse the feature points of the facial image data of the first, second, third, or fourth data with the feature points of the other identified facial image data according to the fusion ratio set through the data input module, thereby forming new facial image data whose feature points have been fused.

In another aspect, after receiving data transmitted to the data processing device from an external device, the data input module may selectively store the data in the database, or store it only temporarily in the data processing device.

The present invention further provides an image matching, deformation, fusion, and synthesis method applied in a data processing device having a display unit. The method comprises: receiving data transmitted from an external device to the data processing device; storing the still and/or moving image data received from the external device through the data input module in a database; retrieving from the database, according to the received operating commands and conditions, first data consisting of still and/or moving images that meet those commands and conditions; retrieving from the database, according to the entered and/or preset search commands and conditions, second data that meet them; comparing the first data with the second data according to the entered and/or preset comparison commands and conditions, and generating from the second data third data that meet those commands and conditions; and outputting the first data, the second data, and/or the third data to the display unit.

In one aspect, the method further comprises fusing plural records of the first data, plural records of the second data, plural records of the third data, or any single or plural records of the first, second, and third data with one another to generate fourth data, and displaying the fourth data through the display unit.

In another aspect, the method further comprises comparing, according to the entered and/or preset comparison commands and conditions, the degree of similarity among plural records of the first, second, and/or third data, or between any single or plural records of them, and displaying the similarity results through the display unit. Preferably, the comparison conditions are the degree of approximation in size, color, gray scale, color gradient, or position of a specified still image region.

In another aspect, the method is applied to a plurality of data processing devices interconnected through a network or a wireless communication network. The method further comprises: after the data processing device receives data including text, symbols, operating commands, still images, and/or moving images from another data processing device, performing the capture, search, comparison, and/or fusion of that data; and transmitting the captured, searched, compared, and/or fused data through the network or wireless communication network back to the other data processing device, where it is displayed on that device's display unit.

In another aspect, the method further comprises: identifying the facial image data in the first, second, third, and/or fourth data; extracting the feature points in the identified facial image data; locating the feature points; deforming the feature points of the identified facial image data according to the entered text, symbols, and/or operating commands; and storing in the database the identified facial image data and/or the feature points, as well as the first, second, third, and/or fourth data deformed by the image deformation module.

In another aspect, the facial image data in one of the first, second, third, and/or fourth data is taken as base facial image data, and the facial image data identified in the other data is fused into the base facial image data according to the fusion ratio set by the entered text, symbols, and/or operating commands.

In another aspect, the method further comprises identifying the facial image data in the first, second, third, and/or fourth data, or further extracting and locating the feature points of the identified facial image data, and adjusting the facial image data according to the adjustment range, content, ratio, and/or feature points to be adjusted, as set by the entered text, symbols, and/or operating commands.

In another aspect, the method further comprises: identifying the facial image data in the first, second, third, and/or fourth data; extracting the feature points of the identified facial image data; locating the feature points; fusing the feature points of the facial image data of the first, second, third, or fourth data with the feature points of the other identified facial image data according to the fusion ratio set by the entered text, symbols, and/or operating commands; and forming new facial image data whose feature points have been fused.

Compared with the prior art, the image matching, deformation, fusion, and synthesis system and method of the present invention can perform the capture, search, comparison, and/or fusion of data through a single data processing device or a plurality of data processing devices, thereby providing value-added services of data search and data matching.

DETAILED DESCRIPTION

The following specific embodiments illustrate the implementation of the present invention, and those skilled in the art can readily understand other advantages and effects of the invention from the disclosure of this specification. The invention may also be implemented or applied through other embodiments, and the details of this specification may be modified and changed based on different viewpoints and applications without departing from the spirit of the invention.

First Embodiment

FIG. 1 shows the application architecture of the first embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. In this embodiment, the system is applied to a data processing device 20, which may be, for example but not limited to, a desktop or notebook computer, a personal digital assistant, a smart phone, or a 3G or 3.5G mobile phone. The data processing device 20 has a display unit 21 for presenting to the user the text, symbols, or still/moving images processed by the system. The system comprises a data input module 111, a database 112, a data capture module 113, a data search module 114, a data comparison module 115, and a data output module 116.
The data input module 111 receives data transmitted to the data processing device 20 from an external device (not shown); the data include text, symbols, operating commands, still images, and/or moving images. In this embodiment, the description takes as an example still images received from an external storage device through the data input module 111; in other embodiments, moving images may also be received. Moreover, if the data processing device 20 further includes drawing or image generation and/or image processing applications, the user may generate still or moving image data by entering text, symbols, and/or other operating commands through the data input module 111. The external device may be, for example but not limited to, a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another storage medium or data processing device connected to the data processing device 20.

The database 112 stores the still and/or moving image data received from the external device through the data input module 111. In other embodiments, the database 112 may also store the still or moving image data generated by the user through drawing or image generation and/or image processing applications by entering text, symbols, and/or other operating commands via the data input module 111. The database 112 may be built into the data processing device 20 and/or externally connected to it, and may be located, for example but not limited to, on a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another data storage medium or data processing device connected to the data processing device 20.

The data capture module 113 retrieves from the database 112 first data, namely still and/or moving images that meet the operating commands and conditions received through the data input module 111. As noted above, this embodiment takes as an example first still image data retrieved from the database 112 according to the entered operating commands and conditions. Specifically, the conditions may be, for example, the file name, storage date, file type, and/or storage location of a specific still image.

The data search module 114 retrieves from the database 112 second data that meet the search commands and conditions entered through the data input module 111 and/or preset search commands and conditions. In this embodiment, second still image data meeting those commands and conditions are retrieved from the database 112 as an example.

The data comparison module 115 compares the first data (the first still image data) captured by the data capture module 113 with the second data (the second still image data) retrieved by the data search module 114, according to the comparison commands and conditions entered through the data input module 111 and/or preset comparison commands and conditions, and generates from the second data third data that meet those commands and conditions. In this embodiment, the third data may be still image data or moving image data, depending on the content of the second data. In other embodiments, the third data may be, for example, the file name, storage date, file type, and/or storage location of the second data that meet the comparison commands and conditions, or a combination of the still and/or moving image data and the corresponding file name, storage date, file type, and/or storage location. The first, second, and/or third data may each be a single record or plural records; for example, the first data may be a single record while the second data are plural records, and the third data are then the single or plural records that meet the comparison commands and conditions.

The data output module 116 outputs the first data, the second data, and/or the third data to the display unit 21 for display. As described above, depending on its type, the third data may be still or moving image data, matching file names, storage dates, file types, and/or storage locations, or a combination of both.
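The capture, search, and comparison flow described above can be pictured with a short sketch. The sketch below is only an illustrative outline under assumed data structures (a simple in-memory catalogue of image records keyed by file name, storage date, and file type); the patent does not specify any particular implementation, and the function and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ImageRecord:
    # Hypothetical metadata for one still/moving image stored in the database 112.
    file_name: str
    storage_date: str        # e.g. "2008-06-17"
    file_type: str           # e.g. "jpg", "mp4"
    storage_location: str

Database = List[ImageRecord]
Condition = Callable[[ImageRecord], bool]

def capture_first_data(db: Database, condition: Condition) -> List[ImageRecord]:
    """Illustrates the data capture module 113: retrieve first data matching the operating conditions."""
    return [rec for rec in db if condition(rec)]

def search_second_data(db: Database, condition: Condition) -> List[ImageRecord]:
    """Illustrates the data search module 114: retrieve second data matching the search conditions."""
    return [rec for rec in db if condition(rec)]

def compare_to_third_data(first: List[ImageRecord],
                          second: List[ImageRecord],
                          matches: Callable[[ImageRecord, ImageRecord], bool]) -> List[ImageRecord]:
    """Illustrates the data comparison module 115: keep the second-data records that match the first data."""
    return [s for s in second if any(matches(f, s) for f in first)]

# Example usage with hypothetical conditions (file name, file type, shared storage location).
db: Database = [
    ImageRecord("portrait_01.jpg", "2008-06-17", "jpg", "/images"),
    ImageRecord("portrait_02.jpg", "2008-06-18", "jpg", "/images"),
    ImageRecord("clip_01.mp4", "2008-06-18", "mp4", "/videos"),
]
first = capture_first_data(db, lambda r: r.file_name == "portrait_01.jpg")
second = search_second_data(db, lambda r: r.file_type == "jpg")
third = compare_to_third_data(first, second,
                              lambda f, s: f.storage_location == s.storage_location)
# The data output module 116 would then send first, second and/or third to the display unit 21.
print([r.file_name for r in third])
```
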
FIG. 2 is a flowchart of the image matching, deformation, fusion, and synthesis method of the present invention as executed by the first embodiment of the system. As shown in the figure, in step S10, data transmitted from an external device to the data processing device are received; in this embodiment, the data include text, symbols, operating commands, still images, and/or moving images. As described above, if the data processing device includes drawing or image generation and/or image processing applications, the user may also generate still or moving image data by entering text, symbols, and/or other operating commands. The external device may be, for example but not limited to, a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another storage medium or data processing device connected to the data processing device. The method then proceeds to step S11.

In step S11, the still and/or moving image data received from the external device are stored in the database. In other embodiments, the database may also store the still or moving image data generated by the user through drawing or image generation and/or image processing applications. The database may be built into the data processing device and/or externally connected to it, and may be located, for example, on a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another data storage medium or data processing device. The method then proceeds to step S12.

In step S12, first data consisting of still and/or moving images that meet the received operating commands and conditions are retrieved from the database. In this embodiment, first still image data meeting the entered operating commands and conditions are retrieved as an example; the conditions may be, for example, the file name, storage date, file type, and/or storage location of a specific still image. The method then proceeds to step S13.

In step S13, second data that meet the entered and/or preset search commands and conditions are retrieved from the database. In this embodiment, second still image data are retrieved as an example. The method then proceeds to step S14.

In step S14, according to the entered and/or preset comparison commands and conditions, the retrieved first data (the first still image data) are compared with the retrieved second data (the second still image data), and third data that meet the comparison commands and conditions are generated from the second data. The third data may be still or moving image data, depending on the content of the second data; in other embodiments, they may be the file name, storage date, file type, and/or storage location of the matching second data, or a combination of the image data and the corresponding metadata. The first, second, and/or third data may each be a single record or plural records. The method then proceeds to step S15.

In step S15, the first data, the second data, and/or the third data are output to the display unit of the data processing device for display.

Second Embodiment

FIG. 3 shows the application architecture of the second embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. The application architecture of the second embodiment is substantially the same as that of the first embodiment; the difference is that this embodiment further includes an image fusion module 117, which fuses plural records of the first data, plural records of the second data, plural records of the third data, or any single or plural records of the first, second, and third data with one another, thereby generating fourth data. In this embodiment, the first, second, and third data are preferably still image data, and the fourth data are the still image data obtained by fusing them.

Preferably, the image fusion module 117 allows the user to enter, through the data input module 111, the degree of fusion among the first, second, and/or third data. Taking still image data as an example, the fusion may be set, for instance, to complete fusion, fifty percent fusion, or thirty percent fusion of plural records; the lower the fusion percentage, the more distinct the image features of each individual source remain.

FIG. 4 is a flowchart of the method as executed by the second embodiment of the system. The flow is substantially the same as that of the first embodiment; the difference is that after step S15, the method further includes step S16, in which plural records of the first data, plural records of the second data, plural records of the third data, or any single or plural records of the first, second, and third data are fused with one another to generate fourth data. Preferably, the fusion may be performed selectively according to the degree of fusion entered by the user, such as complete fusion, fifty percent fusion, or thirty percent fusion. The method then proceeds to step S17, in which the fourth data are displayed through the display unit.
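As a rough illustration of the fusion degree described above, the sketch below blends two still images at a user-set fusion percentage by weighted averaging of pixel values. This is only one possible reading of the embodiment, assuming the images are available as equally sized NumPy arrays; the patent does not prescribe a specific blending formula, and the mapping from percentage to blending weight is an assumption.

```python
import numpy as np

def fuse_images(base: np.ndarray, other: np.ndarray, fusion_percent: float) -> np.ndarray:
    """Blend `other` into `base` at the given fusion degree (0-100).

    A degree of 100 yields complete fusion (an equal mix of both sources);
    a lower degree keeps the features of each source image more distinct.
    """
    if base.shape != other.shape:
        raise ValueError("both images must have the same size for this simple sketch")
    # Map 0-100 % onto a blending weight of 0.0-0.5 for the second image (assumed mapping).
    w = (fusion_percent / 100.0) * 0.5
    fused = (1.0 - w) * base.astype(np.float32) + w * other.astype(np.float32)
    return fused.clip(0, 255).astype(np.uint8)

# Example: fuse two 64x64 RGB test images at 50 % and 30 %, producing fourth data.
img_a = np.full((64, 64, 3), 200, dtype=np.uint8)
img_b = np.full((64, 64, 3), 40, dtype=np.uint8)
fourth_data_50 = fuse_images(img_a, img_b, 50)
fourth_data_30 = fuse_images(img_a, img_b, 30)
```
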
Third Embodiment

In this embodiment, the application architecture of the image matching, deformation, fusion, and synthesis system is substantially the same as that of the first embodiment. The difference is that the data comparison module 115 can further compare, according to the comparison commands and conditions entered through the data input module 111 and/or preset comparison commands and conditions, the degree of similarity among plural records of the first data, among plural records of the second data, among plural records of the third data, or between any single or plural records of the first, second, and third data, and the similarity results can be displayed through the data output module 116. Taking still image data as an example, the comparison conditions may be, for instance, the degree of approximation in size, color, gray scale, color gradient, or position of a specified still image region.

FIG. 5 is a flowchart of the method as executed by the third embodiment of the system. The flow is substantially the same as that of the first embodiment; the difference is that after step S15, the method further includes step S18, in which the degree of similarity among plural records of the first, second, and/or third data, or between any single or plural records of them, is compared according to the entered and/or preset comparison commands and conditions. The method then proceeds to step S19, in which the similarity results are displayed through the display unit.
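The similarity conditions named above (size, color, gray scale, and position of an image region) can be turned into a simple score, for example as in the sketch below. The equal weighting and the normalisation constants are assumptions made purely for illustration; the embodiment does not fix a particular similarity measure.

```python
import numpy as np

def region_similarity(region_a: np.ndarray, pos_a: tuple,
                      region_b: np.ndarray, pos_b: tuple) -> float:
    """Return a rough 0-1 similarity score between two still image regions.

    The score combines the approximation of region size, mean color,
    mean gray level, and position, as listed in the comparison conditions.
    """
    # Size similarity: ratio of the smaller area to the larger area.
    area_a = region_a.shape[0] * region_a.shape[1]
    area_b = region_b.shape[0] * region_b.shape[1]
    size_sim = min(area_a, area_b) / max(area_a, area_b)

    # Color similarity: distance between mean RGB values, normalised by the maximum RGB distance.
    color_sim = 1.0 - np.linalg.norm(region_a.mean(axis=(0, 1)) - region_b.mean(axis=(0, 1))) / 441.7

    # Gray-scale similarity: difference of mean gray levels.
    gray_sim = 1.0 - abs(region_a.mean() - region_b.mean()) / 255.0

    # Position similarity: distance between region origins, assuming a 1000-pixel reference scale.
    pos_sim = 1.0 - min(np.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]) / 1000.0, 1.0)

    # Equal weighting of the four conditions (an arbitrary illustrative choice).
    return max(0.0, (size_sim + color_sim + gray_sim + pos_sim) / 4.0)
```
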
Fourth Embodiment

FIG. 6 shows the application architecture of the fourth embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. This embodiment may be based on the application architecture of the first, second, or third embodiment. Here, the data processing device 20 is further connected through a network to another data processing device 20', which may be, for example, a server built by a network service provider, a portal site operator, or a provider of application websites. There may be a plurality of data processing devices 20 interconnected through the network to transmit data to one another; the data processing devices 20 and 20' may form a client-server network architecture or a distributed network architecture.

The data processing device 20' may include the data input module 111', the database 112', the data capture module 113', the data search module 114', the data comparison module 115', the data output module 116', and/or the image fusion module 117' of the first, second, or third embodiment. In other embodiments, the data processing device 20 may still also be provided with its own data input module 111, database 112, data capture module 113, data search module 114, data comparison module 115, data output module 116, and/or image fusion module 117; in other words, the processing may be performed by the data processing device 20, by the data processing device 20', or distributed between them.

The data attributes and content stored in the database 112' may be the same as those of the database 112. In practical applications, however, if the data processing device 20' is built by a network service provider, a portal site operator, or a provider of application websites, or if there are a plurality of such devices, the capacity and content of its database will be far greater than those of the database 112 of the data processing device 20. Therefore, building the data capture module 113', the data search module 114', the data comparison module 115', and/or the image fusion module 117' on the data processing device 20' side makes it easier to obtain the required data.

In actual operation, when the data input module 111' of the data processing device 20' receives data including text, symbols, operating commands, still images, and/or moving images from the data processing device 20, it performs the capture, search, comparison, and/or fusion of the data as described in the first, second, or third embodiment. After the capture, search, comparison, and/or fusion is completed, the data output module 116' transmits the resulting data through the network to the data processing device 20, where they are displayed on the display unit 21.

Fifth Embodiment

FIG. 7 shows the application architecture of the fifth embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. The application architecture of this embodiment is substantially the same as that of the fourth embodiment and may likewise be based on the first, second, or third embodiment. The main difference from the fourth embodiment is that the data processing device 20 is a smart phone or a 3G or 3.5G mobile phone, the data processing device 20' is a server of a provider of mobile communication or other value-added services, and the two communicate through a wireless communication network.

In actual operation, the user uses the data processing device 20 to transmit the data to be captured, searched, compared, and/or fused through the wireless communication network to the data processing device 20'. After receiving the data, which include text, symbols, operating commands, still images, and/or moving images, the data processing device 20' performs the capture, search, comparison, and/or fusion as described in the first, second, or third embodiment, and, once completed, transmits the resulting data through the data output module 116' and the wireless communication network back to the data processing device 20, where they are displayed on the display unit 21.

Sixth Embodiment

FIG. 8 shows the application architecture of the sixth embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. This embodiment may be based on the application architecture of any of the first through fifth embodiments. Taking the fifth embodiment as an example, the system further includes a setting module 118, which provides a setting interface on the display unit 21 of the data processing device 20 (such as a smart phone or a 3G or 3.5G mobile phone), allowing the user to configure the capture, search, comparison, and/or fusion functions applied to the first, second, third, and/or fourth data. In other embodiments, the setting module may also be selectively built in the data processing device 20'.

Seventh Embodiment

FIG. 9 shows the application architecture of the seventh embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. This embodiment may be based on any of the first through sixth embodiments. Taking the sixth embodiment as an example, the system further includes a transmission module 119, which transmits the captured, searched, compared, and/or fused first, second, third, and/or fourth data from the data processing device 20 (such as a smart phone or a 3G or 3.5G mobile phone) through the wireless communication network to other data processing devices 20', and receives from other data processing devices 20', through the wireless communication network, first, second, third, and/or fourth data that have been captured, searched, compared, and/or fused. In other embodiments, the transmission module may also be selectively built in the data processing device 20'.

Eighth Embodiment

FIG. 10 shows the application architecture of the eighth embodiment of the image matching, deformation, fusion, and synthesis system of the present invention. This embodiment may be based on any of the first through seventh embodiments. In this embodiment, the system further includes an image feature capture module 120 and an image deformation module 121.

The image feature capture module 120 identifies the facial image data in the first, second, third, and/or fourth data, extracts the feature points of the identified facial image data, and locates those feature points. The first, second, third, and/or fourth data may be retrieved from the database 112 or 112'.

The image deformation module 121 deforms the feature points of the facial image data identified by the image feature capture module 120, according to the text, symbols, and/or operating commands transmitted to the data processing device 20 through the data input module 111. The deformation may be, for example but not limited to, enlarging, shrinking, rotating, or warping into a specific geometric shape. Preferably, the image deformation module 121 can deform a single record of the first, second, third, and/or fourth data, or successively deform plural records of them.

Preferably, the facial image data identified by the image feature capture module 120, the feature points extracted from the identified facial image data, and the first, second, third, and/or fourth data deformed by the image deformation module 121 are stored in the database 112 or 112'. In other embodiments, the image feature capture module and the image deformation module may also be selectively built in the data processing device 20'.

FIG. 11 is a flowchart of the method as executed by the eighth embodiment of the system. In step S20, the image feature capture module identifies the facial image data in the first, second, third, and/or fourth data; the method then proceeds to step S21. In step S21, the feature points of the identified facial image data are extracted; the method then proceeds to step S22. In step S22, the feature points are located; the method then proceeds to step S23. In step S23, the feature points of the identified facial image data are deformed according to the text, symbols, and/or operating commands transmitted to the data processing device through the data input module; the deformation may be, for example but not limited to, enlarging, shrinking, rotating, or warping into a specific geometric shape. The method then proceeds to step S24. In step S24, the identified facial image data and/or the feature points, as well as the first, second, third, and/or fourth data deformed by the image deformation module, are stored in the database.
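The feature point handling of the eighth embodiment can be pictured with the sketch below, which assumes that a facial landmark detector is already available (here stubbed out with fixed coordinates) and shows a simple deformation of the located points, such as enlarging or rotating them about the face centre. The detector, the point layout, and the transformation are illustrative assumptions, not the patent's prescribed algorithm.

```python
import numpy as np

def extract_feature_points(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for the image feature capture module 120.

    A real system would run a facial landmark detector here; this sketch simply
    returns a few fixed (x, y) positions for eyes, nose, and mouth corners.
    """
    h, w = face_image.shape[:2]
    return np.array([[0.3 * w, 0.35 * h],    # left eye
                     [0.7 * w, 0.35 * h],    # right eye
                     [0.5 * w, 0.55 * h],    # nose tip
                     [0.35 * w, 0.75 * h],   # left mouth corner
                     [0.65 * w, 0.75 * h]])  # right mouth corner

def deform_feature_points(points: np.ndarray, scale: float = 1.0,
                          angle_deg: float = 0.0) -> np.ndarray:
    """Stand-in for the image deformation module 121: enlarge/shrink and rotate
    the located feature points about their centre, per the entered commands."""
    centre = points.mean(axis=0)
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (points - centre) @ rot.T * scale + centre

# Example: locate the points of a 200x200 face image, then enlarge them by 20 %
# and rotate them by 10 degrees before storing the result in the database.
face = np.zeros((200, 200, 3), dtype=np.uint8)
pts = extract_feature_points(face)
deformed_pts = deform_feature_points(pts, scale=1.2, angle_deg=10.0)
```
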
The module 118 is configured to provide a setting " interface for the user to set the capture, on the display unit 21 of the data processing device 20 such as a smart mobile phone, a third generation or a 5.3th generation mobile phone. Search, compare and/or integrate the functions of the first tribute, the second tribute, the second tribute and/or the fourth poor. In other embodiments of the invention, the setting module can also be selectively built into the data processing device 20'. Seventh Embodiment: Referring to Figure 9, there is shown a schematic diagram showing an application architecture of a seventh embodiment of the image matching, transformation, fusion and synthesis system of the present invention. This embodiment may change the basic application architecture for the first, second, third, fourth, fifth or sixth embodiment 24 110921 201001194. The third example, Yu Shilu #7 乂,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, , first (4) i 9 is used to capture, search, compare and / / = = poor, second, third and / or fourth information, as wide as ~, type mobile phone, third Generation or 3.5th generation mobile phone and other information. ϊΓο装置: widely transmitted by the wireless communication network to other data processing equipment, his data processing device 20' via the wireless communication network second capital: overtake, Search, comparison and/or integration of the first data, Bessie, the second data and / or the fourth data. In other embodiments of the invention, the transfer module can also be selectively placed in the bedding processing unit 2 . The eighth embodiment: FIG. 1 is a diagram showing an application architecture of the eighth embodiment of the image matching, smashing and synthesizing system of the present invention, which may be first, second, and third. The fourth, fifth, sixth or seventh example is to change the infrastructure. In the present invention, the combination, deformation, fusion and synthesis system of the present invention includes image feature extraction 4, Α 120 and image deformation module 121. The secondary image feature capture module 丨 2 is used to identify the facial image data in the first data, the second material, the second data, and/or the fourth data, and is used to take the face recognized by f The feature points in the image data, and the feature points:: 疋 position. The first data, the second data, the third data and/or the first data may be extracted from the database 112 or il2. 110921 25 201001194 The image module is used to capture the module 12 according to the characters, symbols and/or operation commands transmitted to the poor processing device 2G through the data input module m. The features in the recognized facial image data are processed in a shape, for example, but not limited to, enlarged, reduced, rotated, and twisted into a specific geometric shape. Preferably, the image deformation module L21 can be configured for a single first data, a second data, a third data, and/or a ancestor, and can also selectively continuously target a plurality of first- The second, third and/or fourth materials are modified. , the best ones, such as the feature points in the image data of the face data, the third data and/or the fourth data identified by the module 120, and the image passing through the image The first data, the second data, the third data, and/or the fourth asset are stored in the database 112 or 112. 
Please refer to Fig. 11, which is a flowchart of the image matching, deformation, fusion and synthesis method of the present invention as executed by the eighth embodiment of the system. In step S20, the facial image data in the first data, the second data, the third data and/or the fourth data is identified, and the process proceeds to step S21. In step S21, the feature points in the identified facial image data are extracted, and the process proceeds to step S22. In step S22, the feature points are positioned, and the process proceeds to step S23. In step S23, according to the characters, symbols and/or operation commands transmitted to the data processing device through the data input module, the feature points in the identified facial image data are deformed, for example, but not limited to, enlarged, reduced, rotated or twisted into a specific geometric shape, and the process proceeds to step S24. In step S24, the feature points of the identified facial image data and the facial image data deformed by the image deformation module are stored in the database.

Ninth Embodiment: This embodiment may take the eighth embodiment as its basic application architecture. In this embodiment, after the image feature extraction module 120 identifies the facial image data in the first data, the second data, the third data and/or the fourth data, the image fusion module 117 may take the facial image data in the first data, the second data, the third data or the fourth data as base facial image data and, according to the fusion ratio set by the characters, symbols and/or operation commands transmitted to the data processing device 20 through the data input module 111, fuse the other facial image data identified by the image feature extraction module 120 in the first data, the second data, the third data and/or the fourth data into the base facial image data according to the set fusion ratio. The fusion ratio may be, for example, but not limited to, a face-shape ratio and/or a texture ratio. The data used as the base facial image data, and the other facial image data identified in the first data, the second data, the third data and/or the fourth data, may be, for example, facial images of real persons or facial images of cartoon characters.

Please refer to Fig. 12, which is a flowchart of the image matching, deformation, fusion and synthesis method of the present invention as executed by the ninth embodiment of the system. In step S30, the image feature extraction module identifies the facial image data in the first data, the second data, the third data and/or the fourth data, and the process proceeds to step S31. In step S31, the facial image data in the first data, the second data, the third data or the fourth data is taken as the base facial image data, and the process proceeds to step S32. In step S32, according to the fusion ratio set by the characters, symbols and/or operation commands input through the data input module, the other facial image data identified by the image feature extraction module in the first data, the second data, the third data and/or the fourth data is fused into the base facial image data according to the set fusion ratio.
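A fusion ratio of the kind described for the image fusion module 117 can be pictured, in very simplified form, as a weighted blend of corresponding feature points and of an average colour. The following sketch is only an interpretation for illustration; the function names and the reduction of the texture ratio to an average colour are assumptions, not the patented method.

```python
# Illustrative sketch only: fusing a base face with another face according to a
# user-chosen fusion ratio. Faces are reduced to landmark lists plus an average colour.
Point = tuple[float, float]
Color = tuple[float, float, float]

def fuse_landmarks(base: list[Point], other: list[Point], shape_ratio: float) -> list[Point]:
    """Blend corresponding feature points; shape_ratio = 1.0 keeps the base shape."""
    assert len(base) == len(other), "landmark lists must correspond point-for-point"
    return [(bx * shape_ratio + ox * (1.0 - shape_ratio),
             by * shape_ratio + oy * (1.0 - shape_ratio))
            for (bx, by), (ox, oy) in zip(base, other)]

def fuse_color(base: Color, other: Color, texture_ratio: float) -> Color:
    """Blend an average colour as a crude stand-in for the texture ratio."""
    return tuple(b * texture_ratio + o * (1.0 - texture_ratio)
                 for b, o in zip(base, other))

# Example: keep 70 % of the base face shape and 50 % of its skin tone
base_pts  = [(30.0, 40.0), (70.0, 40.0), (50.0, 70.0)]
other_pts = [(28.0, 42.0), (75.0, 38.0), (52.0, 75.0)]
fused_pts = fuse_landmarks(base_pts, other_pts, shape_ratio=0.7)
fused_rgb = fuse_color((224.0, 190.0, 170.0), (180.0, 140.0, 120.0), texture_ratio=0.5)
```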
Tenth Embodiment: Please refer to Fig. 13, which is a schematic diagram of the application architecture of the tenth embodiment of the image matching, deformation, fusion and synthesis system of the present invention. This embodiment may take any of the first through ninth embodiments as its basic application architecture. In this embodiment, the system further includes an image adjustment module 122 which, after the image feature extraction module 120 identifies the facial image data in the first data, the second data, the third data and/or the fourth data, or further extracts the feature points from the facial image data and positions them, adjusts the facial image data according to the adjustment range, adjustment content, adjustment ratio and/or feature points to be adjusted set by the characters, symbols and/or operation commands transmitted to the data processing device 20 through the data input module. The adjustment range may be, for example, but not limited to, the facial features and/or the hairstyle; the adjustment content may be, for example, but not limited to, skin color, spots, lines, or the intensity or angle of light and shadow projection; the adjustment ratio may be, for example, but not limited to, the amplitude of the adjustment, the amplitude having a preset initial value and limit value, the initial value being an adjustment ratio of zero percent and the limit value being an adjustment ratio of one hundred percent. In other embodiments of the invention, the image adjustment module may also be selectively built into the data processing device 20'.

Please refer to Fig. 14, which is a flowchart of the image matching, deformation, fusion and synthesis method of the present invention as executed by the tenth embodiment of the system. In step S40, the facial image data in the first data, the second data, the third data and/or the fourth data is identified, and the feature points in the facial image data are further extracted and positioned. In step S41, the facial image data is adjusted according to the adjustment range, adjustment content, adjustment ratio and/or feature points to be adjusted set by the characters, symbols and/or operation commands transmitted through the data input module to the data processing device.
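The adjustment ratio of the image adjustment module 122 runs from an initial value of zero percent to a limit value of one hundred percent. One simple reading of that range, shown below purely for illustration, is a linear interpolation between a current attribute value and a target value; the attribute names and target values used here are invented for the example.

```python
# Illustrative sketch only: applying an adjustment ratio between 0 % (initial value)
# and 100 % (limit value) to one facial attribute, echoing the image adjustment module 122.
def adjust(current: float, limit: float, ratio_percent: float) -> float:
    """Linearly move 'current' toward 'limit' by ratio_percent (clamped to 0-100)."""
    ratio = max(0.0, min(100.0, ratio_percent)) / 100.0
    return current + (limit - current) * ratio

face = {"eye_width_px": 32.0, "skin_brightness": 0.55}
# Adjust eye width 40 % of the way toward a 20 % larger width,
# and skin brightness 100 % of the way toward its allowed maximum.
face["eye_width_px"]    = adjust(face["eye_width_px"], face["eye_width_px"] * 1.2, 40.0)
face["skin_brightness"] = adjust(face["skin_brightness"], 0.75, 100.0)
```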
Eleventh Embodiment: This embodiment may take the eighth, ninth or tenth embodiment as its basic application architecture. In this embodiment, when the image feature extraction module 120 identifies the facial image data in a plurality of records of the first data, the second data, the third data and/or the fourth data, further extracts the feature points from each of the facial image data and positions those feature points, the image fusion module 117 may fuse the feature points of the facial image data in the first data, the second data, the third data and/or the fourth data, according to the fusion ratio set by the characters, symbols and/or operation commands transmitted to the data processing device 20, with the feature points of the other facial image data identified by the image feature extraction module 120 in the first data, the second data, the third data and/or the fourth data, according to the set fusion ratio, so as to form new facial image data whose feature points have been fused. In actual operation, first, the facial image data in the first data, the second data, the third data or the fourth data is identified. Next, the feature points in the identified facial image data are extracted. The feature points are then positioned. Then, according to the fusion ratio set by the input characters, symbols and/or operation commands, the feature points of the facial image data in the first data, the second data, the third data and/or the fourth data are fused with the feature points of the other facial image data identified in the first data, the second data, the third data or the fourth data, according to the set fusion ratio. Finally, new facial image data whose feature points have been fused is formed.

Twelfth Embodiment: This embodiment may take any of the first through eleventh embodiments as its basic application architecture. In this embodiment, after receiving data transmitted to the data processing device 20 or 20', the data input module 111 or 111' may selectively store the data in the database 112 or 112', or may hold it only temporarily in the data processing device 20 or 20'; that is, once the processing of the data received by the data input module 111 or 111' has been completed, or the data processing device 20 or 20' has been shut down, the data received by the data input module 111 or 111' no longer exists.

The above embodiments are merely illustrative of the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, the scope of protection of the invention shall be as set forth in the appended claims.
Brief Description of the Drawings

Fig. 1 is a schematic diagram of the application architecture of the first embodiment of the image matching, deformation, fusion and synthesis system of the present invention;
Fig. 2 is a flowchart of the image matching, deformation, fusion and synthesis method of the present invention as executed by the first embodiment of the system;
Fig. 3 is a schematic diagram of the application architecture of the second embodiment of the system;
Fig. 4 is a flowchart of the method as executed by the second embodiment of the system;
Fig. 5 is a flowchart of the method as executed by the third embodiment of the system;
Fig. 6 is a schematic diagram of the application architecture of the fourth embodiment of the system;
Fig. 7 is a schematic diagram of the application architecture of the fifth embodiment of the system;
Fig. 8 is a schematic diagram of the application architecture of the sixth embodiment of the system;
Fig. 9 is a schematic diagram of the application architecture of the seventh embodiment of the system;
Fig. 10 is a schematic diagram of the application architecture of the eighth embodiment of the system;
Fig. 11 is a flowchart of the method as executed by the eighth embodiment of the system;
Fig. 12 is a flowchart of the method as executed by the ninth embodiment of the system;
Fig. 13 is a schematic diagram of the application architecture of the tenth embodiment of the system; and
Fig. 14 is a flowchart of the method as executed by the tenth embodiment of the system.

Description of the Main Reference Numerals

20, 20'    data processing device
21         display unit
111, 111'  data input module
112, 112'  database
113, 113'  data capture module
114, 114'  data search module
115, 115'  data comparison module
116, 116'  data output module
117, 117'  image fusion module
118        setting module
119        transmission module
120        image feature extraction module
121        image deformation module
122        image adjustment module
S10–S19    steps
S20–S24    steps
S30–S32    steps
S40, S41   steps
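For orientation before the claims, the similarity conditions recited there (size, color, grayscale, color gradient or position of a still image region) can be approximated by very simple per-image statistics. The sketch below is an illustrative stand-in only and is not the comparison method defined by the patent; the image representation and scoring weights are assumptions made for the example.

```python
# Illustrative sketch only: a rough similarity score between two still images based on
# size and mean intensity, and a ranking of candidates (a crude "third data").
Image = list[list[int]]   # 2-D list of grey levels in 0-255

def mean_intensity(img: Image) -> float:
    pixels = [p for row in img for p in row]
    return sum(pixels) / len(pixels)

def similarity(a: Image, b: Image) -> float:
    """Return a score in [0, 1]; 1.0 means identical size and identical mean grey level."""
    size_a, size_b = len(a) * len(a[0]), len(b) * len(b[0])
    size_score = min(size_a, size_b) / max(size_a, size_b)
    grey_score = 1.0 - abs(mean_intensity(a) - mean_intensity(b)) / 255.0
    return 0.5 * size_score + 0.5 * grey_score

def best_matches(first: Image, candidates: list[Image], top_n: int = 3) -> list[int]:
    """Indices of the candidate images most similar to 'first'."""
    ranked = sorted(range(len(candidates)),
                    key=lambda i: similarity(first, candidates[i]),
                    reverse=True)
    return ranked[:top_n]

# Example with two tiny 2x2 images
img1 = [[10, 20], [30, 40]]
img2 = [[12, 22], [28, 38]]
print(similarity(img1, img2), best_matches(img1, [img2, [[200, 200], [200, 200]]]))
```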

Claims (1)

1. An image matching, deformation, fusion and synthesis system, applied to a data processing device having a display unit, the system comprising: a data input module for receiving data transmitted from an external device to the data processing device; a database for storing the still-image and/or moving-image data received from the external device through the data input module; a data capture module for capturing, from the database, first data of still images and/or moving images that matches the operation commands and conditions received through the data input module; a data search module for retrieving, from the database, second data that matches the search commands and conditions input through the data input module and/or preset search commands and conditions; a data comparison module for comparing the first data with the second data according to the comparison commands and conditions input through the data input module and/or preset comparison commands and conditions, and for generating therefrom, out of the second data, third data that matches the input and/or preset comparison commands and conditions; and a data output module for outputting the first data, the second data and/or the third data to the display unit.

2. The system of claim 1, wherein the data processing device is a computer, a notebook computer, a personal digital assistant, a smart mobile phone, or a third-generation or 3.5th-generation mobile phone.

3. The system of claim 1, wherein the data comprises characters, symbols, operation commands, still images and/or moving images.

4. The system of claim 3, wherein the data processing device further comprises a drawing or image generation and/or image processing application, for the data input module to generate still or moving image data by means of input characters, symbols and/or other operation commands.

5. The system of claim 4, wherein the database further stores still or moving image data generated by the user with the drawing or image generation and/or image processing application by means of characters, symbols and/or other operation commands input through the data input module.

6. The system of claim 1, wherein the external device is a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another data processing device connected to the data processing device.

7. The system of claim 1, wherein the database is built into the data processing device or is externally connected to the data processing device.

8. The system of claim 1, wherein the database is a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another data processing device connected to the data processing device.

9. The system of claim 1, wherein the condition is the file name, storage date, file type and/or data storage location of a specific still image.

10. The system of claim 1, wherein the third data is the file name, storage date, file type and/or data storage location of the second data that matches the comparison commands and conditions and/or the preset comparison commands and conditions.

11. The system of claim 1, wherein the third data is still image data, moving image data, the file name, storage date, file type and/or data storage location of the second data that matches the comparison commands and conditions and/or the preset comparison commands and conditions, or a combination of still image and/or moving image data together with the file name, storage date, file type and/or data storage location of the corresponding still image and/or moving image data.

12. The system of claim 1, wherein the first data, the second data and/or the third data is a single record or a plurality of records.

13. The system of claim 12, wherein the first data is a single record, the second data is a plurality of records, and the third data is a single record or a plurality of records matching the comparison commands and conditions and/or the preset comparison commands and conditions.

14. The system of claim 1, wherein the data comparison module is further used to perform similarity comparison, according to the comparison commands and conditions input through the data input module and/or the preset comparison commands and conditions, among a plurality of records of the first data, among a plurality of records of the second data, among a plurality of records of the third data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the second data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the third data, and between a single record or a plurality of records of the second data and a single record or a plurality of records of the third data, and to display the similarity comparison results through the data output module.

15. The system of claim 14, wherein the condition of the comparison is the degree of approximation of the size, color, grayscale, color gradient or position of a specific still image region.

16. The system of claim 1, further comprising an image fusion module for fusing among a plurality of records of the first data, among a plurality of records of the second data, among a plurality of records of the third data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the second data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the third data, or between a single record or a plurality of records of the second data and a single record or a plurality of records of the third data, so as to generate fourth data.

17. The system of claim 16, wherein the image fusion module is further used to allow the user to input, through the data input module, the degree of fusion among the first data, the second data and/or the third data.

18. The system of claim 1, further comprising a setting module for providing a setting interface on the display unit of the data processing device, allowing the user to configure the capture, search, comparison and/or fusion functions applied to the first data, the second data, the third data and/or the fourth data.

19. The system of claim 1, which is applied to a plurality of data processing devices, the plurality of data processing devices being connected to one another through a network or a wireless communication network.

20. The system of claim 19, which is used to transmit the captured, searched, compared and/or fused first data, second data, third data and/or fourth data from the data processing device to other data processing devices through the network or the wireless communication network, and to receive from the other data processing devices, through the network or the wireless communication network, first data, second data, third data and/or fourth data that have been captured, searched, compared and/or fused.

21. The system of claim 1 or 16, further comprising: an image feature extraction module for identifying facial image data in the first data, the second data, the third data and/or the fourth data, extracting the feature points in the identified facial image data, and positioning the feature points; and an image deformation module for deforming, according to the characters, symbols and/or operation commands transmitted to the data processing device through the data input module, the feature points in the facial image data identified by the image feature extraction module.

22. The system of claim 21, wherein the deformation performed by the image deformation module is enlarging, reducing, rotating and/or twisting into a specific geometric shape.

23. The system of claim 21, wherein the image deformation module deforms a single record of the first data, the second data, the third data and/or the fourth data, and/or continuously deforms a plurality of records of the first data, the second data, the third data and/or the fourth data.

24. The system of claim 21, wherein the feature points of the facial image data and/or the facial image data identified by the image feature extraction module in the first data, the second data, the third data and/or the fourth data, as well as the first data, the second data, the third data and/or the fourth data deformed by the image deformation module, are stored in the database.

25. The system of claim 21, wherein, after the image feature extraction module identifies the facial image data in the first data, the second data, the third data and/or the fourth data, the image fusion module is further used to take the facial image data in the first data, the second data, the third data or the fourth data as base facial image data and, according to the fusion ratio set by the characters, symbols and/or operation commands transmitted to the data processing device through the data input module, fuse the other facial image data identified by the image feature extraction module in the first data, the second data, the third data and/or the fourth data into the base facial image data according to the set fusion ratio.

26. The system of claim 25, wherein the fusion ratio is a face-shape ratio and/or a texture ratio.

27. The system of claim 21, further comprising an image adjustment module which, after the image feature extraction module identifies the facial image data in the first data, the second data, the third data and/or the fourth data, or further extracts the feature points from the facial image data and positions the feature points, adjusts the facial image data according to the adjustment range, adjustment content, adjustment ratio and/or feature points to be adjusted set by the characters, symbols and/or operation commands transmitted to the data processing device through the data input module.

28. The system of claim 27, wherein the adjustment range is the facial features and/or the hairstyle; the adjustment content is skin color, spots, lines, or the intensity or angle of light and shadow projection; and the adjustment ratio is the amplitude of the adjustment, the amplitude having a preset initial value and limit value, the initial value being an adjustment ratio of zero percent and the limit value being an adjustment ratio of one hundred percent.

29. The system of claim 21, wherein, when the image feature extraction module identifies the facial image data in a plurality of records of the first data, the second data, the third data and/or the fourth data, extracts the feature points in the facial image data and positions the feature points, the image fusion module is further used to fuse the feature points of the facial image data in the first data, the second data, the third data and/or the fourth data, according to the fusion ratio set by the characters, symbols and/or operation commands transmitted to the data processing device through the data input module, with the feature points of the other facial image data identified by the image feature extraction module in the first data, the second data, the third data and/or the fourth data, according to the set fusion ratio, so as to form new facial image data whose feature points have been fused.

30. An image matching, deformation, fusion and synthesis method, applied to a data processing device having a display unit, the method comprising: receiving data transmitted from an external device to the data processing device; storing the still-image and/or moving-image data received from the external device through the data input module into a database; capturing, from the database, first data of still images and/or moving images that matches the input operation commands and conditions; retrieving, from the database, second data that matches the input search commands and conditions and/or preset search commands and conditions; comparing the first data with the second data according to the input comparison commands and conditions and/or preset comparison commands and conditions, and generating therefrom, out of the second data, third data that matches the input and/or preset comparison commands and conditions; and outputting the first data, the second data and/or the third data to the display unit.

31. The method of claim 30, further comprising: fusing among a plurality of records of the first data, among a plurality of records of the second data, among a plurality of records of the third data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the second data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the third data, or between a single record or a plurality of records of the second data and a single record or a plurality of records of the third data, so as to generate fourth data; and displaying the fourth data through the display unit.

32. The method of claim 30, further comprising: performing similarity comparison, according to the input comparison commands and conditions and/or preset comparison commands and conditions, among a plurality of records of the first data, among a plurality of records of the second data, among a plurality of records of the third data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the second data, between a single record or a plurality of records of the first data and a single record or a plurality of records of the third data, and between a single record or a plurality of records of the second data and a single record or a plurality of records of the third data; and displaying the similarity comparison results through the display unit.

33. The method of claim 32, wherein the condition of the comparison is the degree of approximation of the size, color, grayscale, color gradient or position of a specific still image region.

34. The method of claim 30, which is applied to a plurality of data processing devices, the plurality of data processing devices being connected to one another through a network or a wireless communication network.

35. The method of claim 34, further comprising: when the data processing device receives data including characters, symbols, operation commands, still images and/or moving images from another data processing device, performing capture, search, comparison and/or fusion of the data; and transmitting the captured, searched, compared and/or fused data back to the other data processing device through the network or the wireless communication network and displaying it on the display unit of the other data processing device.

36. The method of claim 30, wherein the data processing device is a computer, a notebook computer, a personal digital assistant, a smart mobile phone, or a third-generation or 3.5th-generation mobile phone.

37. The method of claim 30, wherein the data comprises characters, symbols, operation commands, still images and/or moving images.

38. The method of claim 37, wherein the data processing device further comprises a drawing or image generation and/or image processing application for generating still or moving image data by means of characters, symbols and/or other operation commands input through the data input module.

39. The method of claim 38, wherein the database further stores still or moving image data generated by the user with the drawing or image generation and/or image processing application by means of characters, symbols and/or other operation commands input through the data input module.

40. The method of claim 30, wherein the external device is a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another data processing device connected to the data processing device.

41. The method of claim 30, wherein the database is built into the data processing device or is externally connected to the data processing device.

42. The method of claim 30, wherein the database is a floppy disk device, a hard disk device, an optical storage device, a flash memory storage device, a photographic device, or another data processing device connected to the data processing device.

43. The method of claim 30, wherein the condition is the file name, storage date, file type and/or data storage location of a specific still image.

44. The method of claim 30, wherein the third data is the file name, storage date, file type and/or data storage location of the second data that matches the comparison commands and conditions and/or the preset comparison commands and conditions.

45. The method of claim 30, wherein the third data is still image data, moving image data, the file name, storage date, file type and/or data storage location of the second data that matches the comparison commands and conditions and/or the preset comparison commands and conditions, or a combination of still image and/or moving image data together with the file name, storage date, file type and/or data storage location of the corresponding still image and/or moving image data.

46. The method of claim 30, wherein the first data, the second data and/or the third data is a single record or a plurality of records.

47. The method of claim 46, wherein the first data is a single record, the second data is a plurality of records, and the third data is a single record or a plurality of records matching the comparison commands and conditions and/or the preset comparison commands and conditions.

48. The method of claim 30 or 31, further comprising: identifying facial image data in the first data, the second data, the third data and/or the fourth data; extracting the feature points in the identified facial image data; positioning the feature points; deforming the feature points in the identified facial image data according to the input characters, symbols and/or operation commands; and storing the feature points of the facial image data and/or the facial image data identified in the first data, the second data, the third data and/or the fourth data, as well as the first data, the second data, the third data and/or the fourth data deformed by the image deformation module, in the database.

49. The method of claim 48, wherein the deforming comprises enlarging, reducing, rotating or twisting the feature points into a specific geometric shape.

50. The method of claim 30 or 31, further comprising: identifying facial image data in the first data, the second data, the third data and/or the fourth data; taking the facial image data in the first data, the second data, the third data and/or the fourth data as base facial image data; and fusing, according to the fusion ratio set by the input characters, symbols and/or operation commands, the other facial image data identified in the first data, the second data, the third data and/or the fourth data into the base facial image data according to the set fusion ratio.

51. The method of claim 30 or 31, further comprising: identifying facial image data in the first data, the second data, the third data and/or the fourth data, or further extracting the feature points from the identified facial image data and positioning the feature points; and adjusting the facial image data according to the adjustment range, adjustment content, adjustment ratio and/or feature points to be adjusted set by the input characters, symbols and/or operation commands.

52. The method of claim 30 or 31, further comprising: identifying facial image data in the first data, the second data, the third data and/or the fourth data; extracting the feature points in the identified facial image data; positioning the feature points; fusing the feature points of the facial image data in the first data, the second data, the third data and/or the fourth data, according to the fusion ratio set by the input characters, symbols and/or operation commands, with the feature points of the other facial image data identified in the first data, the second data, the third data and/or the fourth data, according to the set fusion ratio; and forming new facial image data whose feature points have been fused.

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW97122487A TW201001194A (en) 2008-06-16 2008-06-16 Image matching, distortion, integration, and synthesis system, and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW97122487A TW201001194A (en) 2008-06-16 2008-06-16 Image matching, distortion, integration, and synthesis system, and method thereof

Publications (2)

Publication Number Publication Date
TW201001194A true TW201001194A (en) 2010-01-01
TWI361983B TWI361983B (en) 2012-04-11

Family

ID=44824767

Family Applications (1)

Application Number Title Priority Date Filing Date
TW97122487A TW201001194A (en) 2008-06-16 2008-06-16 Image matching, distortion, integration, and synthesis system, and method thereof

Country Status (1)

Country Link
TW (1) TW201001194A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI492076B (en) * 2010-03-25 2015-07-11 Inventec Appliances Corp Method and system for transmitting data

Also Published As

Publication number Publication date
TWI361983B (en) 2012-04-11

Similar Documents

Publication Publication Date Title
US11023666B2 (en) Narrative-based media organizing system for transforming and merging graphical representations of digital media within a work area
US8208764B2 (en) Photo automatic linking system and method for accessing, linking, and visualizing “key-face” and/or multiple similar facial images along with associated electronic data via a facial image recognition search engine
US9317531B2 (en) Autocaptioning of images
CN102216941B (en) For the method and system of contents processing
US9462175B2 (en) Digital annotation-based visual recognition book pronunciation system and related method of operation
US8650242B2 (en) Data processing apparatus and data processing method
TW201115252A (en) Document camera with image-associated data searching and displaying function and method applied thereto
US8538093B2 (en) Method and apparatus for encouraging social networking through employment of facial feature comparison and matching
CN104025610B (en) For providing the system of content, method and apparatus based on a collection of image
CN103412951A (en) Individual-photo-based human network correlation analysis and management system and method
CN103369049A (en) Mobile terminal and server interactive method and system thereof
CN111491187B (en) Video recommendation method, device, equipment and storage medium
US8577752B2 (en) Photobook engine powered by blog content
Quack et al. Object recognition for the internet of things
US9081801B2 (en) Metadata supersets for matching images
WO2024088291A1 (en) Form filling method and apparatus, electronic device, and medium
CN112102157A (en) Video face changing method, electronic device and computer readable storage medium
Kumar et al. A comprehensive survey on generative adversarial networks used for synthesizing multimedia content
JP2008198135A (en) Information delivery system, information delivery device and information delivery method
CN111542817A (en) Information processing device, video search method, generation method, and program
Dang et al. Digital Face Manipulation Creation and Detection: A Systematic Review
KR100950053B1 (en) The system which provide a specialized advertisement contents where the data which the user designates is reflected
TW201001194A (en) Image matching, distortion, integration, and synthesis system, and method thereof
US20140016873A1 (en) Method and Apparatus for Encouraging Social Networking Through Employment of Facial Feature Comparison and Matching
JP4752628B2 (en) Drawing search system, drawing search method, and drawing search terminal