TWI695347B - Method and system for sorting and identifying medication via its label and/or package - Google Patents


Info

Publication number
TWI695347B
Authority
TW
Taiwan
Prior art keywords
image
images
drug
generate
processed
Prior art date
Application number
TW107126302A
Other languages
Chinese (zh)
Other versions
TW202008309A (en)
Inventor
鍾聖倫
陳智芳
王靖煊
Original Assignee
台灣基督長老教會馬偕醫療財團法人馬偕紀念醫院
國立臺灣科技大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 台灣基督長老教會馬偕醫療財團法人馬偕紀念醫院 and 國立臺灣科技大學
Priority to TW107126302A
Publication of TW202008309A
Application granted
Publication of TWI695347B

Landscapes

  • Medical Preparation Storing Or Oral Administration Devices (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed herein are an improved pharmaceutical management system and methods implemented by the system for sorting and identifying a medication via its label and/or package. The method comprises the steps of: (a) receiving a plurality of raw images of a package of a medication; (b) juxtaposing two of the plurality of raw images to produce a combined image, in which the two raw images differ from each other; (c) processing the combined image to produce a reference image; and (d) establishing a medication library with the aid of the reference image. The system comprises an image capturing device, an image processor, and a machine learning processor. The image processor is programmed with instructions to execute the method for producing a combined image.

Description

Method and system for sorting and identifying medication via its label and/or package

This disclosure relates to the field of medication management systems and, in particular, to a method and system for classifying and identifying a medication through its label and/or packaging.

Efficient storage and management systems and/or methods for medical supplies and pharmaceutical products are important prerequisites for the smooth operation of hospital systems and the provision of quality patient care. Accordingly, the accuracy of prescription dispensing is critical to all medical institutions. Although automated dispensing cabinets (ADCs) have been in use in medical institutions for some twenty years, dispensing errors still occur. Because dispensing from an ADC still relies heavily on clinical staff adhering precisely to standard procedures, it is unlikely that human error can ever be completely ruled out. For example, a pharmacy may stock the wrong medication in a given cabinet, or a clinician may pick a "look-alike" medication from an adjacent cabinet. Identifying a medication by its appearance with the naked eye is therefore quite unreliable, let alone suitable for incorporation into a medication management system.

Deep learning belongs to a broader family of machine learning methods based on learning data representations at multiple levels. These methods are obtained by composing simple but non-linear modules, each of which transforms the representation at one level into a representation at a higher, slightly more abstract level. With enough such transformations composed, fairly complex functions (such as classification tasks) can be learned. Deep learning has therefore made major progress on problems that had stood in the artificial intelligence community for many years, and it can be applied across many technical fields. For pharmaceutical management systems, the excellent performance of deep learning in recognizing object appearance should offer a remedy for the shortcomings of current drug dispensing technology. However, given the wide variety of drug packages, some of which look alike, applying deep learning alone cannot produce the desired results.

In view of the foregoing, there is a need in the art for an improved method and system for medication management (e.g., for classifying and identifying a medication via its label and/or packaging).

To provide the reader with a basic understanding, a brief summary of the disclosure is given below. This summary is not an extensive overview of the disclosure, nor is it intended to identify key or essential elements of the invention or to delineate its scope. Its sole purpose is to present some concepts of the disclosure in simplified form as a prelude to the more detailed description presented later.

As embodied and broadly described herein, an object of this disclosure is to provide an improved medication management system, together with methods implemented by that system for identifying clinical medications, whereby the efficiency and accuracy of dispensing can be greatly improved.

One aspect of this disclosure is directed to a computer-implemented method for establishing a medication library. In some embodiments, the method comprises the steps of: (a) receiving a plurality of raw images of a medication package; (b) juxtaposing two of the plurality of raw images to produce a combined image, in which the two raw images differ from each other; (c) processing the combined image to produce a reference image; and (d) establishing the medication library with the aid of the reference image.
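The juxtaposition of step (b) amounts to placing two distinct views of the same package side by side on one canvas. Below is a minimal sketch, assuming grayscale images represented as plain nested lists (row-major, one intensity per pixel) and a zero background for padding; the helper name `juxtapose` is illustrative and not part of the disclosure:

```python
def juxtapose(img_a, img_b, pad_value=0):
    """Place two grayscale images (lists of rows) side by side.

    The shorter image is padded at the bottom with pad_value so that
    both halves of the combined image have the same height.
    """
    height = max(len(img_a), len(img_b))
    width_a = len(img_a[0])
    width_b = len(img_b[0])
    combined = []
    for y in range(height):
        row_a = img_a[y] if y < len(img_a) else [pad_value] * width_a
        row_b = img_b[y] if y < len(img_b) else [pad_value] * width_b
        combined.append(list(row_a) + list(row_b))
    return combined

# Front and back of the same (toy) package, 2x3 and 3x3 pixels.
front = [[10, 10, 10],
         [10, 99, 10]]
back = [[20, 20, 20],
        [20, 55, 20],
        [20, 20, 20]]

merged = juxtapose(front, back)
# merged is 3 rows high and 6 columns wide.
```

The same helper serves both library building (step (b)) and later candidate-image generation, since both produce a single two-sided view of the package.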

In some optional embodiments, the method further comprises capturing the plurality of raw images of the medication package simultaneously before step (a).

According to certain embodiments of the present disclosure, the medication is in the form of a blister package.

According to certain embodiments of the present disclosure, step (b) comprises: (b-1) processing each of the plurality of raw images to produce a plurality of first processed images, each having a defined contour; (b-2) identifying the corners of the defined contour of each of the first processed images so as to determine their coordinates; (b-3) rotating each first processed image of step (b-1) based on the coordinates determined in step (b-2), thereby producing a plurality of second processed images; and (b-4) combining any two of the plurality of second processed images to produce the combined image of step (b).

According to certain embodiments of the present disclosure, each of the plurality of raw images of step (b-1) is subjected to the following processes: (i) a grayscale conversion process, (ii) a noise filtering process, (iii) an edge detection process, (iv) a convex hull computation process, and (v) a contour finding process.

According to certain embodiments of the present disclosure, the processes (i) to (v) may each be performed independently in any order; and each of the processes (i) to (v) may take as its input either the raw image of step (b-1) or an image produced by any of the processes (i) to (v) other than the current one.

In some embodiments of the present disclosure, step (b-2) is performed with a line transform algorithm or a centroid algorithm.
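For a regular quadrilateral contour, the line transform approach fits a straight line to each edge (e.g., with a Hough transform) and then recovers each corner as the intersection of two adjacent edge lines. The sketch below assumes the four edge lines are already fitted and given in `a*x + b*y = c` form; the line-fitting step itself is omitted:

```python
def intersect(l1, l2):
    """Intersect two lines given as (a, b, c) with a*x + b*y = c."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines: no corner here
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

# Four edge lines of an axis-aligned 4x2 rectangle:
top    = (0.0, 1.0, 0.0)   # y = 0
bottom = (0.0, 1.0, 2.0)   # y = 2
left   = (1.0, 0.0, 0.0)   # x = 0
right  = (1.0, 0.0, 4.0)   # x = 4

corners = [intersect(h, v) for h in (top, bottom) for v in (left, right)]
# → [(0.0, 0.0), (4.0, 0.0), (0.0, 2.0), (4.0, 2.0)]
```

Because the corner comes from intersecting fitted lines rather than from reading pixels directly, the method tolerates small gaps or nicks in the contour along each edge.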

Preferably, the combined image contains images of both sides of the blister package of the medication.

According to certain embodiments of the present disclosure, step (c) is performed with a machine learning algorithm.

Another aspect of the present disclosure is directed to a computer-implemented method for identifying a medication via its blister package. The method comprises: (a) simultaneously acquiring front and back images of the blister package of the medication; (b) juxtaposing the front and back images of step (a) to produce a candidate image; (c) comparing the candidate image with a reference image in a medication library established by the foregoing method; and (d) outputting the result of step (c).
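Step (c) is performed by a machine learning model in the disclosure; purely as a stand-in to show the shape of the comparison, the sketch below identifies a candidate by nearest mean absolute pixel difference against the library's reference images. The entry names, toy images, and distance metric are all illustrative assumptions:

```python
def mean_abs_diff(img_a, img_b):
    """Mean absolute intensity difference between two same-sized images."""
    total, count = 0, 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def identify(candidate, library):
    """Return the library entry whose reference image is closest."""
    return min(library, key=lambda name: mean_abs_diff(candidate, library[name]))

library = {
    "drug_A": [[0, 0], [0, 255]],
    "drug_B": [[255, 255], [255, 0]],
}
candidate = [[10, 5], [0, 250]]   # a noisy view of drug_A
print(identify(candidate, library))  # → drug_A
```

A learned classifier replaces the raw pixel distance in practice, but the input/output contract (candidate image in, library entry out) is the same.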

According to certain embodiments of the present disclosure, step (b) comprises: (b-1) processing the front and back images separately to produce two first processed images, each having a defined contour; (b-2) identifying the corners of the defined contour of each of the two first processed images so as to determine their coordinates; (b-3) rotating each of the two first processed images of step (b-1) based on the coordinates determined in step (b-2), thereby producing two second processed images; and (b-4) combining the two second processed images to produce the candidate image of step (b).

According to certain embodiments of the present disclosure, the front and back images of step (b-1) are each subjected to the following processes: (i) a grayscale conversion process, (ii) a noise filtering process, (iii) an edge detection process, (iv) a convex hull computation process, and (v) a contour finding process.

In some embodiments, the processes (i) to (v) may each be performed independently in any order; and each of the processes (i) to (v) may take as its input either the front and back images of step (b-1) or an image produced by any of the processes (i) to (v) other than the current one.

In some optional embodiments, step (b-2) is performed with a line transform algorithm or a centroid algorithm.

In some optional embodiments, step (d) is performed with a machine learning algorithm.

In some optional embodiments, the method further comprises transmitting the candidate image to the medication library before step (c).

Yet another aspect of the present disclosure is directed to a medication management system comprising an image capturing device, an image processor, and a machine learning processor, configured to implement the foregoing methods.

Specifically, the image capturing device is configured to capture a plurality of images of a medication package. The image processor is programmed with instructions to execute a method for producing a candidate image, the method comprising: (1) processing each of the plurality of images of the medication package to produce a plurality of first processed images, each having a defined contour; (2) identifying the corners of the defined contour of each of the first processed images so as to determine their coordinates; (3) rotating each first processed image of step (1) based on the coordinates determined in step (2), thereby producing a plurality of second processed images; and (4) juxtaposing two of the plurality of second processed images of step (3) to produce the candidate image, in which the two second processed images differ from each other. Furthermore, the machine learning processor is programmed with instructions to execute a method for comparing the candidate image with a reference image in the aforementioned medication library. The result produced by the machine learning processor may then be output to notify the operator who is dispensing the medication.

In some embodiments of the present disclosure, the image capturing device comprises a transparent plate on which the medication is placed, and two image capturing units individually disposed over the respective sides of the transparent plate.

In some embodiments of the present disclosure, each of the plurality of images of step (1) is subjected to the following processes: (i) a grayscale conversion process, (ii) a noise filtering process, (iii) an edge detection process, (iv) a convex hull computation process, and (v) a contour finding process.

In some embodiments, the processes (i) to (v) may each be performed independently in any order; and each of the processes (i) to (v) may take as its input either the plurality of images of step (1) or an image produced by any of the processes (i) to (v) other than the current one.

Additionally or alternatively, step (2) may be performed by a line transform algorithm or a centroid algorithm.

In some optional embodiments, the method for comparing the candidate image with the reference image of the aforementioned medication library is performed by a machine learning algorithm.

With the above configuration, the medication management method and system for sorting and identification can operate in real time, so that during dispensing, the overall image recognition processing time is shortened regardless of the orientation of the medication.

In addition, the accuracy of identifying a medication by its appearance and/or blister package can be improved, and human error during dispensing can be reduced. The safety of medication use can thereby be improved.

After reading the following embodiments, persons having ordinary skill in the art to which the present invention pertains will readily appreciate the basic spirit and other objects of the invention, as well as the technical means and embodiments it adopts.

To make the description of this disclosure more detailed and complete, an illustrative description of implementation aspects and specific embodiments of the present invention is offered below; it is not, however, the only form in which the specific embodiments of the invention may be implemented or applied. The embodiments cover the features of a number of specific embodiments, as well as the method steps and their sequences for constructing and operating those embodiments. However, other specific embodiments may also be used to achieve the same or equivalent functions and step sequences.

I. Definitions

For convenience, certain terms used in the specification, examples, and appended claims are collected here. Unless otherwise defined herein, scientific and technical terms used herein have the meanings commonly understood and used by persons having ordinary skill in the art to which the present invention pertains. Moreover, unless the context requires otherwise, singular nouns used in this specification encompass the plural of that noun, and plural nouns encompass the singular. Specifically, as used herein and in the appended claims, the singular forms "a" and "an" include the plural, unless the context clearly dictates otherwise. In addition, in this specification and the claims, "at least one" and "one or more" have the same meaning; both encompass one, two, three, or more.

Although the numerical ranges and parameters setting forth the broad scope of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains the standard deviation arising from its individual testing method. As used herein, "about" generally means that the actual value is within plus or minus 10%, 5%, 1%, or 0.5% of a particular value or range. Alternatively, "about" means that the actual value falls within an acceptable standard error of the mean, as considered by persons having ordinary skill in the art to which the present invention pertains. Except in the experimental examples, or unless otherwise expressly stated, it is to be understood that all ranges, quantities, values, and percentages used herein (for example, to describe amounts of material, durations, temperatures, operating conditions, quantity ratios, and the like) are modified by "about". Accordingly, unless indicated to the contrary, the numerical parameters set forth in this specification and the attached claims are approximations that may vary as required. At a minimum, each numerical parameter should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Herein, a numerical range is expressed as extending from one endpoint to another, or as lying between two endpoints; unless otherwise stated, all numerical ranges recited herein are inclusive of their endpoints.

As used herein, the term "blister pack" (or "blister package") covers any type of layered packaging in which the product is contained between sheet materials; the sheet materials may be bonded or sealed by methods well known to persons skilled in the art. For example, the sheets may be bonded by heat- and/or pressure-activated adhesive. Such sheet materials are commercially available as individual sheets (for manual packaging) or as continuous webs on a roll (for machine packaging). The main structure of a blister pack is a cavity or pocket made of a formable web, usually a thermoplastic. The cavity or pocket is large enough to contain the item held in the blister pack. Depending on the application, the blister pack may have a thermoplastic backing. In the pharmaceutical field, blister packs are commonly used as unit-dose packaging for tablets and bear the drug information printed on the back of the pack. These sheets are also available in a variety of thicknesses.

II. Embodiments of the Invention

Nothing matters more to patient care than the safety of medication use. Correctly filling prescriptions and dispensing medications is essential to patient care. Since conventional automated dispensing systems (e.g., ADCs) still leave room for improvement, the present invention aims to provide an improved induced deep learning method to solve the aforementioned problems. In addition, the present invention also aims to develop an automatic medication verification (AMV) device that runs on a real-time operating system, thereby reducing the workload of clinical staff.

Specifically, induced deep learning refers to processing or accentuating features or images before they are fed into the learning algorithm, so that the feature information is learned in a more emphasized manner. To better segment the identified target and expose its inherent descriptive features, feature processing is generally achieved through a series of image processing steps that integrate the cropped images of both sides of the blister package into a template of fixed size, thereby facilitating the classification stage of the subsequent learning network.
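The fixed-size template integration described above can be sketched as follows, assuming nearest-neighbor resampling and a template split evenly between the two sides; the template dimensions and helper names are arbitrary placeholders, not values fixed by the disclosure:

```python
def resize_nn(img, out_h, out_w):
    """Nearest-neighbor resize of a grayscale image (list of rows)."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

def to_template(front, back, height=4, width=8):
    """Fit the two cropped sides into one fixed-size template,
    each side occupying half of the width."""
    half = width // 2
    left = resize_nn(front, height, half)
    right = resize_nn(back, height, half)
    return [left[y] + right[y] for y in range(height)]

front = [[1, 2],
         [3, 4]]
back = [[5]]
template = to_template(front, back)
# template is always 4 rows x 8 columns, whatever the crop sizes.
```

Fixing the template size up front means the downstream network always sees inputs of one shape, regardless of how large the package appeared in the raw photograph.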

1. Method of establishing a medication library

Referring to FIG. 1, a first aspect of the present disclosure is directed to a method for establishing a medication library in a computer-readable storage medium.

FIG. 1 is a flowchart of a computer-implemented method 100 according to one embodiment of the present disclosure. The method comprises at least the following steps: (S110) receiving a plurality of raw images of a medication package; (S120) juxtaposing two of the plurality of raw images to produce a combined image, in which the two raw images differ from each other; (S130) processing the combined image to produce a reference image; and (S140) establishing the medication library with the aid of the reference image.

Before the method 100 is carried out, a plurality of raw images of a medication package (e.g., a blister package) may be captured by any known means (e.g., an image capturing device such as a camera). Then, in step S110 of the present method, the captured images are automatically forwarded to the device and/or system in which the instructions for executing the method 100 are embedded. In the subsequent step S120, two of the plurality of raw images received by the device and/or system are juxtaposed with each other to produce a combined image. Note that the two juxtaposed raw images differ from each other. In an exemplary embodiment, the medication is packaged in a blister package; the plurality of raw images acquired by the image capturing device therefore covers both sides of the blister package. That is, the two images respectively showing the front and the back of the blister package are juxtaposed to produce a combined image. To improve the accuracy with which the subsequent machine learning procedure recognizes the appearance, the raw images are processed and accentuated so as to yield clean images with predetermined characteristics (e.g., fixed at a predetermined pixel size), in which all background portions other than the subject (i.e., the image of the blister package and/or label) have been removed.

In step S130, the combined image is used to train a machine learning algorithm embedded in a computer (e.g., a processor) so as to produce a reference image. Steps S110 to S130 may be repeated multiple times, each time with a medication package different from those used previously. Finally, a medication library can be established with the aid of the reference images (step S140).

Returning to step S120, it generally comprises the following steps: (S121) processing each of the plurality of raw images to produce a plurality of first processed images, each having a defined contour; (S122) identifying the corners of the defined contour of each of the first processed images so as to determine their coordinates; (S123) rotating each first processed image of step S121 based on the coordinates determined in step S122, thereby producing a plurality of second processed images; and (S124) combining any two of the plurality of second processed images to produce the combined image of step S120.
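Step S123 rotates each cropped image into a canonical orientation using the corner coordinates of step S122. The sketch below simplifies this to snapping the image to landscape in 90-degree steps; a real pipeline would rotate by the exact edge angle, so this snapping is an assumption made purely for illustration:

```python
def rotate90(img):
    """Rotate a grayscale image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def normalize_orientation(img, corners):
    """Rotate the cropped package to landscape using its corner span.

    corners is a list of (x, y) coordinates from the corner-finding
    step; if the package's bounding box is taller than it is wide,
    the image is rotated a quarter turn.
    """
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return rotate90(img) if height > width else img

tall = [[1, 2],
        [3, 4],
        [5, 6]]
corners = [(0, 0), (1, 0), (0, 2), (1, 2)]  # taller than wide
landscape = normalize_orientation(tall, corners)
# landscape is 2 rows x 3 columns.
```

Normalizing orientation before step S124 is what lets the later comparison ignore how the package happened to be lying when photographed.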

In step S121, an image processor processes each of the plurality of raw images, ultimately producing a plurality of first processed images, each with a well-defined contour. The image processor runs several algorithms to remove background noise from the images and to extract the target features. Specifically, each of the plurality of raw images of step S121 is subjected to the following processes: (i) a grayscale conversion process, (ii) a noise filtering process, (iii) an edge detection process, (iv) a convex hull computation process, and (v) a contour finding process. Note that the processes (i) to (v) may each be performed independently in any order.

Specifically: (i) the grayscale conversion process is performed with a color conversion algorithm to convert BGR color to grayscale (S1211); (ii) the noise filtering process is performed with a filter algorithm to minimize background noise (S1212); (iii) the edge detection process is performed with an edge detection algorithm to determine the coordinates of each edge of the blister package in the image (S1213); (iv) the convex hull computation process is performed with a convex hull algorithm to compute the actual area of the blister package in the image (S1214); and (v) the contour finding process is performed with a contour definition algorithm to extract and accentuate the main region of the blister package (S1215).
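The first two passes can be sketched in plain Python: (i) a BGR-to-grayscale conversion using the common ITU-R BT.601 luma weights, and (ii) a 3x3 median filter for noise suppression. Both algorithm choices are conventional stand-ins; the disclosure does not fix particular formulas:

```python
def to_grayscale(bgr_img):
    """(i) Convert a BGR image (rows of (b, g, r) triples) to grayscale
    using the common ITU-R BT.601 luma weights."""
    return [[round(0.114 * b + 0.587 * g + 0.299 * r) for (b, g, r) in row]
            for row in bgr_img]

def median_filter3(gray):
    """(ii) 3x3 median filter; border pixels are kept unchanged."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(gray[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 neighborhood values
    return out

bgr = [[(255, 255, 255)] * 3,
       [(255, 255, 255), (0, 0, 0), (255, 255, 255)],  # one dark speck
       [(255, 255, 255)] * 3]
gray = to_grayscale(bgr)
clean = median_filter3(gray)
print(clean[1][1])  # → 255 (the isolated speck is filtered out)
```

A median filter suits this pipeline because it removes isolated speckle noise without blurring the package edges that the later edge detection pass depends on.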

According to certain embodiments of the present disclosure, the raw image of step S121 undergoes all of the processes (i) to (v) before step S122 proceeds. As noted above, these processes may be performed independently in any order, so in addition to the raw image received in step S121, the image output by any one process may serve as the input of the next process. For example, the raw image of step S121 first undergoes process (i), step S1211, in which the BGR colors of the raw image are converted to grayscale, thereby producing a grayscale image. The grayscale image obtained from process (i) is then subjected to any of processes (ii) to (v), until all of the processes (i) to (v) have been successfully applied to the image, thereby producing a first processed image with a defined contour.

Alternatively, or optionally, the raw image of step S121 first undergoes process (iv), step S1214, in which a convex hull algorithm is executed. Convex hull algorithms are used to find the convex hull of a finite set of points in the plane or in other low-dimensional spaces. In the present disclosure, the convex hull is defined by computing the actual area of the blister package in the image. The image obtained from process (iv), step S1214, may then be subjected to any of processes (i), (ii), (iii), or (v), until all of the processes (i) to (v) have been successfully applied to the image, thereby producing a first processed image with a defined contour.
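A standard way to compute such a hull and its enclosed area is Andrew's monotone chain followed by the shoelace formula; the sketch below is a generic implementation of those textbook algorithms, not the disclosure's specific routine:

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2-D points, CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula: area enclosed by an ordered polygon."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A 4x2 package outline plus interior points; the hull keeps the outline.
pts = [(0, 0), (4, 0), (4, 2), (0, 2), (2, 1), (1, 1)]
hull = convex_hull(pts)
print(polygon_area(hull))  # → 8.0
```

In an OpenCV-based pipeline, `cv2.convexHull` and `cv2.contourArea` play the same roles as the two helpers above.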

It should be understood that the processes (i) to (v), i.e., steps S1211 to S1215, may be performed sequentially, in random order, and/or repeatedly (for example, process (iii) may be repeated one or more times); preferably, they are performed in the order (i) to (v), i.e., from step S1211 to step S1215.

By performing the aforementioned steps S1211 through S1215, a plurality of first processed images, each having a defined contour, are produced from the plurality of raw images, in which the background noise of the raw images is eliminated and the main portion of the blister package is extracted and highlighted to facilitate subsequent image processing.

The method continues with step S122, which defines the corner points of each first processed image (FIG. 1). Regardless of the contour shape of the blister package, this step determines the coordinates of at least one corner point of each first processed image. Preferably, the coordinates of four corner points of each first processed image obtained from step S121 are determined. The goal of this step is to predict at least three straight lines from the contour edges of the blister package and then, by geometric reasoning, obtain the quadrilateral that best fits the package's edges, together with the coordinates of its corner points. Since blister packages may take various shapes and/or contours, different algorithms may be employed to identify the corner points in the first processed image of a drug package. For example, if the contour of the drug package in the first processed image is a regular quadrilateral (e.g., a rectangle), a straight-line transform algorithm may be used to identify the four corner points. On the other hand, if the contour of the drug package in the first processed image is an irregular quadrilateral, such as an atypical polygon with three straight edges and one curved edge, a centroid algorithm is used for corner identification. With this design, the coordinates of each corner point can be determined regardless of the orientation and shape of the drug package.
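The geometric reasoning step (intersecting the predicted straight edge lines to locate corners) can be illustrated with a small helper; each edge is given by two points on it, and the formula is the standard two-line intersection, not code from the patent.

```python
def line_intersection(p1, p2, p3, p4):
    """Corner of a quadrilateral as the intersection of the edge through
    p1-p2 with the edge through p3-p4. Returns None for parallel edges."""
    # Represent each line as a*x + b*y = c
    a1, b1 = p2[1] - p1[1], p1[0] - p2[0]
    c1 = a1 * p1[0] + b1 * p1[1]
    a2, b2 = p4[1] - p3[1], p3[0] - p4[0]
    c2 = a2 * p3[0] + b2 * p3[1]
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel edges never meet at a corner
    return ((b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det)
```

Applying this to each adjacent pair of the three or four predicted edge lines yields the corner coordinates used in the following rotation step.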

Identifying the corner points of each processed image in step S122 not only establishes the anchor points for the subsequent rotation step (S123) and merging step (S124), but also ensures that the entire drug package is contained in the first processed image for subsequent analysis. Notably, the four determined corner points of the blister-package image must be ordered clockwise or counterclockwise, so that the first processed image can be rotated to a predetermined position based on the determined coordinates of the four corner points (S123). The predetermined position may vary with practical requirements. In some embodiments, the predetermined position is one in which the short and long sides of the blister package are parallel to the X-axis and Y-axis of a Cartesian coordinate system, respectively. The preferred orientation of the blister package is one in which a single rotation of the image brings its short and long sides parallel to the X-axis and Y-axis of the Cartesian coordinate system, respectively. In practice, various perspective transform algorithms may be used to rotate the first processed image, which is given a predetermined pixel size (for example, 448×224 pixels), thereby producing a second (rotated) processed image at the predetermined position.

Next, in step S124, any two second processed images of the drug may be placed side by side (i.e., juxtaposed with each other) to produce a combined image. It should be noted that the combined images obtained in this step may comprise various combinations, which helps populate the drug database as fully as possible. The two second processed images may be the two opposite sides of the blister package (whether upright or inverted). Preferably, two second processed images in the same orientation, respectively presenting the two sides of the drug's blister package, are juxtaposed with each other to produce a combined image containing the greatest amount of characteristic information about the drug package.
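The juxtaposition itself reduces to concatenating the two rectified views into one array. A minimal sketch, assuming each rectified view is 224 rows by 448 columns and that horizontal concatenation is the intended layout (the stacking axis is an assumption):

```python
import numpy as np

def merge_two_sided(front, back):
    """Sketch of step S124: juxtapose two rectified images of the same
    blister pack (front and back) into one combined image."""
    assert front.shape == back.shape, "both views must share the template size"
    return np.hstack([front, back])  # e.g. two 224x448 views -> 224x896
```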

To establish the drug database, the combined images are used in steps S130 and S140 to train a machine learning algorithm embedded in a computer, thereby producing reference images. Combined images containing drug information are classified against the reference images stored in the drug database and can later be retrieved to identify a candidate package. In some embodiments, at least one combined image is input into the machine learning algorithm. In exemplary embodiments, more than 10 and up to 20,000 combined images (for example, 10, 100, 200, 300, 400, 500, 1,000, 1,100, 1,200, 1,300, 1,400, 1,500, 2,000, 2,500, 3,000, 3,500, 4,000, 4,500, 5,000, 5,500, 10,000, 11,000, 12,000, 13,000, 14,000, 15,000, 16,000, 17,000, 18,000, 19,000, or 20,000 combined images) are input into the machine learning algorithm to establish the drug database. Each image may be used to train the machine learning system to convert the image information into reference information, which may then be stored in a database built into the device and/or system.

It should be noted that the machine learning system applicable to the present disclosure may be any visual object detection model known in the art, optimized or not under certain criteria according to practical needs. Such models include, but are not limited to, the deformable parts model (DPM), the region-based convolutional neural network (R-CNN), Fast R-CNN, Faster R-CNN, Mask R-CNN, and YOLO. Preferably, the visual object detection model used to train the learning step of the present disclosure is optimized, at least with respect to parameters such as the input image pixels and the number and sizes of the bounding boxes and anchor boxes. According to the present disclosure, the combined image processed by the foregoing steps and subsequently input into the learning system should be a "full bleed image" conforming to a predetermined size (e.g., a fixed number of pixels). The number and sizes of the anchor boxes can therefore be minimized (for example, to a single anchor box) to improve computation speed and performance. In other words, by virtue of the above step of juxtaposing two images into one, the machine learning system runs smoothly and quickly even when processing large volumes of data for large-scale drug packaging. What is more, the overall processing time can be shortened, which greatly improves the efficiency of establishing the drug database.

By performing the aforementioned steps S130 and S140, when the drug database is implemented, the training result for each distinct drug can be output as a classification model.

2. Method of Identifying a Drug

With reference to FIG. 2, the second aspect of the present disclosure relates to a method for identifying a drug via its package (for example, a blister package).

Referring to FIG. 2, which illustrates a flowchart of a method 200 according to embodiments of the present disclosure, the method 200 comprises the following steps: (S210) simultaneously acquiring front and back images of the drug's blister package; (S220) juxtaposing the front and back images of step S210 to produce a candidate image; (S230) optionally, transmitting the candidate image to a drug database; (S240) comparing the candidate image with a reference image of the drug database established by the method described above; and (S250) outputting the result of step S240.

Preferably, the method 200 of the present invention is performed via a non-transitory processor-readable storage medium embedded with instructions for performing the steps of the present invention and with a drug database. The drug database may be an originally stored or built-in drug database, or a drug database established by the aforementioned method 100.

In the image-receiving step (S210), two raw images of a drug are captured simultaneously. The two raw images are preferably of the two respective sides of the drug's blister package (i.e., front and back images). The raw images may be acquired by any known means, preferably via at least one image capturing device (for example, a camera). In an exemplary embodiment, the two raw images of the blister package are captured simultaneously by two image capturing devices respectively disposed on the two sides of the drug (for example, its front and back); the two raw images therefore cover the drug information on both sides of the blister package. To improve the accuracy with which the subsequent machine learning program recognizes the appearance, the two raw images obtained in step S210 are then juxtaposed with each other to produce a candidate image (step S220). As with the aforementioned method 100, the goal is to produce a clean image that has predetermined characteristics (for example, front and back images placed side by side, a predetermined pixel size, and so forth) and that contains no information other than the subject (i.e., only the image of the blister package and/or label).

The candidate image produced in step S220 is then compared with the reference images stored in the drug database (S240), and the result is output to a user (S250). Step S220 may be performed by the same computing device as steps S210, S240, and S250, or by a different one. According to some embodiments of the present disclosure, step S220 is performed on a computing device different from the one that performs steps S210, S240, and S250. Accordingly, step S230 may optionally be performed, in which the processed image or candidate image of step S220 is transmitted to the computing device that performs steps S240 and S250.

Returning to step S220, which is similar to step S120 of the method 100, it generally comprises the following steps: (S221) processing the front and back images to produce two first processed images, each having a defined contour; (S222) identifying the corner points of each defined contour of the two first processed images to determine their coordinates; (S223) rotating each of the two first processed images of step S221 based on the coordinates determined in step S222 to produce two second processed images; and (S224) combining the two second processed images of step S223 to produce the candidate image of this step.

In step S221, the front and back images are processed separately by an image processor to produce two first processed images, each having a clearly defined contour. To this end, the front and back images of step S221 are respectively subjected to the following processes: (i) a grayscale conversion process, (ii) a noise filtering process, (iii) an edge recognition process, (iv) a convex hull computation process, and (v) a contour finding process. Note that processes (i) through (v) may be performed independently and in any order. In practice, various known algorithms may be used to eliminate the background noise of the two raw images and to extract target features from them.

Specifically, (i) the grayscale conversion process is performed using a color conversion algorithm (S2211); (ii) the noise filtering process is performed using a filter algorithm to minimize background noise (S2212); (iii) the edge recognition process is performed using an edge recognition algorithm to determine the coordinates of each edge of the blister package in the image (S2213); (iv) the convex hull computation process is performed using a convex hull algorithm to compute the actual area of the drug's blister package in the image (S2214); and (v) the contour finding process is performed using a contour definition algorithm to extract and highlight the main region of the blister package (S2215).

In practice, the raw image of step S221 undergoes all of the aforementioned processes (i) through (v) before the method proceeds to step S222. As noted above, these processes may be performed in any order, so the raw image obtained in step S221, or the image resulting from any one process, may serve as the input for the next process. For example, the raw image of step S221 first undergoes process (i), i.e., step S2211, in which the BGR color of the raw image is converted to grayscale, thereby producing a grayscale image. The grayscale image obtained from process (i) is then subjected to any of processes (ii) through (v), until all of processes (i) through (v) have been successfully applied to the image, thereby producing a first processed image having a defined contour.

Alternatively, or optionally, the raw image of step S221 first undergoes process (iv), i.e., step S2214, in which a convex hull algorithm is executed. A convex hull algorithm finds the convex hull of a finite set of points in a plane or another low-dimensional space. In the present disclosure, the convex hull is defined by computing the actual area of the drug's blister package in the image. The image obtained from process (iv), or step S2214, may then be subjected to any of processes (i), (ii), (iii), or (v), until all of processes (i) through (v) have been successfully applied to the image, whereupon a first processed image having a defined contour is produced.

In a preferred embodiment, the processing is performed in the order of steps S2211 through S2215. The strategies and algorithms used in steps S2211 through S2215 are identical to those described above for step S121 and, for brevity, are not repeated here.

It should be noted that each of steps S2212 through S2215 may be performed by any alternative algorithm known in the art, so long as that algorithm achieves the same result as described above.

By performing steps S2211 through S2215, two first processed images, each having a defined contour, are produced from the two raw images (i.e., the front and back images of a drug's blister package), in which the background noise of the raw images is eliminated and the main portion of the blister package is extracted and highlighted to facilitate subsequent image processing.

Steps S222 through S224 are then performed; their purposes are the same as those described for steps S122 through S124. Step S222 determines at least three straight lines of the blister package and then, by geometric reasoning, obtains the quadrilateral that best fits the package's edges, together with the coordinates of its corner points. Once step S222 of identifying the corner points is completed, step S223 may be performed based on the determined coordinates to produce two processed, highlighted images conforming to a fixed feature template. In step S224, the two second processed images may be juxtaposed side by side and merged into a single image, thereby producing a candidate image containing the information of both sides of the drug package. The strategies employed in step S220 are the same as those used in the method 100 and are not repeated here.

Next, in step S240, the combined or candidate image is compared with the reference images stored in the drug database to determine the identity of the drug. In some embodiments, the drug database may reside on the same computing device or other processor-readable storage medium as the instructions for performing step S220, or on a different one. According to optional embodiments of the present disclosure, the candidate image may be produced on a first processing unit embedded with the instructions for performing step S220, and the processed image may then be transmitted to the drug database residing on a second processing unit (for example, a drug database established by the aforementioned method 100). Step S240 is performed under machine learning instructions embedded in the storage medium, whereby the candidate image is compared with the reference images in the drug database and the result (match or no match) is output to a user. If the comparison indicates that the candidate image matches, or is highly similar to, a particular reference image in the drug database, the drug information corresponding to that reference image is output. If the comparison yields no match between the candidate image and any reference image in the database, a conceptual representation of "no matching result" is output.
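The output logic of steps S240 and S250 (report the matched drug or "no matching result") can be sketched as a thresholded arg-max over the model's per-class confidence scores. The threshold value of 0.5 is an illustrative assumption; the patent does not specify one.

```python
def classify(scores, threshold=0.5):
    """Sketch of the S240/S250 output logic. `scores` maps each drug
    label in the database to the model's confidence for the candidate
    image; the best label is reported only if it clears the threshold."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label
    return "no matching result"
```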

As noted above, the purpose of the machine learning algorithm is to improve the visual recognition of drugs. In some embodiments, the exemplary algorithm may be a deep learning algorithm executed under any known visual object detection model, optimized or not under certain criteria according to practical needs. In an exemplary embodiment, the deep learning is performed under an optimized detection model. In another exemplary embodiment, a candidate image processed by the present method should be a "full bleed image" of a predetermined pixel size (for example, 448×224 pixels); the operating parameters of the machine learning can therefore be set to minimum values, thereby improving computation speed and performance.

It should be noted that, by juxtaposing two processed images into a single "combined image", the machine learning system and/or method of the present disclosure runs smoothly and quickly even when processing large volumes of data for large-scale drug packaging. In other words, the processed and highlighted images effectively increase computational performance and accuracy. By virtue of the above technical features, the method 200 of the present disclosure for identifying a drug via its blister package can improve the accuracy of drug identification during dispensing and eliminate human error, thereby improving medication safety and the quality of patient care.

The subject matter described herein may be implemented using a non-transitory, tangible processor-readable storage medium storing processor-readable instructions. When executed by the processor of a programmable device, the instructions control the programmable device to perform a method according to embodiments of the present disclosure. Exemplary processor-readable storage media suitable for implementing the subject matter described herein include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, and any other media that can be used to store the desired information and that can be read by a processor. In addition, a processor-readable storage medium implementing the present subject matter may be located on a single device or computing platform, or may be distributed across multiple devices or computing platforms. In some embodiments, the computing platform is an embedded system with real-time computing constraints.

3. Drug Management System

Another aspect of the present subject matter provides a drug management system. Referring to FIG. 3, which depicts a system 300, the system 300 comprises an image capturing device 310, an image processor 320, and a machine learning processor 330, in which the image capturing device 310 and the machine learning processor 330 are respectively coupled to the image processor 320. The image capturing device 310 is configured to capture a plurality of images of a drug's package. In some embodiments, the image capturing device 310 comprises a transparent plate 3101 and two image capturing units 3102 respectively disposed on the two sides of the transparent plate 3101. The transparent plate 3101 may be made of glass or an acrylic polymer. In operation, the drug is placed on the transparent plate 3101, and images of both sides of the drug package are captured simultaneously by the two image capturing units 3102. For example, the two image capturing units 3102 are real-time digital cameras.

Unless otherwise specified, according to the present disclosure, the image processor 320 and the machine learning processor 330 each comprise a memory for storing a plurality of instructions that cause the processor to implement the present methods. In some embodiments, the image processor 320 and the machine learning processor 330 are configured as two independent devices; alternatively, both may be provided in the same hardware. In some embodiments, the image processor 320 and the machine learning processor 330 are communicatively connected to each other. Specifically, the image processor 320 is communicatively connected to the image capturing device 310 to receive the images captured by the image capturing device 310, and is configured to perform the image processing steps of the present methods (such as steps S120 and S220), thereby producing candidate images usable for subsequent identification. The machine learning processor 330 is communicatively connected to the image processor 320 and is configured to implement the image comparison of the present methods (for example, step S240) for drug identification. This step comprises comparing the candidate image with the reference images in a drug database established by the present methods. The drug database is communicatively connected to the machine learning processor 330. In the exemplary embodiment depicted in FIG. 3, the drug database 3301 of the present invention is stored in the machine learning processor 330; alternatively, or optionally, the drug database may be stored in a storage device connected to the machine learning processor 330 via a cable connection or a wireless network.

In some embodiments, the system 300 further comprises a user interface (not shown) configured to output drug identification results, to receive instructions from an external user, and to feed user input back to the image processor 320 and the machine learning processor 330.

Communication among the image capturing device 310, the image processor 320, and the machine learning processor 330 may be implemented using various technologies. For example, the system 300 of the present invention may comprise a network interface to permit communication among the image capturing device 310, the image processor 320, and the machine learning processor 330 over a network (for example, a local area network (LAN), a wide area network (WAN), the Internet, or a wireless network). In other embodiments, the system may have a system bus that couples various system components (including the image capturing device 310) to the image processor 320.

Several embodiments are presented below to illustrate certain aspects of the present invention and to enable persons having ordinary skill in the art to practice it. These examples should not be construed as limiting the scope of the present invention. It is believed that, without further elaboration, persons having ordinary skill in the art can, based on the description herein, utilize the present invention to its fullest extent. All publications cited herein are incorporated by reference in their entirety.

Example 1: Building a Drug Database

Preparation of Rectified Two-sided Images (RTIs)

More than 250 drugs currently on the market were collected from the hospital pharmacy of Mackay Memorial Hospital (Taipei, Taiwan). To build the dataset for the drug database, photographs of the blister packages of as many of the drugs as possible were taken. Using Open Source Computer Vision 2 (OpenCV 2) library functions programmed in the image processor, the images were cropped to minimize background noise and processed to produce multiple combined images of each drug (also referred to as "rectified two-sided images" (RTIs)). Each RTI fits a predetermined template (hereinafter the rectified two-sided template, RTT) and contains both sides of a drug's blister package. A total of 18,000 RTIs were obtained and subjected to subsequent deep learning processing using a CNN model.

Strategy for Detecting the Corners of Irregular-Quadrilateral Drug Packages

Where the drug package under test has an irregular quadrilateral shape consisting of three straight edges and one curved edge, the centroid algorithm (Table 1) is used to determine the curved edge and the undefined corner points through geometric reasoning.

Table 1: Centroid algorithm for detecting the corners of irregular quadrilateral shapes
[Table 1 is reproduced as an image in the original publication (Figure 107126302-A0305-0001).]

FIG. 4 illustrates how corner identification is performed on a drug package of irregular quadrilateral shape. As depicted in FIG. 4, a blister package having three straight edges (L1, L2, L3) and one curved edge (C1) is presented in a random orientation, in which the intersection of L1 and L2 is designated P1 and the intersection of L2 and L3 is designated P4. The goal is to determine the coordinates of P2 and P3 such that the region enclosed by the four points (P1, P2, P3, P4) and the four edges (L1, L2, L3, C1) covers the image of the entire package. First, the midpoint M between P1 and P4 on edge L2 is determined; the centroid B of the blister package area is then determined via the cv2.moments function (OpenCV 2). The displacement vector v is computed from the coordinates of the midpoint M and the centroid B. Finally, the positions located twice the displacement vector away from the intersection points P1 and P4 are determined to be the coordinates of P2 and P3, respectively. Through this procedure, the four intersection or corner points P1, P2, P3, and P4 are automatically ordered clockwise or counterclockwise.
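The geometric core of this centroid reasoning can be sketched in a few lines. The function below assumes P1, P4, and the centroid B are already known (B, for example, from cv2.moments); it only performs the midpoint/displacement construction described above.

```python
import numpy as np

def infer_missing_corners(p1, p4, b):
    """Sketch of the FIG. 4 construction: place P2 and P3 at twice the
    displacement vector v = B - M from P1 and P4, where M is the midpoint
    of the P1-P4 segment on edge L2 and B is the package-area centroid."""
    p1, p4, b = (np.asarray(p, dtype=float) for p in (p1, p4, b))
    m = (p1 + p4) / 2.0      # midpoint M of P1-P4 on edge L2
    v = b - m                # displacement vector v from M to centroid B
    p2 = p1 + 2.0 * v
    p3 = p4 + 2.0 * v
    return p2, p3
```

For a symmetric package the construction is exact: on a 2-unit-wide rectangle with P1 = (0, 0), P4 = (0, 4), and centroid (1, 2), it returns the true opposite corners (2, 0) and (2, 4).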

Optimizing machine learning capability

In this embodiment, machine learning is implemented with a convolutional neural network (CNN). To this end, a miniature version of YOLOv2 (also known as Tiny YOLO), executed on a graphics processing unit (GPU), is used to train visual recognition for each RTI. The general idea of conventional YOLOv2 is to divide an image into multiple grid cells and use anchor boxes to predict bounding boxes. Briefly, for each 416×416-pixel RTI of this embodiment, Tiny YOLOv2 generates an odd number of grid cells per image when the image is fed to the network; accordingly, there is exactly one cell at the centre (the centre cell). Each cell and its neighbours are then analysed to determine whether they contain any feature information. For example, the input image can be divided into a 13×13 grid, over which bounding boxes of five sizes can be predicted, and the confidence score returned from each cell can be used to decide whether that cell contains the object to be analysed. In this embodiment, because the input images have already been highlighted and their contours extracted, optimizing conventional Tiny YOLO improves recognition efficiency and reduces computational cost. Since every RTI has already been processed into a "full-frame image" and fitted to a predetermined pixel size, the number of anchor boxes can be reduced to one, and the anchor-box size can be fixed at 7×7 grid cells. The machine learning network therefore only needs to predict a single bounding box covering the full-size input image. The specific parameters of the optimized Tiny YOLO are listed in Table 2. Table 2: Parameters of the optimized Tiny YOLO network

Figure 107126302-A0305-0002
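The grid and anchor arithmetic above can be made concrete with a small sketch. The stride of 32 (416 → 13 cells) and the 4-offset + 1-confidence output per box follow the standard YOLOv2 head; the function name and the class count below are ours for illustration, not figures taken from the patent (the actual parameters are in Table 2).

```python
def yolo_head_shape(input_px=416, stride=32, num_anchors=1, num_classes=15):
    """Output tensor shape of a YOLO-style detection head.

    Each grid cell predicts, per anchor, 4 box offsets + 1 confidence
    score + one score per class. num_classes=15 is a placeholder."""
    grid = input_px // stride  # 416 / 32 -> 13x13 cells (odd, so a unique centre cell exists)
    channels = num_anchors * (5 + num_classes)
    return grid, grid, channels

# Cutting the anchors from YOLOv2's five down to one shrinks the
# per-cell prediction vector fivefold:
assert yolo_head_shape(num_anchors=5)[2] == 100
assert yolo_head_shape(num_anchors=1)[2] == 20
```

This is why the single-anchor, full-frame design reduces operating cost: the network predicts one box per image instead of five box hypotheses in each of 169 cells.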

For execution, the optimized Tiny YOLO can be programmed on a machine learning processor with a high-resolution graphics card (GEFORCE® GTX 1080, NVIDIA, USA). The optimized Tiny YOLO training model drives the machine learning processor to execute the deep learning procedure. Table 3 lists the training criteria of the optimized Tiny YOLO. Table 3: Training criteria of the optimized Tiny YOLO

Figure 107126302-A0305-0003

Example 2: Evaluating the classification efficiency of a medication management system that performs machine learning on the pharmaceutical database of Example 1

Processing and merging images

To evaluate the visual-recognition performance of the RTIs of the present disclosure, raw images of the medications (unprocessed and unmerged) and RTIs (processed and merged) were fed separately to the training model (the optimized Tiny YOLO) for deep learning. Table 4 summarizes the training results. According to the data in Table 4, the "highlighted" images of the present disclosure are far more effective for training visual recognition than the unprocessed raw images. The higher the F1-score of the training, the better the deep learning network recognizes the package images. Compared with the unprocessed images, RTIs fitted to the RTT also significantly improve the training results. Table 4: Comparison of unprocessed images and RTIs

Figure 107126302-A0305-0004
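As a reminder of what the comparison metric measures, here is a hedged sketch of the F1-score used in Table 4. The numeric values below are illustrative only; the table's actual figures are in the image above.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall -- the metric Table 4
    uses to compare raw images against processed/merged RTIs."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Unlike an arithmetic mean, F1 punishes imbalance between the two:
assert f1_score(1.0, 0.5) < f1_score(0.8, 0.8)
```

A higher F1 therefore indicates that the network both finds the package (recall) and rarely mislabels it (precision), which is why it is a sensible single number for ranking the training configurations.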

Optimized learning model

Another comparison was made to evaluate the performance of the training model of the present invention. Two conventional learning models, ResNet-101 and SE-ResNet-101, were used to build training models on the RTIs. Table 5 summarizes the comparison. Table 5: Comparison of the optimized system and conventional systems

Figure 107126302-A0305-0005

As the comparisons in Tables 4 and 5 show, RTIs processed by the method of the present invention to fit the RTT increase training performance and also significantly shorten the training time, especially when the images are processed with the optimized Tiny YOLO model. All 18,000 RTIs were fed to the training model, and the drug database of the present invention was thereby established by executing the optimized Tiny YOLO model. The drug database and the deep learning network can further be stored in an embedded system for subsequent applications.

Real-time drug identification application

In operation, a randomly selected medication is placed in a purpose-built cabinet equipped with a transparent glass plate. Light sources are arranged around the transparent plate for illumination, and two BRIO webcams (Logitech, USA) are mounted on the two sides of the plate, at a distance from it, so that each camera's field of view covers the entire plate area. Separately, the established drug database is stored in a JETSON™ TX2 real-time embedded computing device (NVIDIA, USA). The JETSON™ TX2, marketed as a "developer kit", integrates memory, a CPU, a GPU, USB ports, antennas, and other computing components, so the image-processing and machine learning steps can run on the same device. The two webcams are connected to the JETSON™ TX2 via USB cables, and images of both sides of the selected medication are thus transmitted to the processor simultaneously in real time. The two raw images of the selected blister package are processed into one RTI, which is then passed to the Tiny YOLO learning model programmed on the JETSON™ TX2 for the subsequent procedure. The visual recognition of the Tiny YOLO model alone runs at about 200 frames per second (FPS); the full pipeline, from image capture to visual recognition, runs at about 6.23 FPS. The recognition result can be presented in real time on an external display device, such as a computer screen or a mobile-phone user interface, so an external user obtains the identification result almost as soon as the medication is placed in the cabinet.
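A quick sanity check on the throughput figures reported above (the helper name is ours; the 200 FPS and 6.23 FPS values are the ones stated in the text):

```python
def per_item_latency_ms(fps):
    """Convert pipeline throughput in frames per second into the
    latency per dispensed item, in milliseconds."""
    return 1000.0 / fps

# Recognition alone runs at ~200 FPS (5 ms); the full
# capture-to-result pipeline at ~6.23 FPS, i.e. roughly 160 ms per
# medication -- fast enough to feel instantaneous to the user.
assert per_item_latency_ms(200) == 5.0
assert 160 < per_item_latency_ms(6.23) < 161
```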

Furthermore, another comparison was made to evaluate whether the system of the present invention effectively reduces the probability of human error during dispensing. For example, the anti-anxiety drug lorazepam looks very similar to several other medications and is therefore among the drugs most easily misidentified during dispensing. First, the RTI of lorazepam was fed to the system of the present invention and compared against every drug in the database. After computation, the learning model proposed several candidate drugs that, based on appearance similarity, could be mistaken for it. This candidate list helps clinical staff double-check that the selected drug is correct. The medication management system of the present disclosure can therefore improve dispensing accuracy and minimize human error to ensure patient safety.
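The candidate-list step can be sketched as a simple top-k selection over the model's per-class confidence scores. The drug names (other than lorazepam) and the scores below are invented for illustration; the patent does not publish its score values.

```python
def candidate_list(scores, k=5):
    """Return the k drug labels the model considers most similar,
    highest confidence first -- the 'candidate list' a pharmacist
    reviews before dispensing."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:k]]

# Hypothetical confidence scores for a lorazepam blister package:
scores = {"Lorazepam": 0.91, "Lookalike A": 0.04,
          "Lookalike B": 0.03, "Aspirin": 0.01}
assert candidate_list(scores, k=2) == ["Lorazepam", "Lookalike A"]
```

Presenting the runners-up rather than only the top hit is the design choice that catches lookalike packages: a close second score is exactly the situation where a human double-check is most valuable.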

In summary, the method and system for medication management of the present invention achieve the aforementioned objectives by performing image-processing steps that merge multiple images into one and by executing a deep learning model in real time. The advantage of the present disclosure is not only that raw images can be processed effectively regardless of the orientation of the object (i.e., the medication), but also that the accuracy of medication classification is increased through the medication's appearance.

It should be understood that the foregoing description of the embodiments is given by way of example only, and that those of ordinary skill in the art may make various modifications. The above specification, examples, and experimental results provide a complete description of the structure and use of exemplary embodiments of the invention. Although various specific embodiments of the invention are disclosed above, they are not intended to limit the invention; those of ordinary skill in the art to which the invention pertains may make various changes and modifications without departing from the principle and spirit of the invention, and the scope of protection of the invention is therefore defined by the appended claims.

100, 200: method; 300: system; 310: image capturing device; 3101: transparent plate; 3102: image capturing unit; 320: image processor; 330: machine learning processor; 3301: drug database; S110-S140, S210-S250: steps

To make the above and other objects, features, advantages, and embodiments of the present invention more comprehensible, the accompanying drawings are described as follows.

FIG. 1 is a flowchart of a method 100 according to an embodiment of the present disclosure.

FIG. 2 is a flowchart of a method 200 according to an embodiment of the present disclosure.

FIG. 3 illustrates a medication management system 300 according to an embodiment of the present disclosure.

FIG. 4 shows an example of the present disclosure illustrating how the corner points of a package are defined with the centroid algorithm.

In accordance with common practice, the various elements and features in the drawings are not drawn to scale; they are drawn to best present the specific features and elements relevant to the invention. In addition, the same or similar reference numerals refer to similar elements/components across different drawings.

S110-S140: steps

Claims (17)

A computer-implemented method for establishing a drug database, comprising: (a) receiving a plurality of raw images of a package of a medication; (b) juxtaposing two of the plurality of raw images to produce a combined image, wherein the two raw images differ from each other; (c) processing the combined image to produce a reference image; and (d) establishing the drug database with the aid of the reference image, wherein step (b) comprises: (b-1) processing the plurality of raw images respectively to produce a plurality of first processed images each having a defined contour; (b-2) identifying the corner points of each defined contour of the plurality of first processed images to determine their coordinates; (b-3) rotating each of the first processed images of step (b-1) based on the coordinates determined in step (b-2) to produce a plurality of second processed images; and (b-4) combining any two of the plurality of second processed images to produce the combined image of step (b).

The computer-implemented method of claim 1, further comprising simultaneously capturing the plurality of raw images of the package of the medication before step (a).

The computer-implemented method of claim 1, wherein the medication is in the form of a blister package.
The computer-implemented method of claim 1, wherein each of the plurality of raw images of step (b-1) is subjected to: (i) a grayscale conversion process, (ii) a noise-filtering process, (iii) an edge-recognition process, (iv) a convex-hull operation, and (v) a contour-finding process.

The computer-implemented method of claim 1, wherein step (b-2) is performed with a line-transform algorithm or a centroid algorithm.

The computer-implemented method of claim 1, wherein the combined image comprises images of both sides of the blister package of the medication.

The computer-implemented method of claim 1, wherein step (c) is performed with a machine learning algorithm.

A computer-implemented method for identifying a medication via its blister package, comprising: (a) simultaneously acquiring front and back images of the blister package of the medication; (b) juxtaposing the front and back images of step (a) to produce a candidate image; (c) comparing the candidate image with the reference images of the drug database established by the method of claim 1; and (d) outputting the result of step (c), wherein step (b) comprises: (b-1) processing the front and back images of step (a) respectively to produce two first processed images each having a defined contour; (b-2) identifying the corner points of each defined contour of the two first processed images to determine their coordinates; (b-3) rotating the two first processed images of step (b-1) based on the coordinates determined in step (b-2) to produce two second processed images; and (b-4) combining the two second processed images of step (b-3) to produce the candidate image of step (b).

The computer-implemented method of claim 8, wherein the front and back images of step (b-1) are respectively subjected to: (i) a grayscale conversion process, (ii) a noise-filtering process, (iii) an edge-recognition process, (iv) a convex-hull operation, and (v) a contour-finding process.

The computer-implemented method of claim 8, wherein step (b-2) is performed with a line-transform algorithm or a centroid algorithm.

The computer-implemented method of claim 8, wherein step (c) is performed with a machine learning algorithm.

The computer-implemented method of claim 8, further comprising transmitting the candidate image to the drug database before step (c).
A medication management system, comprising: an image capturing device configured to capture a plurality of images of a package of a medication; an image processor programmed with instructions to execute a method for producing a candidate image, wherein the method comprises: (1) processing the plurality of images of the package of the medication respectively to produce a plurality of first processed images each having a defined contour; (2) identifying the corner points of each defined contour of the plurality of first processed images to determine their coordinates; (3) rotating each of the first processed images of step (1) based on the coordinates determined in step (2) to produce a plurality of second processed images; and (4) juxtaposing two of the plurality of second processed images of step (3) to produce the candidate image, wherein the two second processed images differ from each other; and a machine learning processor programmed with instructions to execute a method for comparing the candidate image with the reference images of the drug database established by the method of claim 1.

The system of claim 13, wherein the image capturing device comprises: a transparent plate on which the medication is placed; and two image capturing units respectively disposed above the two sides of the transparent plate.
The system of claim 13, wherein each of the plurality of images of step (1) is subjected to: (i) a grayscale conversion process, (ii) a noise-filtering process, (iii) an edge-recognition process, (iv) a convex-hull operation, and (v) a contour-finding process.

The system of claim 13, wherein step (2) is performed with a line-transform algorithm or a centroid algorithm.

The system of claim 13, wherein the method for comparing the candidate image with the reference images of the drug database is performed with a machine learning algorithm.
TW107126302A 2018-07-30 2018-07-30 Method and system for sorting and identifying medication via its label and/or package TWI695347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107126302A TWI695347B (en) 2018-07-30 2018-07-30 Method and system for sorting and identifying medication via its label and/or package


Publications (2)

Publication Number Publication Date
TW202008309A TW202008309A (en) 2020-02-16
TWI695347B true TWI695347B (en) 2020-06-01

Family

ID=70412936

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107126302A TWI695347B (en) 2018-07-30 2018-07-30 Method and system for sorting and identifying medication via its label and/or package

Country Status (1)

Country Link
TW (1) TWI695347B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201625195A (en) * 2014-09-25 2016-07-16 Yuyama Mfg Co Ltd Inspection assistance system and tablet packaging device
TW201631521A (en) * 2015-02-16 2016-09-01 美和學校財團法人美和科技大學 Method of drug identification
US20160342767A1 (en) * 2015-05-20 2016-11-24 Watchrx, Inc. Medication adherence device and coordinated care platform
CN106981064A (en) * 2017-03-16 2017-07-25 亿信标准认证集团有限公司 Drug bottle packaging standard certification on-line detecting system based on machine vision technique
US20170270355A1 (en) * 2011-02-28 2017-09-21 Aic Innovations Group, Inc. Method and Apparatus for Pattern Tracking
CN107920956A (en) * 2015-09-28 2018-04-17 富士胶片株式会社 Medicament check device and method and program


Also Published As

Publication number Publication date
TW202008309A (en) 2020-02-16
