TWI839583B - Product line monitoring method and monitoring system thereof - Google Patents

Product line monitoring method and monitoring system thereof

Info

Publication number
TWI839583B
Authority
TW
Taiwan
Prior art keywords
action type
recognition model
image recognition
images
training
Prior art date
Application number
TW109138721A
Other languages
Chinese (zh)
Other versions
TW202219671A
Inventor
健麟 羅
志恒 王
Original Assignee
英屬維爾京群島商百威雷科技控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 英屬維爾京群島商百威雷科技控股有限公司
Priority to TW109138721A
Publication of TW202219671A
Application granted
Publication of TWI839583B


Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Burglar Alarm Systems (AREA)
  • Image Analysis (AREA)
  • General Factory Administration (AREA)

Abstract

The present disclosure provides a product line monitoring method and a monitoring system thereof. The monitoring system is configured to: obtain a plurality of images of an operator; determine, based on an image recognition model, an action type of the operator in the images; determine an occurrence time and an action period of the operator's action; and record the occurrence time and the action period of the operator's action.

Description

Production line monitoring method and monitoring system thereof

The present invention relates to a production line monitoring method and a monitoring system thereof and, more particularly, to a production line monitoring method and monitoring system based on machine learning technology.

In the manufacturing of traditional industrial products, the assembly of device components still requires manual work. Specifically, a single device often comprises many parts, and the assembly of these parts onto the device is usually carried out manually by operators at the individual stations of a factory production line.

However, errors in manual operation or delays caused by various factors often create bottlenecks in production line output. Monitoring devices are therefore needed on the production line to record and identify the causes of those bottlenecks, so that efficiency can subsequently be improved.

However, conventional monitoring devices mostly provide only video recording. When an incident occurs on the production line, the recorded footage of that line must still be searched manually to determine the cause of the error or delay.

Some embodiments of the present invention provide a production line monitoring method for a monitoring system, comprising: obtaining a plurality of images of an operator; determining, based on an image recognition model, an action type of the operator in the images; determining an occurrence time and an action period of the action type; and recording the action type, the occurrence time, and the action period.

Some embodiments of the present invention provide a production line monitoring method for a monitoring system, comprising: obtaining a video, wherein the video comprises a plurality of video segments; determining, based on an image recognition model, an action type of each video segment; receiving a user setting to change the action type of a first video segment of the plurality of video segments; and adjusting the image recognition model according to the action type of the first video segment.

Some embodiments of the present invention provide a monitoring system for production line monitoring, comprising a processor and a storage unit. The storage unit stores a program and an image recognition model. When executed, the program causes the processor to: obtain a plurality of images of an operator; determine, based on the image recognition model, an action type of the operator in the images; determine an occurrence time and an action period of the action type; and record the action type, the occurrence time, and the action period.

The foregoing has outlined rather broadly the features and technical advantages of the present invention so that the embodiments that follow may be better understood. Additional features and advantages of the present invention are described below and form the subject of the claims of the present invention. Those skilled in the art should appreciate that the disclosed concepts and specific embodiments may readily be used as a basis for modifying or designing other structures or processes for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present invention as set forth in the appended claims.

1: Monitoring system
11: Processor
13: Storage unit
130: Program
132: Image recognition model
17: Communication bus
2: Monitoring system
21: Processor
23: Storage unit
230: Program
232: Image recognition model
234: Training data
25: Input device
27: Communication bus
3: Monitoring system
31: Processor
33: Storage unit
330: Program
332A: Image recognition model
332B: Image recognition model
334: Training data
35: Input device
37: Communication bus
70A: Monitoring area
70B: Monitoring area
71: Image capture device
710: Video
72: Production line machine
73: Operator
74: Object
80A: Monitoring area
81: Image capture device
810: Video
812: Video
82: Production line machine
83: Operator
91: Image capture device
910: Images
92: Production line machine
93: Operator
S401~S404: Steps
S501~S518: Steps

Aspects of the present invention are best understood from the following detailed description when read in conjunction with the accompanying figures. It should be noted that, in accordance with standard practice in the industry, the various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.

A more thorough understanding of the present invention may be obtained by referring to the detailed description and the claims when considered in connection with the accompanying figures, in which like reference numerals refer to similar elements throughout the figures.

FIG. 1A is a block diagram of a monitoring system according to some embodiments of the present invention.

FIG. 1B is a schematic diagram illustrating the use of the monitoring system according to some embodiments of the present invention.

FIG. 2A is a block diagram of a monitoring system according to some embodiments of the present invention.

FIG. 2B is a schematic diagram illustrating the use of the monitoring system according to some embodiments of the present invention.

FIGS. 2C to 2F are schematic diagrams of frames captured by an image capture device according to some embodiments of the present invention.

FIG. 3A is a block diagram of a monitoring system according to some embodiments of the present invention.

FIG. 3B is a schematic diagram illustrating the use of the monitoring system according to some embodiments of the present invention.

FIGS. 3C to 3F are schematic diagrams of frames captured by an image capture device according to some embodiments of the present invention.

FIGS. 4A to 4B are flowcharts of a production line monitoring method according to some embodiments of the present invention.

FIGS. 5A to 5G are flowcharts of a production line monitoring method according to some embodiments of the present invention.

Embodiments or examples of the present invention illustrated in the drawings are now described using specific language. It will be understood that no limitation of the scope of the invention is thereby intended. Any alterations or modifications of the described embodiments, and any further applications of the principles described in this document, as would normally occur to one of ordinary skill in the art to which the invention relates, are contemplated. Reference numerals may be repeated throughout the embodiments, but this does not necessarily mean that one or more features of one embodiment apply to another embodiment, even if the embodiments share the same reference numerals.

It will be understood that, although the terms first, second, third, and so on may be used herein to describe various elements, components, regions, layers, or sections, these elements, components, regions, layers, or sections are not limited by these terms. Rather, these terms are only used to distinguish one element, component, region, layer, or section from another. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of the inventive concept.

The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to limit the inventive concept. As used herein, the singular forms "a/an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.

Errors in manual operation or delays caused by other factors on a production line often lead to bottlenecks in production output. Because conventional production line monitoring devices merely record video, the footage must still be searched manually to find the cause of an error or delay. This debugging approach is inefficient and inflexible and cannot effectively relieve production line bottlenecks. Accordingly, in order to identify the causes of errors or delays on a production line more quickly and precisely, and thereby improve the efficiency of production output, innovative monitoring methods and monitoring systems are needed.

Please refer to FIG. 1A, which is a block diagram of a monitoring system 1 according to some embodiments of the present invention. The monitoring system 1 includes a processor 11 and a storage unit 13. The storage unit 13 stores a program 130 and an image recognition model 132. The image recognition model 132 may include a model related to machine learning technology; more specifically, the image recognition model 132 is a machine learning model generated from a plurality of training data according to a machine learning algorithm.

Specifically, in some embodiments, image data and the action types actually corresponding to that image data can be used as training data to train the image recognition model 132 based on a machine learning algorithm (i.e., to generate the image recognition model 132). The image recognition model 132 can then receive image data and output the action type of the operator in the images. For example, after receiving an image sequence of the operator, the image recognition model 132 determines that the operator is performing a "pick up" or "put down" action and outputs the action type "pick up" or "put down".
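
For concreteness, the following is a minimal sketch of what such a recognition step could look like, using an R3D-18 video classifier (one of the CNN families mentioned later in this disclosure) as a stand-in for the image recognition model 132. The class names, clip shape, and model choice are illustrative assumptions, not the patented implementation.

```python
import torch
from torchvision.models.video import r3d_18

ACTIONS = ["pick_up", "put_down"]        # assumed action classes

# Assumed recognition model: an R3D-18 video classifier with a two-class head.
# In practice this model would be trained as sketched further below.
model = r3d_18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(ACTIONS))
model.eval()

# An "image sequence of the operator": 16 RGB frames of 112x112 pixels.
clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, frames, H, W)

with torch.no_grad():
    logits = model(clip)
action_type = ACTIONS[int(logits.argmax(dim=1))]
print(action_type)                        # e.g. "pick_up" or "put_down"
```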

The processor 11 and the storage unit 13 are electrically connected via a communication bus 17. Through the communication bus 17, the processor 11 can execute the program 130 stored in the storage unit 13. When executed, the program 130 may generate one or more interrupts, such as software interrupts, causing the processor 11 to carry out the production line monitoring functions of the program 130. Those functions are described further below.

Please refer to FIG. 1B, which is a schematic diagram illustrating the use of the monitoring system 1 according to some embodiments of the present invention. In detail, when the operation of a production line machine 92 needs to be monitored and analyzed, an image capture device 91 can be installed in the environment where the production line machine 92 is located to capture images related to the production line machine 92. The monitoring system 1 can connect to the image capture device 91 through a network connection (wired or wireless).

In some embodiments, when an operator 93 operates the production line machine 92, the image capture device 91 can capture a plurality of images 910 of the operator 93 at the position of the production line machine 92 and transmit the images 910 to the monitoring system 1 via the network. In other words, the monitoring system 1 can obtain the plurality of images 910 of the operator 93 from the image capture device 91 via the network.

Then, using the image recognition model 132 generated and stored in the storage unit 13 as described above, the processor 11 of the monitoring system 1 can determine the action type of the operator 93 in the plurality of images 910. Because the images 910 carry timestamp information, the processor 11 can determine when the images 910 were captured and, from this, determine the occurrence time and the action period of the action type represented by the images 910. The processor 11 can record the action type, the occurrence time, and the action period in the storage unit 13 for subsequent use.
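
As one hedged illustration of this step, the sketch below collapses per-frame model outputs, each paired with its timestamp, into (action type, occurrence time, action period) records. The data layout and function name are assumptions for illustration only.

```python
from itertools import groupby

def summarize_actions(frame_predictions):
    """Collapse per-frame (timestamp, action_type) predictions into
    (action_type, occurrence_time, action_period) records.

    `frame_predictions` is a list of (timestamp_in_seconds, action_type)
    tuples sorted by timestamp, as produced by an image recognition model.
    """
    records = []
    for action_type, group in groupby(frame_predictions, key=lambda p: p[1]):
        frames = list(group)
        occurrence_time = frames[0][0]                 # first frame of the action
        action_period = frames[-1][0] - frames[0][0]   # duration of the action
        records.append({
            "action_type": action_type,
            "occurrence_time": occurrence_time,
            "action_period": action_period,
        })
    return records

# Example: three "pick_up" frames followed by two "put_down" frames.
preds = [(0.0, "pick_up"), (0.5, "pick_up"), (1.0, "pick_up"),
         (1.5, "put_down"), (2.0, "put_down")]
print(summarize_actions(preds))
```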

Please refer to FIG. 2A, which is a block diagram of a monitoring system 2 according to some embodiments of the present invention. The monitoring system 2 includes a processor 21, a storage unit 23, and an input device 25. The storage unit 23 stores a program 230, an image recognition model 232, and training data 234. The image recognition model 232 may include a model related to machine learning technology that receives video data (i.e., image sequence data) and outputs the action type of the operator in the video.

The processor 21, the storage unit 23, and the input device 25 are electrically connected via a communication bus 27. Through the communication bus 27, the processor 21 can execute the program 230 stored in the storage unit 23. When executed, the program 230 may generate one or more interrupts, such as software interrupts, causing the processor 21 to carry out the production line monitoring functions of the program 230. Those functions are described further below.

In some embodiments, the image recognition model 232 is a machine learning model generated from a plurality of training data 234 according to a machine learning algorithm. In detail, video data and the action types actually corresponding to that video data can be used as training data to train the image recognition model 232 based on the machine learning algorithm (i.e., to generate the image recognition model 232).

More specifically, each item of training data 234 may include: (1) video data; and (2) the action type corresponding to that video data. When the program 230 is executed, the processor 21 retrieves the training data 234 from the storage unit 23 and uses the machine learning algorithm to train the image recognition model 232 on the plurality of training data 234.

In other words, the video data of the training data 234 can be used as training input during the training phase, and the action types of the training data 234 can be used as training output during the training phase. After the processor 21 generates the image recognition model 232, the image recognition model 232 can be stored in the storage unit 23 for subsequent use.

It should be noted that in some embodiments the machine learning algorithm mainly adopts a convolutional neural network (CNN) algorithm to construct, from the training data 234, the image recognition model 232 used to determine the action type. In some examples, the CNN algorithm may include image processing and image recognition algorithms such as the YOLO (you only look once) algorithm or the R3D (ResNet 3D) algorithm, but these examples do not limit the machine learning algorithms of the present invention.

In some embodiments, the program code of the CNN algorithm used to train the image recognition model 232 contains a training function. During training of the image recognition model 232, the training function may include a portion that receives the training data 234.

Further, the video data can be used as the training input, and the action type corresponding to the video data can be used as the training output. The training function can then be executed from the main function of the CNN algorithm's program code to train the image recognition model 232. After the image recognition model 232 has been generated based on the CNN algorithm and the training data, the image recognition model 232 can be used to determine the action type corresponding to an input video.
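
The disclosure does not include code; the following is a minimal training sketch under the assumption that the model is an R3D-18 video classifier trained in PyTorch on (video clip, action type) pairs. The dataset shapes, class count, and hyperparameters are placeholders rather than the patented implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.video import r3d_18

NUM_CLASSES = 2          # assumed classes: 0 = "pick_up", 1 = "put_down"

# Placeholder training data: in practice each sample is a video clip of an
# operator, shaped (channels, frames, height, width), with its action label.
clips = torch.randn(8, 3, 16, 112, 112)
labels = torch.randint(0, NUM_CLASSES, (8,))
loader = DataLoader(TensorDataset(clips, labels), batch_size=4, shuffle=True)

# R3D (ResNet 3D) backbone with its classifier head replaced for two actions.
model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):                       # a few epochs just for the sketch
    for batch_clips, batch_labels in loader:
        optimizer.zero_grad()
        logits = model(batch_clips)          # (batch, NUM_CLASSES)
        loss = criterion(logits, batch_labels)
        loss.backward()
        optimizer.step()
```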

Please refer to FIG. 2B, which is a schematic diagram illustrating the use of the monitoring system 2 according to some embodiments of the present invention. In detail, when the operation of a production line machine 82 needs to be monitored and analyzed, an image capture device 81 can be installed in the environment where the production line machine 82 is located to capture video related to the production line machine 82. The monitoring system 2 can connect to the image capture device 81 through a network connection (wired or wireless).

In some embodiments, when an operator 83 operates the production line machine 82, the image capture device 81 can capture video 810 of the operator 83 in real time (e.g., as a video stream) at the position of the production line machine 82 and transmit the video 810 to the monitoring system 2 via the network. In other words, the monitoring system 2 can obtain the video 810 of the operator 83 from the image capture device 81 via the network.

In some embodiments, to increase the accuracy of the image recognition model 232, video captured on site at the production line machine 82 can be used as feedback data for adjusting the image recognition model 232. In detail, the video 810 may include a plurality of video segments, and using the image recognition model 232 generated and stored in the storage unit 23, the processor 21 of the monitoring system 2 can determine the action type of the operator 83 in each video segment.

After the processor 21 has used the image recognition model 232 to determine the action type of the operator 83 in each video segment of the video 810, the monitoring system 2 can present the video segments and their corresponding action types to a user, so that the user can determine whether the image recognition model 232 has misclassified any of them. In some embodiments, the monitoring system 2 presents the video segments and their corresponding action types to the user through a display (not shown) and a graphical user interface (GUI).

If the user determines that the action type assigned to a particular video segment is a misclassification by the image recognition model 232, the user can enter a user setting through the input device 25 to change the action type of that video segment to the correct action.

The processor 21 can then update the training data 234 with this particular video segment and the corrected action type, and regenerate the image recognition model 232 from the updated plurality of training data 234. More specifically, the processor 21 can generate the image recognition model 232 based on the machine learning algorithm using the original training data 234, the at least one particular video segment, and the at least one action type corresponding to that segment.

In this way, because the data used to retrain the image recognition model 232 now includes data specific to the production line machine 82 and the operator 83 (i.e., the at least one particular video segment and its corresponding at least one action type), the updated image recognition model 232 will be more accurate when applied to the environment of the production line machine 82.

The technique of using video captured on site at the production line machine 82 as feedback data for adjusting the image recognition model 232 can be understood more clearly from the following example. Suppose the video 810 includes ten video segments "C1" to "C10". Using the image recognition model 232 generated and stored in the storage unit 23, the processor 21 of the monitoring system 2 can determine the action type of the operator 83 in each of the video segments "C1" to "C10" (for example, a "pick up" action or a "put down" action).

After the processor 21 has used the image recognition model 232 to determine the action types of the video segments "C1" to "C10", the monitoring system 2 presents the video segments "C1" to "C10" and their respective action types to the user through the display and the GUI, so that the user can determine whether the image recognition model 232 has misclassified any of them.

In this example, the monitoring system 2 classifies the action types of the video segments "C1" and "C8" as a "pick up" action and a "put down" action, respectively. The user, however, determines that the action types of "C1" and "C8" should be a "put down" action and a "pick up" action, respectively, and therefore enters user settings through the input device 25 to correct them accordingly. The processor 21 then updates the training data 234 with the video segments "C1" and "C8" and their corrected action types and regenerates the image recognition model 232 from the updated plurality of training data 234.
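
A minimal sketch of this correction-and-retrain loop is shown below. The segment identifiers follow the C1 to C10 example above; the helper names and data structures are assumptions for illustration and are not defined in the disclosure.

```python
def apply_user_corrections(segments, predictions, corrections):
    """Return an updated list of (segment, action_type) training pairs.

    segments:    dict mapping segment id (e.g. "C1") to its video clip data
    predictions: dict mapping segment id to the model's predicted action type
    corrections: dict mapping segment id to the user-corrected action type
    """
    labeled = dict(predictions)
    labeled.update(corrections)          # user settings override the model
    return [(segments[seg_id], action) for seg_id, action in labeled.items()]

# The model labeled C1/C8 as "pick_up"/"put_down"; the user swaps those two.
segments = {f"C{i}": f"<clip data {i}>" for i in range(1, 11)}
predictions = {f"C{i}": "pick_up" if i % 2 else "put_down" for i in range(1, 11)}
corrections = {"C1": "put_down", "C8": "pick_up"}

new_training_pairs = apply_user_corrections(segments, predictions, corrections)
# training_data = original_training_data + new_training_pairs
# model = train_model(training_data)     # retrain as in the earlier sketch
```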

In some embodiments, after the image recognition model 232 has been updated through the foregoing steps, when the operator 83 continues to operate the production line machine 82, the image capture device 81 can capture video 812 of the operator 83 at the position of the production line machine 82 and transmit the video 812 to the monitoring system 2 via the network. In other words, the monitoring system 2 can obtain the video 812 of the operator 83 from the image capture device 81 via the network, where the video 812 includes a plurality of video segments.

Then, using the updated image recognition model 232 stored in the storage unit 23, the processor 21 of the monitoring system 2 can determine the action type of each video segment of the video 812. Because each video segment carries timestamp information, the processor 21 can determine when each segment was captured and, from this, determine the occurrence time and the action period of the action type represented by each segment. The processor 21 can record the action type and the action period in the storage unit 23 for subsequent use.

In some embodiments, for each video segment stored in the storage unit 23, the processor 21 can determine whether the action period of the corresponding action type exceeds a period threshold. If so, the processor 21 marks that action type and the corresponding video segment and records the action type, occurrence time, and action period of that segment in a log file. The user can then use the log file to efficiently retrieve the marked segments from the video 812 and investigate why the action periods of those segments exceeded the period threshold, so that the causes of the delay can be eliminated quickly.

For example, suppose a "pick up" action is expected to be completed within 3 seconds. The processor 21 then checks, for every video segment corresponding to a "pick up" action, whether its action period exceeds 3 seconds. If so, the processor 21 marks that action type and the corresponding video segment and records the action type, occurrence time, and action period of that segment in the log file. The user can then use the log file to efficiently retrieve the marked segments from the video 812 and investigate why the action periods of those segments exceeded 3 seconds, so that the causes of the delay can be eliminated quickly.
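
As a hedged sketch of this check, the code below marks segments whose action period exceeds a per-action threshold and writes them to a CSV log file; the record format, threshold values, and file name are assumptions (the 3-second "pick up" threshold follows the example above).

```python
import csv

PERIOD_THRESHOLDS = {"pick_up": 3.0, "put_down": 3.0}   # seconds, assumed values

def log_slow_actions(segment_records, log_path="slow_actions.csv"):
    """Mark segments whose action period exceeds the threshold and log them.

    Each record is a dict with keys: segment_id, action_type,
    occurrence_time, action_period. Returns the list of marked records.
    """
    marked = [r for r in segment_records
              if r["action_period"] > PERIOD_THRESHOLDS.get(r["action_type"], float("inf"))]
    with open(log_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["segment_id", "action_type", "occurrence_time", "action_period"])
        writer.writeheader()
        writer.writerows(marked)
    return marked

records = [
    {"segment_id": "C3", "action_type": "pick_up", "occurrence_time": 12.0, "action_period": 2.1},
    {"segment_id": "C5", "action_type": "pick_up", "occurrence_time": 40.5, "action_period": 4.8},
]
print(log_slow_actions(records))   # only C5 exceeds the 3-second threshold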

In some embodiments, for every two consecutive video segments stored in the storage unit 23, the processor 21 can determine whether the time difference between the occurrence times of their two action types exceeds a time threshold. If so, the processor 21 marks the two action types and the two corresponding video segments and records their action types, occurrence times, and action periods in the log file. The user can then use the log file to efficiently retrieve the two marked segments from the video 812 and investigate why the time difference between the occurrence times of the two action types exceeded the time threshold, so that the causes of the delay can be eliminated quickly.

For example, suppose that between a consecutive "pick up" action and "put down" action, the associated part placement operation is expected to be completed within 10 seconds. The processor 21 then checks, for the two video segments of the consecutive "pick up" and "put down" actions, whether the time difference exceeds 10 seconds. If so, the processor 21 marks the two action types and the two corresponding video segments and records their action types, occurrence times, and action periods in the log file. The user can then use the log file to efficiently retrieve the two marked segments from the video 812 and investigate why the time difference between the occurrence times of the two action types exceeded 10 seconds, so that the causes of the delay can be eliminated quickly.
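
A corresponding sketch for the consecutive-segment check follows, assuming the segment records are sorted by occurrence time; the record format is the same assumed structure as in the previous sketch, and the 10-second threshold matches the example above.

```python
TIME_THRESHOLD = 10.0   # seconds, assumed value

def find_slow_transitions(segment_records, time_threshold=TIME_THRESHOLD):
    """Return pairs of consecutive segments whose occurrence times are too far apart.

    `segment_records` is a list of dicts sorted by occurrence_time, each with
    keys: segment_id, action_type, occurrence_time, action_period.
    """
    marked_pairs = []
    for prev, curr in zip(segment_records, segment_records[1:]):
        gap = curr["occurrence_time"] - prev["occurrence_time"]
        if gap > time_threshold:
            marked_pairs.append((prev, curr, gap))
    return marked_pairs

records = [
    {"segment_id": "C1", "action_type": "pick_up",  "occurrence_time": 0.0,  "action_period": 2.0},
    {"segment_id": "C2", "action_type": "put_down", "occurrence_time": 14.2, "action_period": 1.5},
]
for prev, curr, gap in find_slow_transitions(records):
    print(f"{prev['segment_id']} -> {curr['segment_id']}: gap {gap:.1f}s exceeds threshold")
```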

Please refer to FIG. 2C, which is a schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. In some embodiments, because the image capture device 81 captures a relatively large image or video area, the processor 21 consumes considerable hardware resources and time when processing the images or video with the image recognition model 232.

However, not everything captured by the image capture device 81 needs to be monitored. A smaller region requiring monitoring can therefore be defined within the captured image or video, and the processor 21 only needs to process that smaller region with the image recognition model 232, which greatly speeds up processing.

Please refer to FIG. 2D, which is another schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. Specifically, the user can enter a user setting through the input device 25 to define a monitoring area 80A within the image range captured by the image capture device 81, and the processor 21 only needs to process the image or video of the monitoring area 80A with the image recognition model 232. Because the image or video of the monitoring area 80A is smaller, the processing speed of the monitoring system 2 can be greatly increased.
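
A minimal sketch of this region-of-interest cropping with OpenCV follows. The monitoring-area coordinates and video source are assumptions; in the disclosure they come from a user setting and from the image capture device 81.

```python
import cv2

# Assumed monitoring-area coordinates (x, y, width, height) in pixels.
MONITOR_AREA = (320, 180, 400, 300)

def crop_monitoring_area(frame, area=MONITOR_AREA):
    """Return only the user-defined monitoring area of a captured frame,
    so the recognition model processes a much smaller image."""
    x, y, w, h = area
    return frame[y:y + h, x:x + w]

cap = cv2.VideoCapture("production_line.mp4")    # assumed video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = crop_monitoring_area(frame)
    # action_type = recognition_model(roi)        # run the model on the ROI only
cap.release()
```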

In some embodiments, when the on-site environment of the production line machine 82 changes (for example, the angle of the image capture device 81 is adjusted, the operator arrangement changes, or the operator's position changes), the area intended to be monitored may shift away from the monitoring area 80A, increasing the classification error of the image recognition model 232. In this case, the user can directly adjust the position of the monitoring area 80A to reduce the deviation caused by the environmental change at the production line machine 82.

Please refer to FIG. 2E, which is another schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. Specifically, because the on-site environment of the production line machine 82 has changed, the image or video within the monitoring area 80A is no longer the content that needs to be monitored, which may increase the classification error of the image recognition model 232.

Please refer to FIG. 2F, which is another schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. Specifically, the user can enter another user setting through the input device 25 to move the monitoring area 80A within the image range captured by the image capture device 81, so that the area that needs to be monitored is covered again.

In some embodiments, the frames captured by the image capture device 81 can first be transmitted to the monitoring system 2. The monitoring system 2 can then display these frames on an ordinary display (not shown) and receive user settings through an input device 25 such as a keyboard or mouse, allowing the monitoring system 2 to complete the related operations.

In some embodiments, the frames captured by the image capture device 81 can first be transmitted to the monitoring system 2. The monitoring system 2 can then transmit the frames over the network to a remote display (for example, a handheld smart device or a notebook computer) and receive user settings through an input device 25 such as a network interface, allowing the monitoring system 2 to complete the related operations.

Please refer to FIG. 3A, which is a block diagram of a monitoring system 3 according to some embodiments of the present invention. The monitoring system 3 includes a processor 31, a storage unit 33, and an input device 35. The storage unit 33 stores a program 330, an image recognition model 332A, an image recognition model 332B, and training data 334A and 334B. The image recognition models 332A and 332B may include models related to machine learning technology, used to determine the operator's action type or the change in object quantity in video data (i.e., image sequence data).

The processor 31, the storage unit 33, and the input device 35 are electrically connected via a communication bus 37. Through the communication bus 37, the processor 31 can execute the program 330 stored in the storage unit 33. When executed, the program 330 may generate one or more interrupts, such as software interrupts, causing the processor 31 to carry out the production line monitoring functions of the program 330. Those functions are described further below.

In some embodiments, the image recognition model 332A is a machine learning model generated from a plurality of training data 334A according to a machine learning algorithm. In detail, video data and the action types actually corresponding to that video data can be used as training data to train the image recognition model 332A based on the machine learning algorithm (i.e., to generate the image recognition model 332A).

More specifically, each item of training data 334A may include: (1) video data; and (2) the action type corresponding to that video data. When the program 330 is executed, the processor 31 retrieves the training data 334A from the storage unit 33 and uses the machine learning algorithm to train the image recognition model 332A on the plurality of training data 334A.

In other words, the video data of the training data 334A can be used as training input during the training phase, and the action types of the training data 334A can be used as training output during the training phase. After the processor 31 generates the image recognition model 332A, the image recognition model 332A can be stored in the storage unit 33 for subsequent use.

In some embodiments, the video data of the training data 334A used as training input includes image data of operators' actions, and the action types corresponding to that video data are used as training output. The program code of the CNN algorithm can then be executed to train the image recognition model 332A. After the image recognition model 332A has been generated based on the CNN algorithm and the training data, the image recognition model 332A can be used to determine the action type corresponding to an input video.

In some embodiments, the image recognition model 332B is a machine learning model generated from a plurality of training data 334B according to a machine learning algorithm. In detail, video data and the changes in object quantity actually corresponding to that video data can be used as training data to train the image recognition model 332B based on the machine learning algorithm (i.e., to generate the image recognition model 332B).

More specifically, each item of training data 334B may include: (1) video data; and (2) the change in object quantity corresponding to that video data (for example, an increase or a decrease). When the program 330 is executed, the processor 31 retrieves the training data 334B from the storage unit 33 and uses the machine learning algorithm to train the image recognition model 332B on the plurality of training data 334B.

In other words, the video data of the training data 334B can be used as training input during the training phase, and the changes in object quantity of the training data 334B can be used as training output during the training phase. After the processor 31 generates the image recognition model 332B, the image recognition model 332B can be stored in the storage unit 33 for subsequent use.

In some embodiments, the video data of the training data 334B used as training input includes image data of changes in object quantity, and the changes in object quantity corresponding to that video data are used as training output. The program code of the CNN algorithm can then be executed to train the image recognition model 332B. After the image recognition model 332B has been generated based on the CNN algorithm and the training data, the image recognition model 332B can be used to determine the change in object quantity corresponding to an input video.

In more detail, the video data records changes in the quantity of a particular object (for example, a product component), and the change in quantity of that object can indicate different actions. For example, when the quantity of the object decreases in the video data, it is more likely that the operator's action was to "pick up" the object; when the quantity of the object increases in the video data, it is more likely that the operator's action was to "put down" the object. Accordingly, using the change in object quantity in the image data can help improve the accuracy of the action type determination.
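
The disclosure trains a dedicated recognition model 332B for this determination. As a simpler illustrative stand-in, the sketch below classifies the quantity change of a video segment from object counts obtained at its first and last frames by any per-frame detector; the function name and count values are assumptions.

```python
def quantity_change(start_count: int, end_count: int) -> str:
    """Classify the object-quantity change of a video segment from the object
    counts detected in its first and last frames (by any per-frame detector)."""
    if end_count < start_count:
        return "decrease"     # consistent with a "pick up" action
    if end_count > start_count:
        return "increase"     # consistent with a "put down" action
    return "unchanged"

print(quantity_change(5, 4))   # "decrease" -> supports a "pick up" action
print(quantity_change(4, 5))   # "increase" -> supports a "put down" action
```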

Please refer to FIG. 3B, which is a schematic diagram illustrating the use of the monitoring system 3 according to some embodiments of the present invention. In detail, when the operation of a production line machine 72 needs to be monitored and analyzed, an image capture device 71 can be installed in the environment where the production line machine 72 is located to capture video related to the production line machine 72. The monitoring system 3 can connect to the image capture device 71 through a network connection (wired or wireless).

In some embodiments, when an operator 73 operates the production line machine 72, the image capture device 71 can capture video 710 of the operator 73 in real time (e.g., as a video stream) at the position of the production line machine 72 and transmit the video 710 to the monitoring system 3 via the network. In other words, the monitoring system 3 can obtain the video 710 of the operator 73 from the image capture device 71 via the network, where the video 710 includes a plurality of video segments.

The user can then enter user settings through the input device 35 to define monitoring areas 70A and 70B within the image range captured by the image capture device 71, and the processor 31 only needs to process the images or video of the monitoring areas 70A and 70B with the image recognition models 332A and 332B.

Subsequently, using the image recognition model 332A stored in the storage unit 33, the processor 31 of the monitoring system 3 can determine the action type in the monitoring areas 70A and 70B for each video segment of the video 710. Because each video segment carries timestamp information, the processor 31 can determine when each segment was captured and, from this, determine the occurrence time and the action period of the action type represented by the monitoring areas 70A and 70B in each segment. The processor 31 can record the action type and the action period in the storage unit 33 for subsequent use.

In some embodiments, for the monitoring areas 70A and 70B of each video segment, the processor 31 of the monitoring system 3 can further use the image recognition model 332B to determine the change in object quantity and update the action type of the operator 73 accordingly. Please refer to FIGS. 3C to 3D, which are schematic diagrams of frames captured by the image capture device 71 according to some embodiments of the present invention. For example, for the monitoring area 70A of a particular video segment, the processor 31 of the monitoring system 3 can first use the image recognition model 332A to determine that the action type is "pick up".

Then, for the monitoring area 70A of this particular video segment, the processor 31 of the monitoring system 3 can further use the image recognition model 332B to determine that the number of objects 74 in the segment has decreased. Because the action type of the segment in the monitoring area 70A is "pick up", and the decrease in the number of objects 74 is indeed consistent with a "pick up", the action type can be confirmed as "pick up" with confidence.

It should be noted that, for the monitoring area 70A of a particular video segment, when the processor 31 of the monitoring system 3 uses the image recognition model 332A to determine that the action type is "put down" but uses the image recognition model 332B to determine that the number of objects 74 in the segment has decreased, the determination of the image recognition model 332A may be wrong. Accordingly, based on the determination by the image recognition model 332B that the number of objects 74 in the segment has decreased, the processor 31 of the monitoring system 3 can update the action type of that segment from "put down" to "pick up".

Please refer to FIGS. 3E to 3F, which are schematic diagrams of frames captured by the image capture device 71 according to some embodiments of the present invention. For example, for the monitoring area 70B of a particular video segment, the processor 31 of the monitoring system 3 can first use the image recognition model 332A to determine that the action type is "put down".

Then, for the monitoring area 70B of this particular video segment, the processor 31 of the monitoring system 3 can further use the image recognition model 332B to determine that the number of objects 74 in the segment has increased. Because the action type of the segment in the monitoring area 70B is "put down", and the increase in the number of objects 74 is indeed consistent with a "put down", the action type can be confirmed as "put down" with confidence.

Similarly, for the monitoring area 70B of a particular video segment, when the processor 31 of the monitoring system 3 uses the image recognition model 332A to determine that the action type is "pick up" but uses the image recognition model 332B to determine that the number of objects 74 in the segment has increased, the determination of the image recognition model 332A may be wrong. Accordingly, based on the determination by the image recognition model 332B that the number of objects 74 in the segment has increased, the processor 31 of the monitoring system 3 can update the action type of that segment from "pick up" to "put down".
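
A minimal sketch of how the two determinations can be reconciled is shown below: a quantity change reported by model 332B overrides an inconsistent action type from model 332A, as in the two cases just described. The function name is an assumption.

```python
def reconcile_action(action_from_332a: str, quantity_change: str) -> str:
    """Confirm or correct the action type using the object-quantity change."""
    if quantity_change == "decrease":
        return "pick_up"      # a decrease implies the object was picked up
    if quantity_change == "increase":
        return "put_down"     # an increase implies the object was put down
    return action_from_332a   # no quantity change: keep the original decision

print(reconcile_action("put_down", "decrease"))   # corrected to "pick_up"
print(reconcile_action("put_down", "increase"))   # confirmed as "put_down"
```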

It should be noted in particular that in the foregoing embodiments, when the processor uses an image recognition model to determine the action type in image or video data, it may first use the image recognition model to recognize and track the operator's hands, and then determine the operator's action type in the images or video from the motion of the operator's hands.

Some embodiments of the present invention include a production line monitoring method, the flowcharts of which are shown in FIGS. 4A to 4B. The production line monitoring method of these embodiments is implemented by a monitoring system (such as the monitoring system of the foregoing embodiments). The detailed operation of the method is as follows.

First, the monitoring system performs step S401 to obtain a plurality of images of an operator. The monitoring system can obtain the images of the operator from an image capture device installed at the production line machine. The monitoring system then performs step S402 to determine, based on an image recognition model, the action type of the operator in the images. The image recognition model may include a model related to machine learning technology that receives image data and outputs the action type of the operator in the images.

Next, because the images can carry timestamp information, the monitoring system performs step S403 to determine the occurrence time and the action period of the action type according to the action type of the operator in the images. The monitoring system then performs step S404 to record the action type, the occurrence time, and the action period for subsequent use.
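
A hedged end-to-end sketch of steps S401 to S404 follows; the capture source, model interface, and record format are assumptions for illustration rather than the claimed implementation.

```python
def monitor_production_line(capture_images, recognition_model, storage):
    """Sketch of steps S401-S404 for one operator action.

    capture_images:    callable returning a list of (timestamp, frame) tuples
    recognition_model: callable mapping the image sequence to an action type
    storage:           list acting as the record store
    """
    images = capture_images()                          # S401: images with timestamps
    action_type = recognition_model(images)            # S402: classify the operator action
    timestamps = [t for t, _frame in images]
    occurrence_time = timestamps[0]                    # S403: when the action started
    action_period = timestamps[-1] - timestamps[0]     #        and how long it lasted
    storage.append({                                   # S404: record for later analysis
        "action_type": action_type,
        "occurrence_time": occurrence_time,
        "action_period": action_period,
    })
    return storage[-1]
```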

於一些實施例中,於步驟S402後,為了增加判斷準確度,監視系統可執行步驟S402’,基於另一影像辨識模型,根據複數影像之物件數量變化更新動作類型。 In some embodiments, after step S402, in order to increase the accuracy of judgment, the monitoring system may execute step S402' to update the action type according to the change in the number of objects in the multiple images based on another image recognition model.

本發明之一些實施例包含生產線監視方法,其流程圖如圖5A至5F所示。這些實施例之生產線監視方法由一監視系統(如前述實施例之監視系統)實施。方法之詳細操作如下。 Some embodiments of the present invention include a production line monitoring method, the flow chart of which is shown in Figures 5A to 5F. The production line monitoring method of these embodiments is implemented by a monitoring system (such as the monitoring system of the aforementioned embodiment). The detailed operation of the method is as follows.

於一些實施例中,生產線監視方法須提供包含機器學習技術相關之影像辨識模型,用於接收影像資料並輸出影像中操作者之動作類型,因此,需先利用訓練資料訓練並產生影像辨識模型。 In some embodiments, the production line monitoring method needs to provide an image recognition model including machine learning technology to receive image data and output the operator's action type in the image. Therefore, it is necessary to first use training data to train and generate the image recognition model.

請參考圖5A,其係本發明之一些實施例之生產線監視方法之影像辨識模型產生之流程圖。由監視系統執行步驟S501,基於機器學習演算法,利用複數訓練資料產生影像辨識模型。其中,每一訓練資料包含訓練輸入及訓練輸出。訓練輸入包含訓練視訊片段,訓練輸出包含與訓練視訊片段相對應之訓練動作類型。由監視系統執行步驟S502,儲存影像辨識模型,俾利後續使用。 Please refer to FIG. 5A, which is a flowchart of the image recognition model generation of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S501 to generate an image recognition model based on a machine learning algorithm using multiple training data. Each training data includes a training input and a training output. The training input includes a training video clip, and the training output includes a training action type corresponding to the training video clip. The monitoring system executes step S502 to store the image recognition model for subsequent use.

於一些實施例中,為了增加影像辨識模型之準確度,可利用生產機台現場攝得之視訊作為反饋調整影像辨識模型。請參考圖5B,其係本發明之一些實施例之生產線監視方法之影像辨識模型更新之流程圖。由監視系統執行步驟S503,獲取視訊。其中,監視系統可由設置於生產線機台之影像擷取裝置獲取操作者之視訊,且視訊包含複數視訊片段。 In some embodiments, in order to increase the accuracy of the image recognition model, the video captured on-site at the production machine can be used as feedback to adjust the image recognition model. Please refer to Figure 5B, which is a flow chart of the image recognition model update of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S503 to obtain the video. Among them, the monitoring system can obtain the operator's video by the image capture device installed on the production line machine, and the video includes multiple video clips.

The monitoring system executes step S504 to determine, based on the previously generated image recognition model, the action type of the operator in each video segment. The monitoring system executes step S505 to provide the video segments and the corresponding action types to the user, so that the user can judge whether the image recognition model has produced any classification errors.

When the user determines that a classification error of the image recognition model has caused a particular video segment to be inconsistent with its corresponding action type, the user may change the action type assigned to that video segment. The monitoring system executes step S506 to receive a user setting that changes the action type of this video segment.

After all video segments have been reviewed, the monitoring system executes step S507 to adjust the image recognition model according to the particular video segment and the changed action type. Specifically, the monitoring system regenerates the image recognition model based on the original training data, the particular video segment, the action type corresponding to that video segment, and the machine learning algorithm.
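Continuing the hypothetical train() helper sketched above, the adjustment of step S507 can be reduced to appending the relabeled segment to the original training data and retraining; this is one possible realization, not the only one:

    # Sketch of steps S506/S507: merge the user-corrected segment into the training
    # data and regenerate the image recognition model with the production-floor feedback.
    def adjust_model(original_training_data, corrected_segment, corrected_action):
        updated_data = list(original_training_data)
        updated_data.append((corrected_segment, corrected_action))
        train(updated_data)   # retrain using the original data plus the corrected example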

In this way, since the retraining of the image recognition model incorporates information specific to the production line machine and the operator (i.e., the particular video segment and the action type corresponding to it), the updated image recognition model achieves higher accuracy when applied to judgments on the production line machine.

In some embodiments, the state of the production line machine may be monitored based on the updated image recognition model. Please refer to FIG. 5C, which is a flowchart of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S508 to obtain a video of the operator at the production line machine, wherein the video includes a plurality of video segments. The monitoring system executes step S509 to determine, based on the image recognition model, the action type of the operator in each video segment.

Next, since the video segments may carry timestamp information, the monitoring system executes step S510 to determine the occurrence time and the action period of the action type according to the operator's action type in each video segment. The monitoring system executes step S511 to record the action type, the occurrence time, and the action period for subsequent use.

In some embodiments, the monitoring system may determine whether an action in the video is delayed. Please refer to FIG. 5D, which is a flowchart of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S512 to determine, for the action type of the operator in each video segment, whether the action period exceeds a period threshold. If so, the monitoring system executes step S513 to mark this action type and the corresponding video segment, and to record the action type, occurrence time, and action period of this video segment in a log file, so that the user can use the log file to efficiently retrieve the marked video segment from the video. If not, the monitoring system repeats step S512 to determine whether the action period of the operator's action type in the next video segment exceeds the period threshold.
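For illustration, the period check of steps S512 and S513 can be sketched as a filter over the records produced earlier plus a log file the user can consult; the threshold value and the CSV format are assumptions:

    # Sketch of steps S512/S513: flag action types whose action period exceeds a
    # threshold and record them in a log file for later retrieval of the segments.
    import csv

    PERIOD_THRESHOLD = 30.0   # seconds; an assumed value, tuned per station

    def log_slow_actions(records, log_path="slow_actions.csv"):
        """records: dicts with "action", "start" and "period" keys, one per video segment."""
        with open(log_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["action", "start", "period"])
            writer.writeheader()
            for rec in records:
                if rec["period"] > PERIOD_THRESHOLD:   # step S512: compare against the threshold
                    writer.writerow(rec)               # step S513: mark and record the segment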

In some embodiments, the monitoring system may determine whether a delay occurs between actions in the video. Please refer to FIG. 5E, which is a flowchart of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S514 to calculate the time difference between the occurrence times of the action types of two video segments. The monitoring system executes step S515 to determine whether the time difference exceeds a time threshold. If so, the monitoring system executes step S516 to mark these action types and the corresponding two video segments, and to record the action types, occurrence times, and action periods of the two video segments in the log file, so that the user can use the log file to efficiently retrieve the marked video segments from the video. If not, the monitoring system repeats step S514 to calculate the time difference between the occurrence times of the corresponding action types for the next pair of video segments and determine whether that time difference exceeds the time threshold.
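Similarly, the gap check of steps S514 to S516 can be sketched as a pass over consecutive records; the threshold and the log format are again assumptions:

    # Sketch of steps S514-S516: flag long gaps between the occurrence times of two
    # consecutive action types and record them in a log file.
    TIME_THRESHOLD = 10.0   # seconds; an assumed value

    def log_gaps(records, log_path="action_gaps.log"):
        with open(log_path, "w") as f:
            for prev, curr in zip(records, records[1:]):
                gap = curr["start"] - prev["start"]      # step S514: time difference
                if gap > TIME_THRESHOLD:                 # step S515: compare against the threshold
                    f.write(f"{prev['action']} -> {curr['action']}: gap of {gap:.1f}s\n")  # step S516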

In some embodiments, the following step may optionally be added to increase the speed and efficiency of image processing. Please refer to FIG. 5F, which is a flowchart of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S517 to receive a user setting for defining a monitoring area on the video to be captured. In other words, the user setting defines a monitoring area within the image range captured by the image capture device. Since the image or video of the monitoring area is smaller in size, the processing speed of the monitoring system can be significantly increased.
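A monitoring area is essentially a crop applied before recognition; the sketch below assumes the user supplies a rectangle in pixel coordinates (moving the area, as in step S518 described next, amounts to changing the x and y values):

    # Sketch of step S517: restrict recognition to a user-defined monitoring area,
    # which shrinks the image passed to the model and speeds up processing.
    roi = {"x": 100, "y": 50, "w": 320, "h": 240}   # assumed user-defined region, in pixels

    def crop_to_roi(frame, roi):
        """frame: an H x W x C image array from the image capture device."""
        return frame[roi["y"]:roi["y"] + roi["h"], roi["x"]:roi["x"] + roi["w"]]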

In some embodiments, the following step may optionally be added to reduce deviations caused by environmental changes at the production line machine site. Please refer to FIG. 5G, which is a flowchart of the production line monitoring method of some embodiments of the present invention. The monitoring system executes step S518 to receive a user setting for moving the monitoring area. In other words, the user setting moves the monitoring area within the image range captured by the image capture device.

The monitoring system and production line monitoring method of the present invention described above can, through automation and artificial intelligence, identify the factors causing errors or delays on the production line more quickly and accurately, thereby improving the efficiency of the production line output and effectively relieving bottlenecks in the production line output.

It should be particularly understood that the processor mentioned in the above embodiments may be a central processing unit (CPU), other hardware circuit elements capable of executing relevant instructions, or a combination of computing circuits well known to those skilled in the art based on the above disclosure. In addition, the storage unit mentioned in the above embodiments may include memory (such as ROM, RAM, etc.) or a storage device (such as flash memory, HDD, SSD, etc.) for storing data.

Further, the communication bus mentioned in the above embodiments may include a communication interface for transmitting data between components such as the processor, the storage unit, and the input device, and may include an electrical bus interface, an optical bus interface, or even a wireless bus interface. However, such descriptions are not intended to limit the hardware implementation embodiments of the present invention.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations may be made herein without departing from the spirit and scope of the present invention as defined by the appended claims. For example, many of the processes discussed above may be implemented in different ways and replaced by other processes or combinations thereof.

In addition, the scope of the present application is not intended to be limited to the specific embodiments of the processes, machines, manufactures, compositions of matter, means, methods, and steps described in this specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufactures, compositions of matter, means, methods, or steps presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include such processes, machines, manufactures, compositions of matter, means, methods, or steps within their scope.

S401~S404: Steps

Claims (16)

1. A production line monitoring method for a monitoring system, comprising: generating an image recognition model from a plurality of training data based on a machine learning algorithm, wherein each training data comprises a training input and a training output, the training input comprises a plurality of training images, and the training output comprises a training action type corresponding to the training images; obtaining a plurality of images of an operator; determining, based on the image recognition model, an action type of the operator in the images; updating, based on another image recognition model, the action type of the operator in the images according to a change in a quantity of an object in the images; determining an occurrence time and an action period of the action type updated according to the change in the quantity of the object in the images; and recording the action type, the occurrence time, and the action period.

2. The production line monitoring method of claim 1, further comprising: determining whether the action period exceeds a period threshold; and when the action period exceeds the period threshold, marking the action type.

3. The production line monitoring method of claim 1, further comprising: calculating a time difference between the occurrence time of the action type and an occurrence time of another action type; determining whether the time difference exceeds a time threshold; and when the time difference exceeds the time threshold, marking the action type and the other action type.

4. The production line monitoring method of claim 1, further comprising: receiving a user setting for defining a monitoring area on the images; wherein the step of determining the action type of the operator in the images based on the image recognition model further comprises: determining, based on the image recognition model, the action type within the monitoring area of the images.

5. The production line monitoring method of claim 4, further comprising: receiving another user setting for moving the monitoring area defined on the images.

6. The production line monitoring method of claim 4, wherein the images have an image size, the monitoring area has an area size, and the area size is smaller than the image size.

7. The production line monitoring method of claim 1, further comprising: identifying, based on the image recognition model, at least one hand of the operator in the images, and determining the action type of the at least one hand of the operator.
8. A production line monitoring method for a monitoring system, comprising: generating an image recognition model using a plurality of training data and a machine learning algorithm, wherein each training data comprises a training input and a training output, the training input comprises a training video segment, and the training output comprises a training action type corresponding to the training video segment; obtaining a video, wherein the video comprises a plurality of video segments; determining, based on the image recognition model, an action type of each of the video segments; updating, based on another image recognition model, the action type of each of the video segments according to a change in a quantity of an object in each of the video segments; receiving a user setting to change the action type, updated according to the change in the quantity of the object, of a first video segment of the video segments; and adjusting the image recognition model according to the action type of the first video segment.

9. The production line monitoring method of claim 8, wherein the step of adjusting the image recognition model according to the action type of the first video segment further comprises: generating the image recognition model using the training data, the first video segment, the action type corresponding to the first video segment, and the machine learning algorithm.

10. A monitoring system for production line monitoring, comprising: a processor; and a storage unit storing a program which, when executed, causes the processor to: generate, based on a machine learning algorithm, an image recognition model from a plurality of training data and store the image recognition model in the storage unit, wherein each training data comprises a training input and a training output, the training input comprises a plurality of training images, and the training output comprises a training action type corresponding to the training images; obtain a plurality of images of an operator; determine, based on the image recognition model, an action type of the operator in the images; update, based on another image recognition model, the action type of the operator in the images according to a change in a quantity of an object in the images; determine an occurrence time and an action period of the action type updated according to the change in the quantity of the object in the images; and record the action type, the occurrence time, and the action period.
11. The monitoring system of claim 10, wherein the program, when executed, further causes the processor to: determine whether the action period exceeds a period threshold; and when the action period exceeds the period threshold, mark the action type.

12. The monitoring system of claim 10, wherein the program, when executed, further causes the processor to: calculate a time difference between the occurrence time of the action type and an occurrence time of another action type; determine whether the time difference exceeds a time threshold; and when the time difference exceeds the time threshold, mark the action type and the other action type.

13. The monitoring system of claim 10, further comprising: an input device for receiving a user setting; wherein the program, when executed, further causes the processor to: define a monitoring area on the images according to the user setting; and determine, based on the image recognition model, the action type within the monitoring area of the images.

14. The monitoring system of claim 13, wherein the input device is further configured to receive another user setting, and the program, when executed, further causes the processor to: move the monitoring area defined on the images according to the other user setting.

15. The monitoring system of claim 10, further comprising: an input device for receiving a user setting; wherein the program, when executed, further causes the processor to: change the action type of the operator in the images according to the user setting; and adjust the image recognition model according to the action type of the operator in the images.

16. The monitoring system of claim 10, wherein the program, when executed, further causes the processor to: generate, based on the machine learning algorithm, the image recognition model using the training data, the images, and the action type of the operator corresponding to the images.
TW109138721A 2020-11-05 2020-11-05 Product line monitoring method and monitoring system thereof TWI839583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109138721A TWI839583B (en) 2020-11-05 2020-11-05 Product line monitoring method and monitoring system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109138721A TWI839583B (en) 2020-11-05 2020-11-05 Product line monitoring method and monitoring system thereof

Publications (2)

Publication Number Publication Date
TW202219671A TW202219671A (en) 2022-05-16
TWI839583B (en) 2024-04-21

Family

ID=82558878

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109138721A TWI839583B (en) 2020-11-05 2020-11-05 Product line monitoring method and monitoring system thereof

Country Status (1)

Country Link
TW (1) TWI839583B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108681690A (en) * 2018-04-04 2018-10-19 浙江大学 A kind of assembly line personnel specification operation detecting system based on deep learning
CN110516636A (en) * 2019-08-30 2019-11-29 盈盛智创科技(广州)有限公司 A kind of monitoring method of process, device, computer equipment and storage medium

