TW202219671A - Product line monitoring method and monitoring system thereof - Google Patents

Product line monitoring method and monitoring system thereof

Info

Publication number
TW202219671A
Authority
TW
Taiwan
Prior art keywords
action type
recognition model
image recognition
training
action
Prior art date
Application number
TW109138721A
Other languages
Chinese (zh)
Other versions
TWI839583B (en)
Inventor
健麟 羅
志恒 王
Original Assignee
英屬維爾京群島商百威雷科技控股有限公司
Priority date
Filing date
Publication date
Application filed by 英屬維爾京群島商百威雷科技控股有限公司
Priority to TW109138721A
Priority claimed from TW109138721A
Publication of TW202219671A
Application granted
Publication of TWI839583B

Landscapes

  • Image Analysis (AREA)
  • General Factory Administration (AREA)

Abstract

The present disclosure provides a production line monitoring method and a monitoring system thereof. The monitoring system is configured to: obtain a plurality of images of an operator; determine, based on an image recognition model, an action type of the operator in the images; determine an occurrence time and an action period of the operator's action; and record the occurrence time and the action period of the operator's action.

Description

Production line monitoring method and monitoring system thereof

The present invention relates to a production line monitoring method and a monitoring system thereof, and more particularly, to a production line monitoring method and monitoring system based on machine learning technology.

In the manufacturing process of traditional industrial products, the assembly of device parts still requires human assistance. Specifically, a single device usually contains many parts, and the assembly of the different parts onto the device is typically performed manually by operators at the stations of a factory production line.

However, errors in manual operation, or delays caused by various other factors, often create bottlenecks in production line output. A monitoring device is therefore needed on the production line to record and identify the causes of such bottlenecks so that efficiency can subsequently be improved.

However, most conventional monitoring devices only provide image recording. When a problem occurs on the production line, the recorded footage of that line generally still has to be searched manually to identify the cause of the error or delay.

Some embodiments of the present invention provide a production line monitoring method for a monitoring system, comprising: acquiring a plurality of images of an operator; determining, based on an image recognition model, an action type of the operator in the plurality of images; determining an occurrence time and an action period of the action type; and recording the action type, the occurrence time, and the action period.

Some embodiments of the present invention provide a production line monitoring method for a monitoring system, comprising: acquiring a video, wherein the video comprises a plurality of video clips; determining, based on an image recognition model, an action type of each video clip; receiving a user setting to change the action type of a first video clip of the plurality of video clips; and adjusting the image recognition model according to the action type of the first video clip.

Some embodiments of the present invention provide a monitoring system for production line monitoring, comprising a processor and a storage unit. The storage unit stores a program and an image recognition model. When the program is executed, it causes the processor to: acquire a plurality of images of an operator; determine, based on the image recognition model, an action type of the operator in the plurality of images; determine an occurrence time and an action period of the action type; and record the action type, the occurrence time, and the action period.

The foregoing has outlined rather broadly the features and technical advantages of the present invention so that the following embodiments of the invention may be better understood. Additional features and advantages of the invention, which form the subject of the claims of the invention, will be described hereinafter. It should be appreciated by those skilled in the art that the concepts and specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures or processes for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.

The embodiments or examples of the invention illustrated in the accompanying drawings are now described using specific language. It should be understood that no limitation of the scope of the invention is thereby intended. Any alterations or modifications of the described embodiments, and any further applications of the principles described in this document, that would occur to one of ordinary skill in the art to which the invention pertains are contemplated. Reference numerals may be repeated throughout the embodiments, but this does not necessarily mean that a feature of one embodiment applies to another embodiment, even if the embodiments share the same reference numerals.

It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers, or sections, these elements, components, regions, layers, or sections are not limited by these terms. Rather, these terms are only used to distinguish one element, component, region, layer, or section from another. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of the present inventive concept.

The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to limit the inventive concept. As used herein, the singular forms "a/an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that, when used in this specification, the terms "comprises" and "comprising" indicate the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.

Errors in manual operation on a production line, or delays caused by other factors, often lead to bottlenecks in production line output. Because conventional monitoring devices on the production line provide only simple image recording, footage must still be searched manually to find the cause of an error or delay. This kind of troubleshooting is inefficient and inflexible, and cannot effectively relieve output bottlenecks. Accordingly, innovative monitoring methods and monitoring systems are needed to identify the factors causing errors or delays on the production line more quickly and accurately, and thereby improve production line output efficiency.

Please refer to FIG. 1A, which is a block diagram of a monitoring system 1 according to some embodiments of the present invention. The monitoring system 1 includes a processor 11 and a storage unit 13. The storage unit 13 stores a program 130 and an image recognition model 132. The image recognition model 132 may include a model based on machine learning technology. More specifically, the image recognition model 132 is a machine learning model generated from a plurality of training data according to a machine learning algorithm.

Specifically, in some embodiments, image data together with the action types actually corresponding to that image data can be used as training data to train the image recognition model 132 (that is, to generate the image recognition model 132) based on a machine learning algorithm. The image recognition model 132 can then receive image data and output the action type of the operator in the images. For example, after receiving an image sequence of the operator, the image recognition model 132 determines that the operator is performing a "pick up" or "put down" action and outputs the action type "pick up" or "put down".

The processor 11 and the storage unit 13 are electrically connected via a communication bus 17. Through the communication bus 17, the processor 11 can execute the program 130 stored in the storage unit 13. When executed, the program 130 may generate one or more interrupts, such as software interrupts, causing the processor 11 to carry out the production line monitoring functions of the program 130. The functions of the program 130 are further described below.

Please refer to FIG. 1B, which is a schematic diagram of the use of the monitoring system 1 according to some embodiments of the present invention. Specifically, when the operation of a production line machine 92 needs to be monitored and analyzed, an image capture device 91 can be installed in the environment where the production line machine 92 is located to capture images related to the production line machine 92. The monitoring system 1 can connect to the image capture device 91 through a network connection (wired or wireless).

In some embodiments, when an operator 93 works at the production line machine 92, the image capture device 91 can capture a plurality of images 910 of the operator 93 at the position of the production line machine 92 and transmit the plurality of images 910 to the monitoring system 1 through the network. In other words, the monitoring system 1 can acquire the plurality of images 910 of the operator 93 from the image capture device 91 through the network.

Then, using the image recognition model 132 generated as described above and stored in the storage unit 13, the processor 11 of the monitoring system 1 can determine the action type of the operator 93 in the plurality of images 910. Because the plurality of images 910 carry timestamp information, the processor 11 can determine when the images 910 were captured and can further determine the occurrence time and the action period of the action type represented by the images 910. The processor 11 can record the action type, the occurrence time, and the action period in the storage unit 13 for subsequent use.
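
As a concrete illustration of this step, the sketch below shows one way the occurrence time and action period could be derived from timestamped frames. It is only a minimal sketch: the `classify` callable standing in for the image recognition model 132, the fixed window size, and the record layout are assumptions for illustration, not part of the disclosed embodiments.

```python
from typing import Callable, Dict, List, Sequence

def summarize_actions(
    frames: Sequence,                     # decoded frames, in capture order
    timestamps: Sequence[float],          # one timestamp per frame
    classify: Callable[[Sequence], str],  # wraps the image recognition model, returns e.g. "pick up"
    window: int = 16,                     # number of frames classified together
) -> List[Dict]:
    """Group consecutive windows with the same predicted action type and derive
    each action's occurrence time and action period from the frame timestamps."""
    records: List[Dict] = []
    for start in range(0, len(frames) - window + 1, window):
        action = classify(frames[start:start + window])
        t0, t1 = timestamps[start], timestamps[start + window - 1]
        if records and records[-1]["action_type"] == action:
            # The same action continues: extend the period of the last record.
            records[-1]["action_period"] = t1 - records[-1]["occurrence_time"]
        else:
            records.append({"action_type": action,
                            "occurrence_time": t0,
                            "action_period": t1 - t0})
    return records
```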

Please refer to FIG. 2A, which is a block diagram of a monitoring system 2 according to some embodiments of the present invention. The monitoring system 2 includes a processor 21, a storage unit 23, and an input device 25. The storage unit 23 stores a program 230, an image recognition model 232, and training data 234. The image recognition model 232 may include a model based on machine learning technology that receives video data (i.e., image sequence data) and outputs the action type of the operator in the video.

The processor 21, the storage unit 23, and the input device 25 are electrically connected via a communication bus 27. Through the communication bus 27, the processor 21 can execute the program 230 stored in the storage unit 23. When executed, the program 230 may generate one or more interrupts, such as software interrupts, causing the processor 21 to carry out the production line monitoring functions of the program 230. The functions of the program 230 are further described below.

In some embodiments, the image recognition model 232 is a machine learning model generated from a plurality of training data 234 according to a machine learning algorithm. Specifically, video data together with the action types actually corresponding to that video data can be used as training data to train the image recognition model 232 (that is, to generate the image recognition model 232) based on a machine learning algorithm.

More specifically, each training data 234 may include: (1) video data; and (2) the action type corresponding to that video data. When the program 230 is executed, it causes the processor 21 to retrieve the training data 234 from the storage unit 23 and to train the image recognition model 232 from the plurality of training data 234 using a machine learning algorithm.

In other words, the video data of the plurality of training data 234 can be used as training input data during the training stage, and the action types of the plurality of training data 234 can be used as training output data during the training stage. After the processor 21 generates the image recognition model 232, the image recognition model 232 can be stored in the storage unit 23 for subsequent use.

It should be noted that, in some embodiments, the machine learning algorithm mainly adopts a convolutional neural network (CNN) algorithm to construct the image recognition model 232 for determining the action type based on the training data 234. In some examples, the CNN algorithm may include image processing and image recognition algorithms such as the YOLO (you only look once) algorithm or the R3D (ResNet 3D) algorithm, but these examples are not intended to limit the form of the machine learning algorithm of the present invention.

In some embodiments, the program code of the CNN algorithm used to train the image recognition model 232 contains a training function for training the image recognition model 232. During training of the image recognition model 232, the training function may include a portion for receiving the training data 234.

Further, the video data can be used as the training input data, and the action types corresponding to the video data can be used as the training output data. The training function can then be executed after the main function of the CNN algorithm's program code is executed, so as to train the image recognition model 232. After the image recognition model 232 is generated based on the CNN algorithm and the training data, the image recognition model 232 can be used to determine the action type corresponding to an input video.
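
For illustration, the following sketch shows one possible realization of an R3D-style action classifier and its training function, using PyTorch and torchvision. The patent does not prescribe a particular framework; the library choice, the two-label action set, and the hyperparameters are assumptions made here for the example only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models.video import r3d_18

ACTION_TYPES = ["pick up", "put down"]  # assumed label set for illustration

def build_model(num_classes: int = len(ACTION_TYPES)) -> nn.Module:
    # Start from an R3D (ResNet-3D) backbone and replace the classifier head.
    model = r3d_18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train(model: nn.Module, loader: DataLoader, epochs: int = 10) -> nn.Module:
    """Training function: video clips are the training input, action labels the training output."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for clips, labels in loader:  # clips: (N, C, T, H, W), labels: (N,)
            optimizer.zero_grad()
            loss = criterion(model(clips), labels)
            loss.backward()
            optimizer.step()
    return model
```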

Please refer to FIG. 2B, which is a schematic diagram of the use of the monitoring system 2 according to some embodiments of the present invention. Specifically, when the operation of a production line machine 82 needs to be monitored and analyzed, an image capture device 81 can be installed in the environment where the production line machine 82 is located to capture video related to the production line machine 82. The monitoring system 2 can connect to the image capture device 81 through a network connection (wired or wireless).

In some embodiments, when an operator 83 works at the production line machine 82, the image capture device 81 can capture video 810 of the operator 83 in real time (for example, a video stream) at the position of the production line machine 82 and transmit the video 810 to the monitoring system 2 through the network. In other words, the monitoring system 2 can acquire the video 810 of the operator 83 from the image capture device 81 through the network.

In some embodiments, to increase the accuracy of the image recognition model 232, video captured on site at the production line machine 82 can be used as feedback data to adjust the image recognition model 232. In detail, the video 810 may include a plurality of video clips, and using the image recognition model 232 generated as described above and stored in the storage unit 23, the processor 21 of the monitoring system 2 can determine the action type of the operator 83 in each video clip.

After the processor 21 uses the image recognition model 232 to determine the action type of the operator 83 in each video clip of the video 810, the monitoring system 2 can present the video clips and the corresponding action types to the user, so that the user can judge whether the image recognition model 232 has produced any misclassifications. In some embodiments, the monitoring system 2 can present the video clips and the corresponding action types to the user through a display (not shown) and a graphical user interface (GUI).

Then, if the user determines that the action type assigned to a particular video clip is a misclassification by the image recognition model 232, the user can enter a user setting through the input device 25 to change the action type of that video clip to the correct action.

The processor 21 can then update the training data 234 with that particular video clip and the corrected action type, and regenerate the image recognition model 232 from the updated plurality of training data 234. More specifically, the processor 21 can generate the image recognition model 232 based on the machine learning algorithm using the original training data 234, at least one particular video clip, and at least one action type corresponding to that at least one particular video clip.

In this way, because the training data used to retrain the image recognition model 232 includes data specific to the production line machine 82 and the operator 83 (i.e., the at least one particular video clip and the at least one corresponding action type), the updated image recognition model 232 will have higher accuracy when applied to the environment of the production line machine 82.

The technique of adjusting the image recognition model 232 using video captured on site at the production line machine 82 as feedback data can be understood more clearly from the following example. For instance, the video 810 includes ten video clips "C1" to "C10". Using the image recognition model 232 generated as described above and stored in the storage unit 23, the processor 21 of the monitoring system 2 can determine the action type of the operator 83 (for example, a "pick up" action or a "put down" action) in each of the video clips "C1" to "C10".

After the processor 21 uses the image recognition model 232 to determine the action types of the video clips "C1" to "C10", the monitoring system 2 presents the video clips "C1" to "C10" and their respective action types to the user through the display and the GUI, so that the user can judge whether the image recognition model 232 has produced any misclassifications.

In this example, the action types of the video clips "C1" and "C8" are determined by the monitoring system 2 to be a "pick up" action and a "put down" action, respectively. However, the user judges that the action types of the video clips "C1" and "C8" should be a "put down" action and a "pick up" action, respectively. The user therefore enters user settings through the input device 25 to correct the action types of the video clips "C1" and "C8" to "put down" and "pick up", respectively. The processor 21 then updates the training data 234 with the video clips "C1" and "C8" and the corrected action types, and regenerates the image recognition model 232 from the updated plurality of training data 234.
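
A minimal sketch of this feedback step is given below, assuming the training data are kept as (clip tensor, label index) pairs; the data layout and helper names are hypothetical, not taken from the disclosure.

```python
from typing import Dict, List, Tuple

import torch

# A training sample pairs a video clip tensor with an action label index.
TrainingSample = Tuple[torch.Tensor, int]

def apply_user_corrections(
    training_data: List[TrainingSample],
    corrected_clips: Dict[str, torch.Tensor],  # e.g. {"C1": clip_c1, "C8": clip_c8}
    corrections: Dict[str, str],               # e.g. {"C1": "put down", "C8": "pick up"}
    label_index: Dict[str, int],               # e.g. {"pick up": 0, "put down": 1}
) -> List[TrainingSample]:
    """Append the user-corrected clips and labels to the existing training data,
    so the image recognition model can be regenerated from the updated set."""
    updated = list(training_data)
    for clip_id, corrected_action in corrections.items():
        updated.append((corrected_clips[clip_id], label_index[corrected_action]))
    return updated
```

The updated list could then be wrapped in a DataLoader and passed to a training routine such as the `train` function sketched earlier to regenerate the model.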

In some embodiments, after the image recognition model 232 has been updated through the foregoing steps, and while the operator 83 continues to work at the production line machine 82, the image capture device 81 can capture video 812 of the operator 83 at the position of the production line machine 82 and transmit the video 812 to the monitoring system 2 through the network. In other words, the monitoring system 2 can acquire the video 812 of the operator 83 from the image capture device 81 through the network. The video 812 includes a plurality of video clips.

Then, using the image recognition model 232 that has been updated and stored in the storage unit 23, the processor 21 of the monitoring system 2 can determine the action type of each video clip of the video 812. Because each video clip carries timestamp information, the processor 21 can determine when each video clip was captured and can further determine the occurrence time and the action period of the action type represented by each video clip. The processor 21 can record the action type and the action period in the storage unit 23 for subsequent use.

In some embodiments, for each video clip stored in the storage unit 23, the processor 21 can determine whether the action period of the corresponding action type exceeds a period threshold. If so, the processor 21 marks the action type and the corresponding video clip, and records the action type, occurrence time, and period of that video clip in a log file. The user can then use the log file to efficiently call up the marked video clips in the video 812 and investigate why the action period of the action types in those clips exceeded the period threshold, so as to quickly eliminate the factors causing the delay.

For example, suppose a "pick up" action is expected to be completed within 3 seconds. The processor 21 then determines, for every video clip corresponding to a "pick up" action, whether its action period exceeds 3 seconds. If so, the processor 21 marks the action type and the corresponding video clip, and records the action type, occurrence time, and period of that video clip in the log file. The user can then use the log file to efficiently call up the marked video clips in the video 812 and investigate why the action period of the action types in those clips exceeded 3 seconds, so as to quickly eliminate the factors causing the delay.
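
The sketch below illustrates such a period check, assuming each determined action has already been stored as a small record with its type, occurrence time, and period (as in the earlier sketch); the record layout, threshold table, and log format are assumptions for illustration.

```python
import json
from typing import Dict, List

def flag_slow_actions(
    records: List[Dict],                   # e.g. {"action_type": "pick up", "occurrence_time": 12.4, "action_period": 4.1}
    period_thresholds: Dict[str, float],   # e.g. {"pick up": 3.0}
    log_path: str = "slow_actions.jsonl",
) -> List[Dict]:
    """Mark action records whose period exceeds the per-action threshold and
    append them to a log file for later review."""
    flagged = [
        r for r in records
        if r["action_type"] in period_thresholds
        and r["action_period"] > period_thresholds[r["action_type"]]
    ]
    with open(log_path, "a", encoding="utf-8") as log:
        for r in flagged:
            log.write(json.dumps(r) + "\n")
    return flagged
```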

In some embodiments, for every two consecutive video clips stored in the storage unit 23, the processor 21 can determine whether the time difference between the occurrence times of the corresponding two action types exceeds a time threshold. If so, the processor 21 marks the two action types and the corresponding two video clips, and records the action types, occurrence times, and periods of those two video clips in the log file. The user can then use the log file to efficiently call up the two marked video clips in the video 812 and investigate why the time difference between the occurrence times of the two action types exceeded the time threshold, so as to quickly eliminate the factors causing the delay.

For example, suppose that between consecutively occurring "pick up" and "put down" actions, the associated part placement operation is expected to be completed within 10 seconds. The processor 21 then determines, for the two video clips of the consecutively occurring "pick up" and "put down" actions, whether the time difference exceeds 10 seconds. If so, the processor 21 marks the two action types and the corresponding two video clips, and records the action types, occurrence times, and periods of those two video clips in the log file. The user can then use the log file to efficiently call up the two marked video clips in the video 812 and investigate why the time difference between the occurrence times of the two action types exceeded 10 seconds, so as to quickly eliminate the factors causing the delay.
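
A sketch of this consecutive-action check might look as follows, again assuming the same hypothetical record layout and that the records are sorted by occurrence time.

```python
from typing import Dict, List, Tuple

def flag_long_gaps(
    records: List[Dict],           # consecutive action records sorted by occurrence_time
    time_threshold: float = 10.0,  # e.g. expected gap between "pick up" and "put down"
) -> List[Tuple[Dict, Dict]]:
    """Return pairs of consecutive action records whose occurrence times are
    further apart than the time threshold."""
    flagged = []
    for previous, current in zip(records, records[1:]):
        if current["occurrence_time"] - previous["occurrence_time"] > time_threshold:
            flagged.append((previous, current))
    return flagged
```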

Please refer to FIG. 2C, which is a schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. In some embodiments, because the image or video captured by the image capture device 81 covers a large area, the processor 21 will spend considerable hardware resources and time when processing the image or video with the image recognition model 232.

However, not everything captured by the image capture device 81 needs to be monitored. A smaller region requiring monitoring can therefore be defined within the captured image or video, and the processor 21 only needs to process that smaller region with the image recognition model 232, which greatly speeds up processing.

Please refer to FIG. 2D, which is another schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. Specifically, the user can enter a user setting through the input device 25 to define a monitoring area 80A within the image range captured by the image capture device 81, and the processor 21 only needs to process the image or video of the monitoring area 80A with the image recognition model 232. Because the image or video of the monitoring area 80A is smaller, the processing speed of the monitoring system 2 can be greatly increased.
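
As an illustration of how restricting processing to the monitoring area reduces the data handled by the model, the sketch below crops each frame to a user-defined rectangle before it is passed to the image recognition model. Frames as NumPy arrays and the (x, y, width, height) convention for the area are assumptions for this example.

```python
import numpy as np

def crop_to_monitoring_area(
    frame: np.ndarray,  # H x W x C frame from the capture device
    area: tuple,        # (x, y, width, height) set by the user
) -> np.ndarray:
    """Crop a captured frame to the user-defined monitoring area before it is
    passed to the image recognition model, reducing the data to be processed."""
    x, y, w, h = area
    return frame[y:y + h, x:x + w]
```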

In some embodiments, when the on-site environment of the production line machine 82 changes (for example, the angle of the image capture device 81 is adjusted, the operator assignment changes, or the operator's position changes), the region that was originally intended to be monitored may shift away from the monitoring area 80A, increasing the misclassification rate of the image recognition model 232. In this case, the user can directly adjust the position of the monitoring area 80A to reduce the deviation caused by the environmental change at the production line machine 82.

Please refer to FIG. 2E, which is another schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. Specifically, because the on-site environment of the production line machine 82 has changed, the image or video within the monitoring area 80A is no longer the content that needs to be monitored, which may increase the misclassification rate of the image recognition model 232.

Please refer to FIG. 2F, which is another schematic diagram of a frame captured by the image capture device 81 according to some embodiments of the present invention. Specifically, the user can enter another user setting through the input device 25 to move the monitoring area 80A within the image range captured by the image capture device 81, so that the area that needs to be monitored is restored.

In some embodiments, the frames captured by the image capture device 81 can first be transmitted to the monitoring system 2. The monitoring system 2 can then display these frames on an ordinary display (not shown) and receive user settings through an input device 25 such as a keyboard or a mouse, so that the monitoring system 2 can complete the related operations.

In some embodiments, the frames captured by the image capture device 81 can first be transmitted to the monitoring system 2. The monitoring system 2 can then transmit the frames over the network to a remote display (for example, a handheld smart device or a notebook computer) and receive user settings through an input device 25 such as a network interface, so that the monitoring system 2 can complete the related operations.

Please refer to FIG. 3A, which is a block diagram of a monitoring system 3 according to some embodiments of the present invention. The monitoring system 3 includes a processor 31, a storage unit 33, and an input device 35. The storage unit 33 stores a program 330, an image recognition model 332A, an image recognition model 332B, and training data 334A and 334B. The image recognition models 332A and 332B may include models based on machine learning technology, used to determine the operator's action type or the change in the number of objects in video data (i.e., image sequence data).

The processor 31, the storage unit 33, and the input device 35 are electrically connected via a communication bus 37. Through the communication bus 37, the processor 31 can execute the program 330 stored in the storage unit 33. When executed, the program 330 may generate one or more interrupts, such as software interrupts, causing the processor 31 to carry out the production line monitoring functions of the program 330. The functions of the program 330 are further described below.

In some embodiments, the image recognition model 332A is a machine learning model generated from a plurality of training data 334A according to a machine learning algorithm. Specifically, video data together with the action types actually corresponding to that video data can be used as training data to train the image recognition model 332A (that is, to generate the image recognition model 332A) based on a machine learning algorithm.

More specifically, each training data 334A may include: (1) video data; and (2) the action type corresponding to that video data. When the program 330 is executed, it causes the processor 31 to retrieve the training data 334A from the storage unit 33 and to train the image recognition model 332A from the plurality of training data 334A using a machine learning algorithm.

In other words, the video data of the plurality of training data 334A can be used as training input data during the training stage, and the action types of the plurality of training data 334A can be used as training output data during the training stage. After the processor 31 generates the image recognition model 332A, the image recognition model 332A can be stored in the storage unit 33 for subsequent use.

In some embodiments, in the plurality of training data 334A, the video data used as training input data contains footage of the operator's actions, and the action types corresponding to the video data are used as training output data. The program code of a CNN algorithm can then be executed to train the image recognition model 332A. After the image recognition model 332A is generated based on the CNN algorithm and the training data, the image recognition model 332A can be used to determine the action type corresponding to an input video.

In some embodiments, the image recognition model 332B is a machine learning model generated from a plurality of training data 334B according to a machine learning algorithm. Specifically, video data together with the changes in object count actually corresponding to that video data can be used as training data to train the image recognition model 332B (that is, to generate the image recognition model 332B) based on a machine learning algorithm.

More specifically, each training data 334B may include: (1) video data; and (2) the change in object count (for example, an increase or a decrease) corresponding to that video data. When the program 330 is executed, it causes the processor 31 to retrieve the training data 334B from the storage unit 33 and to train the image recognition model 332B from the plurality of training data 334B using a machine learning algorithm.

In other words, the video data of the plurality of training data 334B can be used as training input data during the training stage, and the changes in object count of the plurality of training data 334B can be used as training output data during the training stage. After the processor 31 generates the image recognition model 332B, the image recognition model 332B can be stored in the storage unit 33 for subsequent use.

In some embodiments, in the plurality of training data 334B, the video data used as training input data contains footage of changes in the number of objects, and the changes in object count corresponding to the video data are used as training output data. The program code of a CNN algorithm can then be executed to train the image recognition model 332B. After the image recognition model 332B is generated based on the CNN algorithm and the training data, the image recognition model 332B can be used to determine the change in object count corresponding to an input video.

More specifically, the video data records changes in the number of particular objects (for example, product components), and a change in the number of a particular object can indicate a different action. For example, when the number of a particular object decreases in the video data, it is more likely that the operator's action is "picking up" that object. When the number of a particular object increases in the video data, it is more likely that the operator's action is "putting down" that object. Accordingly, using the change in the number of particular objects in the image data can help improve the accuracy of the action type determination.

Please refer to FIG. 3B, which is a schematic diagram of the use of the monitoring system 3 according to some embodiments of the present invention. Specifically, when the operation of a production line machine 72 needs to be monitored and analyzed, an image capture device 71 can be installed in the environment where the production line machine 72 is located to capture video related to the production line machine 72. The monitoring system 3 can connect to the image capture device 71 through a network connection (wired or wireless).

In some embodiments, when an operator 73 works at the production line machine 72, the image capture device 71 can capture video 710 of the operator 73 in real time (for example, a video stream) at the position of the production line machine 72 and transmit the video 710 to the monitoring system 3 through the network. In other words, the monitoring system 3 can acquire the video 710 of the operator 73 from the image capture device 71 through the network. The video 710 includes a plurality of video clips.

Next, the user can enter user settings through the input device 35 to define monitoring areas 70A and 70B within the image range captured by the image capture device 71, and the processor 31 only needs to process the images or video of the monitoring areas 70A and 70B with the image recognition models 332A and 332B.

Then, using the image recognition model 332A stored in the storage unit 33, the processor 31 of the monitoring system 3 can determine the action type in the monitoring areas 70A and 70B in each video clip of the video 710. Because each video clip carries timestamp information, the processor 31 can determine when each video clip was captured and can further determine the occurrence time and the action period of the action type represented by the monitoring areas 70A and 70B in each video clip. The processor 31 can record the action type and the action period in the storage unit 33 for subsequent use.

In some embodiments, for the monitoring areas 70A and 70B of each video clip, the processor 31 of the monitoring system 3 can further use the image recognition model 332B to determine the change in object count and update the action type of the operator 73 accordingly. Please refer to FIG. 3C to FIG. 3D, which are schematic diagrams of frames captured by the image capture device 71 according to some embodiments of the present invention. For example, for the monitoring area 70A of a particular video clip, the processor 31 of the monitoring system 3 can first use the image recognition model 332A to determine that the action type is "pick up".

Next, for the monitoring area 70A of that particular video clip, the processor 31 of the monitoring system 3 can further use the image recognition model 332B to determine that the number of objects 74 in the video clip has decreased. Because the action type of the particular video clip in the monitoring area 70A is "pick up", and the decrease in the number of objects 74 is indeed caused by a "pick up", the particular action type can be accurately confirmed as "pick up".

It should be noted that, for the monitoring area 70A of a particular video clip, when the processor 31 of the monitoring system 3 uses the image recognition model 332A to determine that the action type is "put down", but uses the image recognition model 332B to determine that the number of objects 74 in the video clip has decreased, the determination of the image recognition model 332A may be wrong. Accordingly, based on the determination by the image recognition model 332B that the number of objects 74 in the video clip has decreased, the processor 31 of the monitoring system 3 can update the action type of that particular video clip from "put down" to "pick up".

Please refer to FIG. 3E to FIG. 3F, which are schematic diagrams of frames captured by the image capture device 71 according to some embodiments of the present invention. For example, for the monitoring area 70B of a particular video clip, the processor 31 of the monitoring system 3 can first use the image recognition model 332A to determine that the action type is "put down".

Next, for the monitoring area 70B of that particular video clip, the processor 31 of the monitoring system 3 can further use the image recognition model 332B to determine that the number of objects 74 in the video clip has increased. Because the action type of the particular video clip in the monitoring area 70B is "put down", and the increase in the number of objects 74 is indeed caused by a "put down", the particular action type can be accurately confirmed as "put down".

Similarly, for the monitoring area 70B of a particular video clip, when the processor 31 of the monitoring system 3 uses the image recognition model 332A to determine that the action type is "pick up", but uses the image recognition model 332B to determine that the number of objects 74 in the video clip has increased, the determination of the image recognition model 332A may be wrong. Accordingly, based on the determination by the image recognition model 332B that the number of objects 74 in the video clip has increased, the processor 31 of the monitoring system 3 can update the action type of that particular video clip from "pick up" to "put down".
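
The cross-check between the two models described above can be summarized in a small reconciliation rule, sketched below; the string encodings of the action types and of model 332B's output are assumed for illustration only.

```python
def reconcile_action_type(action_type: str, count_change: str) -> str:
    """Cross-check the action predicted by model 332A against the object-count
    change predicted by model 332B, and correct the action when they disagree."""
    if count_change == "decrease" and action_type == "put down":
        return "pick up"   # objects disappearing implies a pick-up
    if count_change == "increase" and action_type == "pick up":
        return "put down"  # objects appearing implies a put-down
    return action_type     # otherwise keep the original prediction
```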

It should be particularly noted that, in the foregoing embodiments, when the processor uses an image recognition model to determine the action type in image or video data, it can first use the image recognition model to detect and track the operator's hands, and then determine the action type of the operator in the image or video from the movement of the operator's hands.
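
The patent does not specify a particular hand-detection implementation. As one possible sketch, the MediaPipe Hands library could be used to obtain per-frame hand positions, whose trajectories could then be fed to the action classifier; the landmark choice and usage here are assumptions, not part of the disclosure.

```python
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def hand_positions(frame_rgb: np.ndarray) -> list:
    """Detect hands in an RGB frame and return the wrist landmark (x, y) of each
    detected hand, normalized to the frame size; the trajectory of these points
    across frames can then be used to judge the operator's action."""
    results = hands.process(frame_rgb)
    positions = []
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            wrist = hand_landmarks.landmark[0]  # landmark index 0 is the wrist
            positions.append((wrist.x, wrist.y))
    return positions
```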

Some embodiments of the present invention include a production line monitoring method, the flowcharts of which are shown in FIG. 4A to FIG. 4B. The production line monitoring method of these embodiments is implemented by a monitoring system (such as the monitoring system of the foregoing embodiments). The detailed operation of the method is as follows.

First, the monitoring system performs step S401 to acquire a plurality of images of an operator. The monitoring system can acquire the plurality of images of the operator from an image capture device installed at a production line machine. The monitoring system performs step S402 to determine, based on an image recognition model, the action type of the operator in the plurality of images. The image recognition model may include a model based on machine learning technology that receives image data and outputs the action type of the operator in the images.

Next, because the plurality of images can carry timestamp information, the monitoring system performs step S403 to determine the occurrence time and the action period of the action type according to the action type of the operator in the plurality of images. The monitoring system performs step S404 to record the action type, the occurrence time, and the action period for subsequent use.

In some embodiments, after step S402, in order to increase the accuracy of the determination, the monitoring system can perform step S402' to update the action type according to the change in the number of objects in the plurality of images, based on another image recognition model.

Some embodiments of the present invention include a production line monitoring method, the flowcharts of which are shown in FIG. 5A to FIG. 5F. The production line monitoring method of these embodiments is implemented by a monitoring system (such as the monitoring system of the foregoing embodiments). The detailed operation of the method is as follows.

In some embodiments, the production line monitoring method requires an image recognition model based on machine learning technology that receives image data and outputs the action type of the operator in the images; therefore, the image recognition model must first be trained and generated from training data.

Please refer to FIG. 5A, which is a flowchart of generating the image recognition model in the production line monitoring method according to some embodiments of the present invention. The monitoring system performs step S501 to generate the image recognition model from a plurality of training data based on a machine learning algorithm. Each training data includes a training input and a training output: the training input includes a training video clip, and the training output includes the training action type corresponding to that training video clip. The monitoring system performs step S502 to store the image recognition model for subsequent use.

In some embodiments, in order to increase the accuracy of the image recognition model, video captured on site at the production machine can be used as feedback to adjust the image recognition model. Please refer to FIG. 5B, which is a flowchart of updating the image recognition model in the production line monitoring method according to some embodiments of the present invention. The monitoring system performs step S503 to acquire a video. The monitoring system can acquire the video of the operator from an image capture device installed at the production line machine, and the video includes a plurality of video clips.

The monitoring system performs step S504 to determine, based on the previously generated image recognition model, the action type of the operator in each video clip. The monitoring system performs step S505 to present the video clips and the corresponding action types to the user, so that the user can judge whether the image recognition model has produced any misclassifications.

When the user determines that a misclassification by the image recognition model has caused a particular video clip to not match its corresponding action type, the user can change the action type of that particular video clip. The monitoring system performs step S506 to receive a user setting to change the action type of that video clip.

After all the video clips have been reviewed, the monitoring system performs step S507 to adjust the image recognition model according to the particular video clip and the changed action type. In detail, the monitoring system generates the image recognition model from the original training data, the particular video clip, the action type corresponding to the particular video clip, and the machine learning algorithm.

In this way, because the retraining of the image recognition model includes information specific to the production line machine and the operator (i.e., the particular video clip and the action type corresponding to that particular video clip), the updated image recognition model will have higher accuracy when applied to determinations for the production line machine.

In some embodiments, the state of the production line machine can be monitored based on the updated image recognition model. Please refer to FIG. 5C, which is a flowchart of a production line monitoring method according to some embodiments of the present invention. The monitoring system performs step S508 to acquire a video of the operator of the production line machine, wherein the video includes a plurality of video clips. The monitoring system performs step S509 to determine, based on the image recognition model, the action type of the operator in each video clip.

Next, since the video clips carry timestamp information, the monitoring system executes step S510: according to the action type of the operator in each video clip, the occurrence time and the action period of each action type are determined. The monitoring system executes step S511: the action type, the occurrence time, and the action period are recorded for subsequent use.
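One way to realize step S510 is sketched below, assuming each video clip carries a start timestamp in seconds and that consecutive clips sharing the same predicted action type belong to a single action; the fixed clip length is an assumption of the sketch.

```python
from dataclasses import dataclass

@dataclass
class ActionRecord:
    action_type: str
    occurrence_time: float  # seconds from the start of the video
    action_period: float    # duration in seconds

def to_action_records(clip_times, clip_action_types, clip_length=1.0):
    """Step S510: merge consecutive clips with the same action type into one
    record; clip_times[i] is the start time of clip i."""
    records = []
    run_start = 0
    for i in range(1, len(clip_action_types) + 1):
        end_of_run = (i == len(clip_action_types)
                      or clip_action_types[i] != clip_action_types[run_start])
        if end_of_run:
            start = clip_times[run_start]
            end = clip_times[i - 1] + clip_length
            records.append(ActionRecord(clip_action_types[run_start], start, end - start))
            run_start = i
    return records
```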

In some embodiments, the monitoring system can determine whether an action in the video is delayed. Please refer to FIG. 5D, which is a flowchart of the production line monitoring method according to some embodiments of the present invention. The monitoring system executes step S512: for the action type of the operator in each video clip, it is determined whether the action period exceeds a period threshold. If so, the monitoring system executes step S513: the action type and the corresponding video clip are marked, and the action type, occurrence time, and action period of that clip are recorded in a log file, so that the user can use the log file to efficiently locate the marked clip within the video. If not, the monitoring system repeats step S512 for the action type of the operator in the next video clip.
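A hedged sketch of steps S512 and S513, building on the ActionRecord structure above; the JSON-lines log format and its field names are illustrative choices rather than part of the method.

```python
import json

def flag_slow_actions(records, period_threshold, log_path="slow_actions.jsonl"):
    """Steps S512/S513: mark actions whose period exceeds the threshold and
    append them to a log file for later lookup in the video."""
    flagged = []
    with open(log_path, "a") as log:
        for r in records:
            if r.action_period > period_threshold:
                flagged.append(r)
                log.write(json.dumps({
                    "action_type": r.action_type,
                    "occurrence_time": r.occurrence_time,
                    "action_period": r.action_period,
                }) + "\n")
    return flagged
```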

In some embodiments, the monitoring system can determine whether a delay occurs between two actions in the video. Please refer to FIG. 5E, which is a flowchart of the production line monitoring method according to some embodiments of the present invention. The monitoring system executes step S514: the time difference between the occurrence times of the action types of two video clips is calculated. The monitoring system executes step S515: it is determined whether this time difference exceeds a time threshold. If so, the monitoring system executes step S516: the action types and the two corresponding video clips are marked, and the action types, occurrence times, and action periods of these two clips are recorded in the log file, so that the user can use the log file to efficiently locate the marked clips within the video. If not, the monitoring system repeats step S514: the time difference between the occurrence times of the corresponding action types is calculated for the next pair of video clips and compared against the time threshold.
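Steps S514 to S516 may be sketched in the same style; here the time difference is taken between the occurrence times of consecutive recorded actions, and the log format is again an assumption of the sketch.

```python
import json

def flag_delayed_transitions(records, time_threshold, log_path="action_gaps.jsonl"):
    """Steps S514-S516: compare the occurrence times of consecutive actions
    and log pairs whose time difference exceeds the threshold."""
    with open(log_path, "a") as log:
        for prev, curr in zip(records, records[1:]):
            time_difference = curr.occurrence_time - prev.occurrence_time
            if time_difference > time_threshold:
                log.write(json.dumps({
                    "first_action": prev.action_type,
                    "second_action": curr.action_type,
                    "time_difference": time_difference,
                    "occurrence_time": prev.occurrence_time,
                }) + "\n")
```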

In some embodiments, the following step may optionally be added to increase the speed and efficiency of image processing. Please refer to FIG. 5F, which is a flowchart of the production line monitoring method according to some embodiments of the present invention. The monitoring system executes step S517: a user setting is received for defining a monitoring area on the video to be captured. In other words, the user setting defines a monitoring area within the image range captured by the image capture device. Because the image or video within the monitoring area is smaller than the full frame, the processing speed of the monitoring system can be greatly increased.
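For illustration, step S517 could be realized by storing the user-defined monitoring area as a rectangle and cropping every captured frame to it before recognition, as in the sketch below; the coordinate convention and the NumPy-style frame layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MonitoringArea:
    x: int       # left edge in pixels
    y: int       # top edge in pixels
    width: int
    height: int

def crop_to_area(frame, area: MonitoringArea):
    """Step S517: keep only the monitored region of a captured frame, so the
    recognition model processes a smaller image."""
    return frame[area.y:area.y + area.height, area.x:area.x + area.width]
```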

In some embodiments, the following step may optionally be added to reduce deviations caused by environmental changes at the production line machine site. Please refer to FIG. 5G, which is a flowchart of the production line monitoring method according to some embodiments of the present invention. The monitoring system executes step S518: another user setting is received for moving the monitoring area. In other words, this user setting moves the monitoring area within the image range captured by the image capture device.
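Step S518 then amounts to shifting that rectangle, as sketched below with the MonitoringArea type from the previous sketch; the (dx, dy) offset convention is an assumption.

```python
def move_area(area: MonitoringArea, dx: int, dy: int) -> MonitoringArea:
    """Step S518: shift the existing monitoring area by (dx, dy) pixels."""
    return MonitoringArea(area.x + dx, area.y + dy, area.width, area.height)
```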

By means of automation and artificial intelligence, the monitoring system and production line monitoring method of the present invention described above can find the factors causing errors or delays on the production line more quickly and accurately, thereby improving the efficiency of production line output and effectively alleviating production line bottlenecks.

It should be particularly understood that the processor mentioned in the above embodiments may be a central processing unit (CPU), other hardware circuit elements capable of executing relevant instructions, or a combination of computing circuits known to those skilled in the art based on the above disclosure. In addition, the storage unit mentioned in the above embodiments may include memory (such as ROM, RAM, etc.) or a storage device (such as flash memory, an HDD, an SSD, etc.) for storing data.

Further, the communication bus mentioned in the above embodiments may include a communication interface for transferring data between elements such as the processor, the storage unit, and the input device, and may include an electrical bus interface, an optical bus interface, or even a wireless bus interface. However, such descriptions are not intended to limit the hardware implementations of the embodiments of the present invention.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. For example, many of the processes discussed above can be implemented in different ways and replaced by other processes or combinations thereof.

Furthermore, the scope of the present application is not intended to be limited to the specific embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in this specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

1: Monitoring system 2: Monitoring system 3: Monitoring system 11: Processor 13: Storage unit 17: Communication bus 21: Processor 23: Storage unit 25: Input device 27: Communication bus 31: Processor 33: Storage unit 35: Input device 37: Communication bus 70A: Monitoring area 70B: Monitoring area 71: Image capture device 72: Production line machine 73: Operator 74: Object 80A: Monitoring area 81: Image capture device 82: Production line machine 83: Operator 91: Image capture device 92: Production line machine 93: Operator 130: Program 132: Image recognition model 230: Program 232: Image recognition model 234: Training data 330: Program 332A: Image recognition model 332B: Image recognition model 334: Training data 710: Video 810: Video 812: Video 910: Image S401~S404: Steps S501~S518: Steps

Aspects of the present invention are best understood from the following detailed description when read together with the accompanying drawings. It should be noted that, in accordance with standard practice in the industry, the various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or decreased for clarity of discussion.

A more thorough understanding of the present invention may be obtained by referring to the detailed description and the claims when considered in conjunction with the accompanying drawings, in which like reference numerals refer to like elements throughout.

FIG. 1A is a block diagram of a monitoring system according to some embodiments of the present invention.

FIG. 1B is a schematic diagram of the use of a monitoring system according to some embodiments of the present invention.

FIG. 2A is a block diagram of a monitoring system according to some embodiments of the present invention.

FIG. 2B is a schematic diagram of the use of a monitoring system according to some embodiments of the present invention.

FIGS. 2C to 2F are schematic diagrams of frames captured by an image capture device according to some embodiments of the present invention.

FIG. 3A is a block diagram of a monitoring system according to some embodiments of the present invention.

FIG. 3B is a schematic diagram of the use of a monitoring system according to some embodiments of the present invention.

FIGS. 3C to 3F are schematic diagrams of frames captured by an image capture device according to some embodiments of the present invention.

FIGS. 4A to 4B are flowcharts of a production line monitoring method according to some embodiments of the present invention.

FIGS. 5A to 5G are flowcharts of a production line monitoring method according to some embodiments of the present invention.


Claims (20)

1. A production line monitoring method for a monitoring system, comprising: acquiring a plurality of images of an operator; determining, based on an image recognition model, an action type of the operator in the images; determining an occurrence time and an action period of the action type; and recording the action type, the occurrence time, and the action period.

2. The production line monitoring method of claim 1, further comprising: determining whether the action period exceeds a period threshold; and when the action period exceeds the period threshold, marking the action type.

3. The production line monitoring method of claim 1, further comprising: calculating a time difference between the occurrence time of the action type and an occurrence time of another action type; determining whether the time difference exceeds a time threshold; and when the time difference exceeds the time threshold, marking the action type and the other action type.

4. The production line monitoring method of claim 1, further comprising: receiving a user setting for defining a monitoring area on the images; wherein the step of determining the action type of the operator in the images based on the image recognition model further comprises: determining, based on the image recognition model, the action type within the monitoring area of the images.

5. The production line monitoring method of claim 4, further comprising: receiving another user setting for moving the monitoring area defined on the images.

6. The production line monitoring method of claim 4, wherein the images have an image size, the monitoring area has an area size, and the area size is smaller than the image size.

7. The production line monitoring method of claim 1, further comprising: identifying, based on the image recognition model, at least one hand of the operator in the images, and determining the action type of the at least one hand of the operator.

8. The production line monitoring method of claim 1, further comprising: generating the image recognition model from a plurality of training data based on a machine learning algorithm, wherein each training datum comprises a training input and a training output, the training input comprises a plurality of training images, and the training output comprises a training action type corresponding to the training images.

9. The production line monitoring method of claim 1, further comprising: updating, based on another image recognition model, the action type of the operator in the images according to a change in the number of objects in the images.

10. A production line monitoring method for a monitoring system, comprising: acquiring a video, wherein the video comprises a plurality of video clips; determining an action type of each of the video clips based on an image recognition model; receiving a user setting to change the action type of a first video clip of the video clips; and adjusting the image recognition model according to the action type of the first video clip.

11. The production line monitoring method of claim 10, further comprising: generating the image recognition model from a plurality of training data and a machine learning algorithm, wherein each training datum comprises a training input and a training output, the training input comprises a training video clip, and the training output comprises a training action type corresponding to the training video clip.

12. The production line monitoring method of claim 11, wherein the step of adjusting the image recognition model according to the action type of the first video clip further comprises: generating the image recognition model from the training data, the first video clip, the action type corresponding to the first video clip, and the machine learning algorithm.

13. A monitoring system for production line monitoring, comprising: a processor; and a storage unit storing a program and an image recognition model, wherein the program, when executed, causes the processor to: acquire a plurality of images of an operator; determine, based on the image recognition model, an action type of the operator in the images; determine an occurrence time and an action period of the action type; and record the action type, the occurrence time, and the action period.

14. The monitoring system of claim 13, wherein the program, when executed, further causes the processor to: determine whether the action period exceeds a period threshold; and when the action period exceeds the period threshold, mark the action type.

15. The monitoring system of claim 13, wherein the program, when executed, further causes the processor to: calculate a time difference between the occurrence time of the action type and an occurrence time of another action type; determine whether the time difference exceeds a time threshold; and when the time difference exceeds the time threshold, mark the action type and the other action type.

16. The monitoring system of claim 13, further comprising: an input device for receiving a user setting; wherein the program, when executed, further causes the processor to: define a monitoring area on the images according to the user setting; and determine, based on the image recognition model, the action type within the monitoring area of the images.

17. The monitoring system of claim 16, wherein the input device is further configured to receive another user setting, and the program, when executed, further causes the processor to: move the monitoring area defined on the images according to the other user setting.

18. The monitoring system of claim 13, wherein the program, when executed, further causes the processor to: generate the image recognition model from a plurality of training data based on a machine learning algorithm, wherein each training datum comprises a training input and a training output, the training input comprises a plurality of training images, and the training output comprises a training action type corresponding to the training images.

19. The monitoring system of claim 18, further comprising: an input device for receiving a user setting; wherein the program, when executed, further causes the processor to: change the action type of the operator in the images according to the user setting; and adjust the image recognition model according to the action type of the operator in the images.

20. The monitoring system of claim 18, wherein the program, when executed, further causes the processor to: generate the image recognition model, based on the machine learning algorithm, from the training data, the images, and the action type of the operator corresponding to the images.
