TWI835011B - A method and apparatus for simulating the acting track of an object - Google Patents

A method and apparatus for simulating the acting track of an object

Info

Publication number
TWI835011B
Authority
TW
Taiwan
Prior art keywords
coordinates
tracking object
frame
area
image
Prior art date
Application number
TW110138215A
Other languages
Chinese (zh)
Other versions
TW202316314A (en)
Inventor
黃莘揚
Original Assignee
友達光電股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 友達光電股份有限公司 filed Critical 友達光電股份有限公司
Priority to TW110138215A priority Critical patent/TWI835011B/en
Priority to CN202211013373.3A priority patent/CN115239939A/en
Publication of TW202316314A publication Critical patent/TW202316314A/en
Application granted granted Critical
Publication of TWI835011B publication Critical patent/TWI835011B/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/23 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on positionally close patterns or neighbourhood relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method for simulating the action trajectory of an object includes capturing a plurality of frames of images from a video; determining a plurality of outline coordinates of a first tracking object in each image according to a first feature of the first tracking object; identifying a first frame shape according to the outline coordinates of the first tracking object; generating a vector according to the relative relationship between a reference coordinate and the center coordinate of the first frame shape, and simulating the action trajectory of a target object according to the vector, wherein the target object is the same as, or directly or indirectly connected to, the first tracking object; and quantifying the action trajectory and plotting it at the corresponding coordinates of a data visualization diagram.

Description

Method and device for simulating the action trajectory of an object

The present invention relates to image tracking technology, and more particularly to a method and a device for simulating the action trajectory of a specific object.

With the development of artificial intelligence, image recognition and object detection technologies have improved substantially (for example, the object detection algorithm YOLO (You Only Look Once)) and are applied in fields such as autonomous driving, smart healthcare, and face recognition. Through machine learning, an AI model can keep learning the features of a specific object from a training set, so that it can quickly and accurately capture the target object in different images and continue to track it as it moves.

Existing object detection technology can already identify vehicles on a road, classify vehicle types, and continuously track a vehicle as it moves. However, when the target to be tracked has no fixed form (for example, an air flow or a water column), the AI cannot recognize the target's shape, can hardly learn its features from a training set, and may not even sense its presence in the image; existing object tracking technology is therefore difficult to apply to such targets.

One object of the present invention is to provide a method and a device for simulating the action trajectory of an object, which can track a target object and then simulate the trajectory or range of its action (for example, a sprayed water column or air flow).

Another object of the present invention is to provide a method or a device for simulating the action trajectory of an object, which can estimate, in a simple manner, the area of an image that has already been acted on by the object and present it to the user.

The method for simulating the action trajectory of an object includes capturing a plurality of frames of images from a video; determining a plurality of outline coordinates of a first tracking object in each image according to a first feature of the first tracking object; identifying a first frame shape according to the outline coordinates; generating a vector according to the relative relationship between a reference coordinate and the center-point coordinate of the first frame shape, and simulating the action trajectory of a target object according to the vector, wherein the target object is associated with the first tracking object; and quantifying the action trajectory and plotting it at the corresponding coordinates of a data visualization diagram. With this method, a tangible object (for example, a glove or a spray gun) can be tracked to simulate its action trajectory (for example, the direction and range of the ejected air), and the distribution range of an intangible substance (for example, air or water) can then be estimated. The user can also easily see, from the data visualization diagram, which parts of a fixed area have already been acted on (for example, cleaned) and which parts still remain to be acted on.

The device for simulating the action trajectory of an object includes at least one memory and at least one processor coupled to the memory. The processor is configured to: determine a plurality of outline coordinates of a first tracking object in each image according to a first feature of the first tracking object; locate a first frame shape according to the outline coordinates of the first tracking object, wherein the first frame shape surrounds the outline coordinates; generate a vector according to the relative relationship between a reference coordinate and the center-point coordinate of the first frame shape, and simulate the action trajectory of a target object according to the vector, wherein the target object may be the same as the first tracking object or may be directly or indirectly connected to it; and quantify the action trajectory and plot it at the corresponding coordinates of a data visualization diagram. With this device, a tangible object (for example, a glove or a spray gun) can be tracked to simulate its action trajectory (for example, the direction and range of the ejected air), and the distribution range of an intangible substance (for example, air or water) can then be estimated. The user can also easily see, from the data visualization diagram, which parts of a fixed area have already been acted on (for example, cleaned) and which parts still remain to be acted on.

100: flowchart of method steps
101: step
103: step
105: step
107: step
108: step
109: step
201: image
202: region of interest (ROI)
203: first tracking object
204: partial image
205: first frame shape
207: reference coordinate
303: reflection
305: incorrect identification frame
307: correct identification frame
403: center-point coordinate of the first frame shape
405: vector
407a: coordinate
407b: coordinate
407c: coordinate
500: data visualization diagram
503: reference indicator
503a: real-time coverage
503b: trajectory indicator
700: device
710: memory
720: processor
721: image capture unit
722: image recognition module
723: computing unit
725: data conversion unit
730: display unit
740: image source
T: action trajectory
R: range
A: spread angle
VN: variable

FIG. 1 is a flowchart of a method for simulating the action trajectory of an object according to an embodiment of the present invention.

FIG. 2A is a schematic diagram of a captured image according to an embodiment of the present invention.

FIG. 2B is a schematic diagram of a captured image including a region of interest (ROI) according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of an image and its recognition result according to another embodiment of the present invention.

FIG. 4 is a schematic diagram of a simulated action trajectory of an object according to an embodiment of the present invention.

FIG. 5A is a schematic diagram of quantified action-trajectory values according to an embodiment of the present invention.

FIG. 5B is a schematic diagram of quantified action-trajectory values according to another embodiment of the present invention.

FIG. 5C is a schematic diagram of a data visualization diagram according to an embodiment of the present invention.

FIG. 6 is a schematic diagram of simulated action trajectories when multiple target objects are included, according to another embodiment of the present invention.

FIG. 7 is a schematic diagram of a device for simulating the action trajectory of an object according to yet another embodiment of the present invention.

As used herein, "about", "approximately", or "substantially" includes the stated value and the average value within an acceptable range of deviation of the particular value as determined by a person of ordinary skill in the art, taking into account the measurement in question and the particular amount of error associated with the measurement (that is, the limitations of the measurement system). For example, "about" may mean within one or more standard deviations of the stated value, or within ±30%, ±20%, ±10%, or ±5%. Furthermore, "about", "approximately", or "substantially" as used herein may adopt a more acceptable range of deviation or standard deviation depending on optical properties, etching properties, or other properties, rather than applying a single standard deviation to all properties.

It should be understood that, although the terms "first", "second", "third", and the like may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, or section from another. Thus, a "first element", "component", "region", "layer", or "section" discussed below could be termed a second element, component, region, layer, or section without departing from the teachings herein.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by a person of ordinary skill in the art to which the present invention belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art and the present invention, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

The present invention relates to simulating the action trajectory of a target object (for example, a spray gun) in order to evaluate the range of action of a substance that has no fixed form or is invisible (for example, air or water). For example, when cleaning factory equipment, tools such as water jets or air spray guns may be used. Although current image recognition technology can identify tangible objects (for example, the spray gun itself), it is difficult to recognize the distribution range of the water column or air flow ejected by the spray gun. The present invention therefore aims to provide the action trajectory of the air flow or water column ejected by a spray gun, so as to evaluate the cleaning status of the equipment and quickly distinguish cleaned from uncleaned areas of the equipment. The present invention is described in detail below through various embodiments. It should be understood that the embodiments are merely illustrative examples that enable a person of ordinary skill in the art to understand the present invention; the following exemplary embodiments may be modified without departing from the spirit of the present invention, and the present disclosure is intended to cover such modifications.

A method for simulating the action trajectory of an object according to an embodiment of the present invention is described with reference to FIG. 1 and FIG. 2A. At step 101, the method includes capturing a plurality of frames of images 201 from a video. The video records a scene containing the target object (for example, a spray gun) and the range the target object may act on (for example, the equipment to be cleaned). In a preferred embodiment, the plurality of frames of images 201 may be captured continuously; in other embodiments, the images 201 may also be captured at intervals, for example every 0.5 seconds over a period of time. Depending on requirements, the capture interval may be longer (for example, every second) or shorter (for example, every 0.3 seconds). The interval length may affect the accuracy of the simulated action trajectory: when the interval is longer, the timing difference between successive images 201 is larger, and the simulated trajectory has larger gaps and is therefore less accurate.
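For illustration, the following is a minimal sketch, assuming Python with OpenCV, of how frames could be sampled from such a recording at a fixed interval; the file name and the 0.5-second interval are illustrative assumptions, not part of the original disclosure.

```python
import cv2

def capture_frames(video_path: str, interval_sec: float = 0.5):
    """Sample frames from a video every `interval_sec` seconds."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS metadata is missing
    step = max(1, int(round(fps * interval_sec)))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:                    # keep every step-th frame
            frames.append(frame)
        index += 1
    cap.release()
    return frames

frames = capture_frames("cleaning_session.mp4", interval_sec=0.5)  # hypothetical file name
```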

Continuing with this embodiment with reference to FIG. 2B, capturing the plurality of frames of images 201 from the video may further include capturing only a user-defined region of interest (ROI) 202 (the hatched area). For example, when the user wants to know the current cleaning status of a piece of equipment, only the part of the image 201 covering that equipment may be captured. Capturing the ROI 202 may further include morphological processing of the captured ROI 202, for example dilating it to increase its area. This processing reduces the chance that an object to be recognized is missed when it lies very close to, or slightly beyond, the edge of the ROI 202. In addition, the image 201 may be processed with logical operations to obtain a ROI 202 of a specific shape; for example, a partial image 204 may be subtracted from the image 201 to obtain a hollow, frame-shaped ROI 202.
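A minimal sketch of this ROI handling, assuming Python/OpenCV and rectangular ROIs; the rectangle coordinates and dilation size are illustrative assumptions.

```python
import cv2
import numpy as np

def build_roi_mask(image_shape, outer_rect, inner_rect=None, dilate_px=15):
    """Build a (possibly hollow) ROI mask, slightly dilated so borderline objects are kept."""
    h, w = image_shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    x0, y0, x1, y1 = outer_rect
    mask[y0:y1, x0:x1] = 255                              # user-defined ROI 202
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    mask = cv2.dilate(mask, kernel)                       # morphological expansion of the ROI
    if inner_rect is not None:                            # subtract a partial image 204
        ix0, iy0, ix1, iy1 = inner_rect
        inner = np.zeros_like(mask)
        inner[iy0:iy1, ix0:ix1] = 255
        mask = cv2.subtract(mask, inner)                  # leaves a hollow, frame-shaped ROI
    return mask
```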

It should be noted that the image 201 may be a file of any format (for example, a JPEG file), and image processing (for example, dimensionality reduction) may be performed on the image 201 to improve processing efficiency and reduce the required computing power. The image 201 may also be converted from the RGB model to the HSV model to facilitate subsequent color-based image recognition and to let the user adjust color-related features more intuitively. The foregoing descriptions of the format of the image 201 and of the image processing are merely examples; the image 201 may be in any other format or processed in any other way that does not conflict with the present invention.
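A short preprocessing sketch along these lines, assuming OpenCV-style frames, with downscaling standing in for dimensionality reduction:

```python
import cv2

def preprocess(frame, scale=0.5):
    """Downscale the frame and convert it to the HSV color model."""
    small = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(small, cv2.COLOR_BGR2HSV)   # OpenCV decodes video frames as BGR
```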

The method for simulating the action trajectory of an object according to an embodiment of the present invention is further described with reference to FIG. 2A. At step 103, the method includes determining the outline coordinates of the first tracking object 203 in the image 201 according to a first feature. The first tracking object 203 may be a tangible object such as a glove, and may have a specific first feature (for example, being red). According to the first feature, the first tracking object 203 can be identified in the image 201 (for example, through image color filtering), and the plurality of outline coordinates of its location can then be obtained. In different embodiments, the first feature may also be set by the user (for example, the user may set the first feature to blue, green, or another color). At step 105, the method may identify, according to the outline coordinates, a first frame shape 205 surrounding those coordinates (that is, the area enclosed by the first frame shape 205 covers all of the outline coordinates). It should be noted that in this embodiment the first frame shape 205 has the minimum area that can surround the outline coordinates; in different embodiments, however, the first frame shape 205 may have a different area (for example, twice the area spanned by the outline coordinates), and the present invention is not limited in this respect.
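One way steps 103 and 105 could look in code, assuming the first feature is a red color and the first frame shape is a minimal axis-aligned bounding box; the HSV thresholds are illustrative and would be tuned in practice.

```python
import cv2

def find_first_frame(hsv):
    """Return the outline coordinates of the first tracking object and its bounding frame."""
    # red wraps around hue 0 in HSV, so two ranges are combined
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    outline = max(contours, key=cv2.contourArea)        # outline coordinates of object 203
    x, y, w, h = cv2.boundingRect(outline)              # first frame shape 205
    return outline, (x, y, w, h)
```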

A method for simulating the action trajectory of an object according to another embodiment of the present invention may include performing morphological image processing (for example, erosion and dilation) on the identified outline coordinates of the first tracking object 203 to remove noise and restore broken outlines of the recognized object, so that the outline coordinates better match the true shape of the first tracking object 203. The area enclosed by the outline coordinates can then be calculated and compared with the estimated area of the first tracking object 203 (that is, the area the first tracking object 203 occupies in the image 201), so that outline coordinates outside an error range (for example, ±5%) are excluded. Similarly, the area of the first frame shape 205 can be calculated and compared with the estimated area of the first frame shape 205 (that is, the area a frame surrounding the first tracking object 203 would occupy in the image 201) to find first frame shapes 205 outside the error range (for example, ±5%) and exclude the outline coordinates corresponding to them. Referring to FIG. 3, the left side shows an example image 201 and the right side shows its recognition result. When the scene in the image 201 contains a smooth object such as metal, the first tracking object 203 may produce a reflection 303 on that object, and both the first tracking object 203 and the reflection 303 may then be recognized as the first tracking object 203 (in the right-hand figure, the incorrect identification frame 305 shows the reflection 303 mistakenly recognized as the first tracking object 203, while the correct identification frame 307 shows the correct recognition), which affects the recognition result. Because the reflection of an object (as in the incorrect identification frame 305) usually does not have the same complete shape as the original object (as in the correct identification frame 307), excluding outline coordinates whose area falls outside the error range reduces the chance of misjudging the reflection 303 of the first tracking object 203 as the first tracking object 203 itself.
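A sketch of this clean-up and reflection-rejection step, assuming a ±5% area tolerance as in the text; the kernel size and tolerance are illustrative assumptions.

```python
import cv2
import numpy as np

def filter_candidates(mask, expected_area, tol=0.05):
    """Morphologically clean the mask, then keep only contours whose area is near the expected one."""
    kernel = np.ones((5, 5), np.uint8)
    clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # erosion then dilation removes noise
    clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)  # closes small gaps in the outline
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        if abs(area - expected_area) <= tol * expected_area:  # reflections tend to fail this test
            kept.append(c)
    return kept
```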

The method for simulating the action trajectory of an object according to an embodiment of the present invention is further described with reference to FIG. 2 and FIG. 4. At step 107, the method includes generating a vector 405 according to the relative relationship between the reference coordinate 207 and the center-point coordinate 403 of the first frame shape 205. In this embodiment, the reference coordinate 207 is a fixed coordinate set in advance (for example, the center point of the image), and the vector 405 is generated by drawing a line from the reference coordinate 207 toward the center-point coordinate 403 of the first frame shape 205. The target object may be the first tracking object 203 itself (for example, the same spray gun); alternatively, when the target object is hard to identify and track (for example, its features are not distinctive or it is occluded), the target object may be an object directly or indirectly connected to the first tracking object 203 (for example, the first tracking object 203 is a glove and the target object is a spray gun held in the glove). With this configuration, even when the target object is hard to identify and track (for example, when a cleaner holds the spray gun, the gun itself is small and blocked by the cleaner's hand), the target object can still be tracked by identifying and tracking the first tracking object 203 that is directly or indirectly connected to it.
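A minimal sketch of step 107, assuming the reference coordinate defaults to the image center when no other reference is supplied:

```python
import numpy as np

def action_vector(image_shape, bbox, reference=None):
    """Vector 405 from the reference coordinate 207 toward the center point 403 of the first frame."""
    h, w = image_shape[:2]
    ref = np.asarray(reference if reference is not None else (w / 2.0, h / 2.0), dtype=float)
    x, y, bw, bh = bbox
    center = np.array([x + bw / 2.0, y + bh / 2.0])     # center point 403 of first frame 205
    vec = center - ref                                  # vector 405
    angle = np.degrees(np.arctan2(vec[1], vec[0]))      # direction in image coordinates (y axis down)
    return vec, angle
```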

It should be noted that although the reference coordinate 207 is a fixed coordinate set in advance in this embodiment, in other embodiments the reference coordinate 207 may instead be determined from a second tracking object. The second tracking object has a second feature (for example, being blue) that differs from the first feature, to avoid confusion during recognition; the second feature may also be set by the user. According to the second feature, the second tracking object can be identified in the image 201, its plurality of outline coordinates obtained, and a second frame shape surrounding those coordinates generated. The center point of the second frame shape can then be used as the reference coordinate 207, and the vector 405 is generated by drawing a line from this reference coordinate 207 toward the center coordinate 403 of the first frame shape 205.

The method for simulating the action trajectory of an object in this embodiment is further described with reference to FIG. 4. Step 108 includes simulating the action trajectory T of the target object according to the vector 405. In this embodiment, simulating the action trajectory T of the target object uses, in addition to the vector 405, action trajectory parameters of the target object, which may be set by the user. The vector 405 is associated with the basic direction of action of the target object, and the action trajectory parameters are associated with the range of the action trajectory T. For example, if the action trajectory T is a fan-shaped (sector) region, the action trajectory parameters may include the range R of the target object's action (for example, the straight-line distance the air or water sprayed by the spray gun can reach, that is, the radius of the sector) and the spread angle A (that is, the central angle of the sector). The vector 405 provides a preliminary simulation of the target object's direction of action, and combining it with the action trajectory parameters (range R, spread angle A, and the like) makes the simulation of the target object's action trajectory more accurate. It should be noted that although the action trajectory T is a sector in this embodiment, in other embodiments the action trajectory T may also be modeled as another shape (for example, a triangle), and the present invention is not limited in this respect.
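A sketch of step 108 under the sector assumption: the trajectory T is rendered as a filled circular sector opening from the given origin along the vector direction, with the range R in pixels and the spread angle A in degrees treated as user-set parameters.

```python
import cv2
import numpy as np

def sector_mask(image_shape, origin, direction_deg, range_px, spread_deg):
    """Fan-shaped action trajectory T as a binary mask."""
    h, w = image_shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    center = (int(round(origin[0])), int(round(origin[1])))
    axes = (int(range_px), int(range_px))               # sector radius corresponds to range R
    start = direction_deg - spread_deg / 2.0            # spread angle A centered on the vector
    end = direction_deg + spread_deg / 2.0
    cv2.ellipse(mask, center, axes, 0, start, end, 255, -1)   # thickness -1 fills the sector
    return mask
```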

The method for simulating the action trajectory of an object according to an embodiment of the present invention is further described with reference to FIG. 4, FIG. 5A, FIG. 5B, and FIG. 5C. Step 109 includes quantifying the action trajectory T and plotting it at the corresponding coordinates of the data visualization diagram 500. Referring to FIG. 5A, the action trajectory T of the target object can be simulated as described above, and quantifying it may include, for example, giving the same value to every coordinate covered by the action trajectory T in a given frame of image 201 (that is, every point in the sector has the same value); for example, coordinates 407a, 407b, and 407c may all be given the same value (for example, 10). After all of the plurality of frames of images 201 have been processed in the same way, and because the area covered by the action trajectory T may differ from frame to frame, the variable Vn representing the value accumulated at each coordinate point is increased, frame by frame in time order, by the value that point receives in each frame. Corresponding to the distribution of the action trajectories T across the plurality of frames of images 201, the variable Vn at each coordinate point therefore accumulates values of different magnitudes (for example, in FIG. 5A the overlap of two action trajectories T has a variable Vn of 20), and the data visualization diagram 500 can then show the degree to which each coordinate point has been acted on by the target object.

Referring to FIG. 5B, in other embodiments different coordinate points within the action trajectory T in the image 201 may also be given values of different magnitudes. For example, after a water column or air flow leaves the nozzle of the spray gun, air resistance and gravity reduce its density as the distance increases. Coordinates closer to the target object may therefore be given a larger value (for example, 40), while coordinates farther away may be given successively smaller values (that is, coordinate 407a may be given a value larger than that of coordinate 407b, and coordinate 407b a value larger than that of coordinate 407c). After the same processing is applied to the plurality of images 201, each coordinate point likewise accumulates its own value, which simulates, for example, that when the target object is a water-jet spray gun, the degree to which an area is acted on by the water column decreases as the distance increases. It should be noted that the foregoing value settings are merely examples; other settings may be used without departing from the spirit of the present invention, and the present invention is not limited in this respect.
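A sketch of step 109 under this distance-weighted variant: each covered pixel contributes a value that decays with distance from the origin of the spray, and a per-pixel accumulator plays the role of the variable Vn. The near and far values (40 and 10) follow the example above and are otherwise arbitrary.

```python
import numpy as np

def accumulate(acc, sector, origin, near_value=40.0, far_value=10.0):
    """Add one frame's trajectory T into the per-coordinate accumulator (variable Vn)."""
    ys, xs = np.nonzero(sector)                          # coordinates covered by trajectory T
    if len(xs) == 0:
        return acc
    dist = np.hypot(xs - origin[0], ys - origin[1])
    t = dist / (dist.max() + 1e-6)                       # 0 at the nozzle, 1 at maximum reach
    acc[ys, xs] += near_value * (1.0 - t) + far_value * t
    return acc

# one accumulator cell per pixel; the height/width here are illustrative
accumulator = np.zeros((720, 1280), dtype=np.float32)
```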

The data visualization diagram 500 refers to a graphical presentation of the values through colors, blocks, lines, and the like. Referring to FIG. 5C, in this embodiment the value of each coordinate point can be presented as a heat map according to its magnitude (that is, a coordinate point acted on by the target object to a greater degree appears closer to red on the heat map, and a point acted on to a lesser degree appears closer to purple). It should be noted that although a heat map is used in this embodiment, in other embodiments the values may also be presented in other types of data visualization diagrams 500 (for example, contour maps), and the present invention is not limited in this respect.
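A minimal rendering sketch, assuming OpenCV's JET colormap as a stand-in for the heat map described above (high values toward red, low values toward blue/purple):

```python
import cv2
import numpy as np

def to_heatmap(acc):
    """Normalize accumulated values to 0-255 and map them through a rainbow-style colormap."""
    norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)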

The method for simulating the action trajectory of an object according to an embodiment of the present invention is further described with reference to FIG. 5C. In this embodiment, the data visualization diagram 500 may present only the region of the ROI 202. For example, when the equipment the user wants to clean appears as a frame-shaped region in the image, the data visualization diagram 500 may present the color distribution of only that frame-shaped region, so that the user can quickly grasp the cleaning status of the equipment. In addition, the data visualization diagram 500 may further include reference indicators 503, which may include a real-time coverage 503a and a trajectory indicator 503b of the action trajectory T. The area on the data visualization diagram 500 covered by the action trajectory T (that is, ever acted on by the target object) is computed and divided by the total area of the image 201 or the ROI 202 to obtain the real-time coverage 503a of the action trajectory T; the total quantified value of the action trajectory T on the data visualization diagram 500 is computed and divided by the total area of the image 201 or the ROI 202 to obtain the trajectory indicator 503b of the action trajectory T. Because the real-time coverage 503a and the trajectory indicator 503b change over time and are related to the distribution of the action trajectory T, the user can use these reference indicators 503 to judge the real-time status of the target object's action. It should be noted that the foregoing description of the reference indicators 503 is merely illustrative; in other embodiments, the reference indicators 503 may also include other indicators that provide reference information related to the action trajectory T, for example the accumulated action time of the action trajectory T.
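A sketch of the two reference indicators as described, assuming the accumulator and a binary ROI mask from the earlier steps are available:

```python
import numpy as np

def reference_indicators(acc, roi_mask):
    """Real-time coverage 503a and trajectory indicator 503b over the ROI."""
    roi = roi_mask > 0
    roi_area = np.count_nonzero(roi)
    if roi_area == 0:
        return 0.0, 0.0
    coverage = np.count_nonzero((acc > 0) & roi) / roi_area   # 503a: covered area / ROI area
    trajectory_index = float(acc[roi].sum()) / roi_area       # 503b: total value / ROI area
    return coverage, trajectory_index
```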

Referring to FIG. 6, in other embodiments a single frame of image 201 may contain a plurality of first tracking objects 203 sharing the same first feature (for example, being red). According to that first feature, these first tracking objects 203 can be identified in the image 201, their outline coordinates obtained in several groups (outline coordinates in the same group correspond to the same first tracking object 203), and a plurality of first frame shapes 205 generated around them (the outline coordinates of one group correspond to one first frame shape 205). In this embodiment, a plurality of vectors 405 can be generated by drawing lines from the reference coordinate 207 toward the center-point coordinates of these first frame shapes 205, and a plurality of action trajectories T can be generated from these vectors 405, thereby simulating the situation where several target objects act at the same time. Specifically, when monitoring the cleaning status of equipment, several cleaning tasks may be performed simultaneously (for example, several cleaners spraying water columns with spray guns at the same time); with this configuration, the action trajectories T of the multiple spray guns can be simulated at the same time to reveal the real-time cleaning status of the equipment.
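A sketch of the multi-object case, reusing the helper functions sketched earlier (one bounding frame, vector, and sector per surviving contour) and assuming the same illustrative red thresholds:

```python
import cv2

def trajectories_for_frame(hsv, acc, reference, range_px, spread_deg, expected_area):
    """Accumulate one fan-shaped trajectory per first tracking object found in this frame."""
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    for contour in filter_candidates(mask, expected_area):     # defined in the earlier sketch
        x, y, w, h = cv2.boundingRect(contour)                 # one first frame 205 per object
        center = (x + w / 2.0, y + h / 2.0)
        _, angle = action_vector(hsv.shape, (x, y, w, h), reference)
        sector = sector_mask(hsv.shape, center, angle, range_px, spread_deg)
        acc = accumulate(acc, sector, center)
    return acc
```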

A device 700 for simulating the action trajectory of an object according to another embodiment of the present invention is described with reference to FIG. 7. The device includes a memory 710, a processor 720 coupled to the memory, and a display unit 730. The memory 710 stores a plurality of instructions; when these instructions are executed, the processor 720 is configured to perform the method for simulating the action trajectory of an object of any of the foregoing embodiments. Specifically, the processor 720 includes an image capture unit 721, an image recognition module 722, a computing unit 723, and a data conversion unit 725. The image capture unit 721 captures a plurality of frames of images 201 from an image source 740 (that is, a video containing the target object and the scene of its action range) and passes them to the image recognition module 722. The image recognition module 722 determines the outline coordinates of the first tracking object 203 from the captured images 201 and identifies the first frame shape 205. The image recognition module 722 then passes the recognition result to the computing unit 723, which generates the vector 405 from the relative relationship between the first frame shape 205 identified by the image recognition module 722 and the reference coordinate 207. Using the action trajectory parameters and the vector 405, the computing unit 723 can simulate the action trajectory T of the target object in each frame of image 201 and quantify it. The variable VN representing the accumulated value of each coordinate point can be stored in a register; as the plurality of frames of images 201 proceed in time order, the variable VN keeps accumulating the results of the computing unit 723's quantification of the action trajectory T. The data conversion unit 725 converts the value of the variable VN of each coordinate point into an image representation (for example, representing the value by a color), so that each coordinate point has its own image characteristic (for example, a color), and passes the signal to the display unit 730 for presentation to the user.

100…flowchart 101…step 103…step 105…step 107…step 108…step 109…step

Claims (24)

1. A method for simulating the action trajectory of an object, comprising: capturing an image frame from a video; determining, in the image, a plurality of outline coordinates of a first tracking object according to a first feature of the first tracking object; identifying a first frame shape according to the outline coordinates, wherein the position and size of the first frame shape are associated with the outline coordinates; generating a vector according to the relative relationship between a reference coordinate and the center-point coordinate of the first frame shape, and simulating an action trajectory of a target object according to the vector, wherein the target object is associated with the first tracking object; and giving a value to each coordinate of the action trajectory and plotting it at the corresponding coordinates of a data visualization diagram.

2. The method of claim 1, wherein the target object may be the same as the first tracking object or may be directly or indirectly connected to the first tracking object.

3. The method of claim 1, wherein the image capturing step comprises setting an area to be monitored so that only the area to be monitored is captured when capturing the video; and the step of plotting to the data visualization diagram comprises generating the data visualization diagram only within the area to be monitored.

4. The method of claim 3, wherein capturing the image frame may further comprise dilating the area to be monitored so as to capture a dilated area to be monitored.

5. The method of claim 1, wherein determining the outline coordinates of the first tracking object in the image comprises dilating and eroding the outline of the first tracking object.

6. The method of claim 1, wherein the step of simulating the action trajectory comprises simulating the action trajectory according to an action trajectory parameter and the vector.

7. The method of claim 1, further comprising: calculating an outline area according to the outline coordinates; comparing the outline area with an actual area of the first tracking object to produce a comparison result; and excluding, according to the comparison result, the outline coordinates outside an error range.

8. The method of claim 1, further comprising: calculating a first frame area of the first frame shape; comparing the first frame area with an estimated area of the first frame shape to produce a comparison result; and excluding, according to the comparison result, the first frame shape outside an error range, thereby excluding the outline coordinates corresponding to the first frame shape outside the error range.

9. The method of claim 1, wherein the step of plotting to the data visualization diagram further comprises: associating the action trajectory with at least one reference indicator; and merging the reference indicator into the data visualization diagram.

10. The method of claim 1, further comprising: determining the center-point coordinate of a second tracking object, wherein the second tracking object has a second feature different from the first feature of the first tracking object; and setting the center-point coordinate of the second tracking object as the reference coordinate.

11. The method of claim 1, wherein the step of giving a value to each coordinate of the action trajectory comprises setting the same or different values according to the coordinates of different positions of the action trajectory.

12. The method of claim 1, wherein the dimensionality of the image frame may be reduced, or the image frame may be converted from the RGB color model to the HSV color model.

13. A device for simulating the action trajectory of an object, comprising: at least one memory storing a plurality of instructions; and at least one processor coupled to the memory, wherein when the instructions are executed the processor is configured to: capture an image frame from a video; determine, in the image, a plurality of outline coordinates of a first tracking object according to a first feature of the first tracking object; identify a first frame shape according to the outline coordinates, wherein the position and size of the first frame shape are associated with the outline coordinates; generate a vector according to the relative relationship between a reference coordinate and the center-point coordinate of the first frame shape, and simulate an action trajectory of a target object according to the vector, wherein the target object is associated with the first tracking object; and give a value to each coordinate of the action trajectory and plot it at the corresponding coordinates of a data visualization diagram.

14. The device of claim 13, wherein the target object may be the same as the first tracking object or may be directly or indirectly connected to the first tracking object.

15. The device of claim 13, wherein the processor is configured such that capturing the image comprises setting an area to be monitored so that only the area to be monitored is captured when capturing the video, and plotting to the data visualization diagram comprises generating the data visualization diagram only within the area to be monitored.

16. The device of claim 15, wherein capturing the image frame may further comprise dilating the area to be monitored so as to capture a dilated area to be monitored.

17. The device of claim 13, wherein determining the outline coordinates of the first tracking object in the image comprises dilating and eroding the outline of the first tracking object.

18. The device of claim 13, wherein the processor being configured to simulate the action trajectory comprises simulating the action trajectory according to an action trajectory parameter and the vector.

19. The device of claim 13, wherein the processor is further configured to: calculate an outline area according to the outline coordinates; compare the outline area with an actual area of the first tracking object to produce a comparison result; and exclude, according to the comparison result, the outline coordinates outside an error range.

20. The device of claim 13, wherein the processor is further configured to: calculate a first frame area of the first frame shape; compare the first frame area with an estimated area of the first frame shape to produce a comparison result; and exclude, according to the comparison result, the first frame shape outside an error range, thereby excluding the outline coordinates corresponding to the first frame shape outside the error range.

21. The device of claim 13, wherein the processor is further configured to: associate the action trajectory with at least one reference indicator; and merge the reference indicator into the data visualization diagram.

22. The device of claim 13, wherein the processor is further configured to: determine the center-point coordinate of a second tracking object, wherein the second tracking object has a second feature different from the first feature of the first tracking object; and use the center-point coordinate of the second tracking object as the reference coordinate.

23. The device of claim 13, wherein, when giving a value to each coordinate of the action trajectory, the processor may be configured to set the same or different values according to the coordinates of different positions of the action trajectory.

24. The device of claim 13, wherein the dimensionality of the image frame may be reduced, or the image frame may be converted from the RGB color model to the HSV color model.
TW110138215A 2021-10-14 2021-10-14 A method and apparatus for simulating the acting track of an object TWI835011B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW110138215A TWI835011B (en) 2021-10-14 2021-10-14 A method and apparatus for simulating the acting track of an object
CN202211013373.3A CN115239939A (en) 2021-10-14 2022-08-23 Method and device for simulating action track of object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110138215A TWI835011B (en) 2021-10-14 2021-10-14 A method and apparatus for simulating the acting track of an object

Publications (2)

Publication Number Publication Date
TW202316314A TW202316314A (en) 2023-04-16
TWI835011B true TWI835011B (en) 2024-03-11

Family

ID=83681505

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110138215A TWI835011B (en) 2021-10-14 2021-10-14 A method and apparatus for simulating the acting track of an object

Country Status (2)

Country Link
CN (1) CN115239939A (en)
TW (1) TWI835011B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI294107B (en) * 2006-04-28 2008-03-01 Univ Nat Kaohsiung 1St Univ Sc A pronunciation-scored method for the application of voice and image in the e-learning
TWI507919B (en) * 2013-08-23 2015-11-11 Univ Kun Shan Method for tracking and recordingfingertip trajectory by image processing
TWI633521B (en) * 2016-02-04 2018-08-21 南韓商高爾縱新維度控股有限公司 Apparatus for base-ball practice, sensing device and sensing method used to the same and control method for the same
TW201843650A (en) * 2016-01-14 2018-12-16 南韓商高爾縱新維度控股有限公司 Apparatus for base-ball practice, sensing device and sensing method used to the same and control method for the same
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method

Also Published As

Publication number Publication date
CN115239939A (en) 2022-10-25
TW202316314A (en) 2023-04-16
