TW202121332A - Method of acquiring detection zone in image and method of determining zone usage - Google Patents

Method of acquiring detection zone in image and method of determining zone usage

Info

Publication number
TW202121332A
Authority
TW
Taiwan
Prior art keywords
detection area
computing device
detected
obtaining
detection
Prior art date
Application number
TW108142605A
Other languages
Chinese (zh)
Other versions
TWI730509B (en)
Inventor
陳幼剛
鍾俊魁
Original Assignee
英業達股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 英業達股份有限公司
Priority to TW108142605A
Publication of TW202121332A
Application granted
Publication of TWI730509B

Landscapes

  • Image Analysis (AREA)

Abstract

A method of acquiring a detection zone in an image comprises: sequentially acquiring, by a camera, a plurality of images associated with a captured scene; computing, by a computing device, a plurality of movement trajectories of a plurality of objects in the images; performing, by the computing device, a clustering procedure on the movement trajectories to obtain a detection zone; and displaying the detection zone together with another image that is associated with the captured scene but different from said images.

Description

Method of acquiring a detection zone in an image and method of determining zone usage

The present invention relates to a method of acquiring a detection zone in an image and a method of determining zone usage, and more particularly to a method of acquiring a detection zone and determining zone usage based on multiple movement trajectories of multiple objects in images.

To monitor the travel direction, travel time, and dwell time of vehicles on the road, various monitoring methods have been developed. Most of them, however, still rely on surveillance cameras capturing multiple scenes and on monitoring personnel judging from the captured images whether anything abnormal occurs in those scenes. Moreover, whenever a surveillance camera is installed at a new site, lanes running in different directions still have to be identified manually, and the region of interest (ROI) or detection zone of each lane has to be drawn accordingly.

However, a manually drawn ROI may not be the most effective detection zone, so traffic statistics computed within it can be inaccurate. In addition, if multiple cameras are installed at a new site, the detection zone of each camera's scene must still be divided manually, wasting even more time. Furthermore, a camera installed at the monitored site may be displaced by external forces (force majeure such as vibration or wind), shifting the previously configured detection zone and forcing monitoring personnel to travel to the site to adjust the displaced camera.

In view of the above, the present invention provides a method of acquiring a detection zone in an image and a method of determining zone usage that meet the above needs, so that when multiple cameras are installed at a new site, monitoring personnel do not have to divide the detection zones, which saves the time of manual division. In addition, according to the method of determining zone usage in one or more embodiments of the present invention, after the detection zone is acquired, a computing device can determine the flow of objects in the captured scene, whether any object behaves abnormally, and whether the camera has shifted, and notify the monitoring unit accordingly; when the camera has shifted, the accuracy of the detection zone is not significantly affected even if the monitoring personnel do not immediately adjust the displaced camera.

According to an embodiment of the present invention, a method of acquiring a detection zone in an image comprises: sequentially acquiring, by a camera, a plurality of images associated with a captured scene; computing, by a computing device, a plurality of movement trajectories of a plurality of objects in the images; performing, by the computing device, a clustering procedure on the movement trajectories to obtain a detection zone; and displaying, on a display, the detection zone and another image that is associated with the captured scene and different from said images.

According to an embodiment of the present invention, a method of determining zone usage comprises: sequentially acquiring, by a camera, a plurality of images associated with a captured scene; computing, by a computing device, a plurality of movement trajectories of a plurality of objects in the images; performing, by the computing device, a clustering procedure on the movement trajectories to obtain a detection zone; performing, by the computing device, an event detection procedure based on the detection zone, wherein the event detection procedure determines, by the computing device, whether the behavior of an object to be detected complies with an event rule; and outputting, by the computing device, a detection result of the event detection procedure.

The above description of the disclosure and the following description of the embodiments are intended to demonstrate and explain the spirit and principles of the present invention, and to provide further explanation of the scope of the claims of the present invention.

The detailed features and advantages of the present invention are described in detail in the embodiments below, and the content is sufficient for anyone skilled in the relevant art to understand the technical content of the present invention and implement it accordingly. Based on the content disclosed in this specification, the claims, and the drawings, anyone skilled in the relevant art can readily understand the objects and advantages of the present invention. The following embodiments further illustrate the aspects of the present invention in detail but do not limit the scope of the present invention in any respect.

The method of acquiring a detection zone in an image disclosed in the present invention is used to acquire a detection zone of a scene captured by a camera. For example, the detection zone may be the detection zone of a captured scene such as a surface road, a highway, a department store, a shopping mall, or a farm, where the captured scene is preferably a site with multiple moving objects; however, the present invention is not limited thereto. For ease of description, the detection zone acquisition methods disclosed in the following embodiments take a road as the example captured scene.

Please refer to FIG. 1 and FIG. 2 together. FIG. 1 is a schematic diagram of a method of acquiring a detection zone in an image according to an embodiment of the present invention, and FIG. 2 is a flowchart of a method of acquiring a detection zone in an image according to an embodiment of the present invention.

Please refer to step S01: a camera sequentially acquires a plurality of images associated with a captured scene. The camera is, for example, a surveillance camera installed beside a road, the captured scene is the road scene shot by the camera, and each image is an image of the road as shown in FIG. 1. In detail, the images acquired by the camera are images sequentially acquired at different points in time, and they contain a plurality of objects O as shown in FIG. 1.

After the camera acquires the images, the computing device computes, in step S03, a plurality of movement trajectories MT of the objects in the images. In detail, the camera acquires a first image at a first capture time, and the computing device uses a neural-network deep-learning method to recognize the objects in the first image and the first positions at which the objects are respectively located in the first image, with the confidence value of the recognition result being higher than a threshold; the camera then acquires a second image at a second capture time, and the computing device uses the neural-network deep-learning method to recognize the objects in the second image and the second positions at which the objects are respectively located in the second image, again with the confidence value of the recognition result being higher than the threshold. The computing device then obtains the movement trajectory MT of each object from the first position and the second position of that object. In other words, the computing device computes the movement trajectory MT of each object based on the recognition confidence values produced by the neural-network deep-learning method for each object in the images and on the positions of each object. The computing device is, for example, a central processor of a monitoring unit or a cloud server with computing capability, and the neural-network deep-learning method is, for example, a convolutional neural network (CNN) of artificial intelligence (AI) technology; however, the present invention is not limited thereto.

Please continue to refer to step S03. In detail, when the computing device recognizes an object in an image with the neural-network deep-learning method, it can attach the aforementioned confidence value, which represents how certain the computing device is of its judgment after classifying the object. When the confidence value reaches the threshold (for example, 70%), the judgment made by the computing device with the neural-network deep-learning method is considered reliable. For example, the computing device recognizes the object O in the image shown in FIG. 1 as a car and produces a confidence value corresponding to that recognition; when the confidence value reaches 70%, the result that the object O is a car is considered reliable.

Please continue to refer to step S03. After the computing device recognizes the object O, it obtains the coordinate positions of the object O in the images in chronological order. Taking a single object O as an example, the computing device obtains the coordinate position of the object O in each of the images and connects these coordinate positions in chronological order to obtain the movement trajectory MT of the object O.
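For illustration only, the following is a minimal Python sketch of steps S01 to S03 as described above. The detector interface (`detector.detect`), the `object_id` field, and the data layout are assumptions introduced here for the example; the embodiment only requires a CNN-style recognizer that reports, per object, a position and a confidence value.

```python
from collections import defaultdict

CONFIDENCE_THRESHOLD = 0.7  # e.g. 70%, as in the embodiment above

def build_trajectories(frames, detector, threshold=CONFIDENCE_THRESHOLD):
    """Connect each object's per-frame coordinate positions, in chronological
    order, into a movement trajectory MT.

    `frames` is an iterable of (timestamp, image) pairs; `detector.detect(image)`
    is assumed (hypothetically) to return a list of dicts such as
    {"object_id": 7, "center": (x, y), "confidence": 0.92}.
    """
    trajectories = defaultdict(list)  # object_id -> [(timestamp, (x, y)), ...]
    for timestamp, image in frames:
        for det in detector.detect(image):
            # Keep only recognitions whose confidence value reaches the threshold.
            if det["confidence"] >= threshold:
                trajectories[det["object_id"]].append((timestamp, det["center"]))
    # Sort each object's positions by capture time to obtain its trajectory MT.
    return {oid: sorted(points) for oid, points in trajectories.items()}
```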

After the computing device obtains the movement trajectories MT, it performs, in step S05, a clustering procedure on the trajectories MT to obtain a detection zone DZ. The clustering procedure comprises obtaining, by the computing device, several boundary points based on the trajectories MT and obtaining the detection zone DZ from those boundary points. Obtaining the detection zone DZ from the trajectories MT may be based on a probability distribution function; for ease of the following description, a normal distribution function is used as the example. The normal distribution function is preferably a bivariate normal distribution function, one dimension of which is the density of the trajectories MT and the other dimension of which is the distribution of the intersections of the trajectories MT with a reference line.

Please refer to FIG. 3(a), which is a schematic diagram of a method of acquiring a detection zone in an image according to an embodiment of the present invention. In detail, performing the clustering procedure on the trajectories MT to obtain boundary points may be implemented by having the computing device fit a normal distribution function to the trajectories MT and defining the boundary lines of the detection zone DZ by the two boundary values of a confidence interval of that function, where the confidence interval is, for example, 68% to 95% of the normal distribution function; however, the present invention is not limited thereto. Specifically, according to this normal-distribution implementation, the normal distribution function of several first intersections N1 and the two first boundary points BP1 of its confidence interval are obtained, the normal distribution function of several second intersections N2 and the two second boundary points BP2 of its confidence interval are obtained, and the region enclosed by the two first boundary points BP1 and the two second boundary points BP2 is taken as the detection zone DZ. The first intersections N1 are the intersections of the trajectories MT with a reference line L1, and the second intersections N2 are the intersections of the trajectories MT with a reference line L2. In particular, each trajectory MT extends from its start point to its end point; the first intersections N1 preferably include the start point of one of the trajectories MT, and the second intersections N2 preferably include the end point of one of the trajectories MT.
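For illustration only, the boundary-point computation of FIG. 3(a) may be sketched as follows, under the assumptions that the intersections with each reference line are reduced to scalar coordinates along that line and that the confidence interval is expressed as a symmetric z-interval of a fitted normal distribution (the 68% to 95% range mentioned above corresponds roughly to z between 1 and 2). The function names are illustrative and not taken from the embodiment.

```python
import numpy as np

def interval_boundary_points(intersection_coords, z=1.0):
    """Fit a 1-D normal distribution to the intersection coordinates along one
    reference line and return the two boundary points of its confidence
    interval (mean +/- z * std), e.g. z = 1.0 for roughly 68% coverage."""
    coords = np.asarray(intersection_coords, dtype=float)
    mu, sigma = coords.mean(), coords.std()
    return mu - z * sigma, mu + z * sigma

def detection_zone(intersections_l1, intersections_l2, z=1.0):
    """Enclose the detection zone DZ with the two boundary points BP1 on
    reference line L1 and the two boundary points BP2 on reference line L2."""
    bp1 = interval_boundary_points(intersections_l1, z)  # first boundary points, on L1
    bp2 = interval_boundary_points(intersections_l2, z)  # second boundary points, on L2
    return {"BP1": bp1, "BP2": bp2}
```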

Please refer to FIG. 3(b), which is a schematic diagram of another method of obtaining the boundary line BL of the detection zone DZ. This method of obtaining the boundary line BL is based on at least two groups of trajectories MT; that is, the normal distribution formed by the intersections of the trajectories MT with a single reference line has at least two confidence intervals. For ease of understanding, only trajectories MT that form two confidence intervals are described below. After obtaining the two confidence intervals corresponding to the reference line L1 and the two confidence intervals corresponding to the reference line L2, the computing device obtains a center line CL1 from the two midpoints of the confidence intervals on the reference lines L1 and L2 that are associated with one group of the trajectories MT, obtains a center line CL2 from the two midpoints of the confidence intervals on the reference lines L1 and L2 that are associated with the other group of the trajectories MT, and then obtains the boundary line BL of the detection zone from the two center lines CL1 and CL2. The normal distribution functions for the two groups of trajectories MT may be fitted by means of the reference lines L1 and L2 as described above, so the details of the fitting are not repeated here.

In detail, please first refer to the group of trajectories MT on the left in FIG. 3(b). After the computing device fits two normal distribution functions to these trajectories MT, it uses the line connecting the first vertex CP1 and the second vertex CP2 of the confidence intervals of the two normal distribution functions as the first center line CL1, where the first vertex CP1 and the second vertex CP2 are, for example, the 50% points of the two normal distribution functions, respectively; however, the present invention is not limited thereto. Please refer to the other group of trajectories MT on the right in FIG. 3(b); similarly, the computing device uses the line connecting the first vertex CP1' and the second vertex CP2' of the confidence intervals of the other two normal distribution functions as the second center line CL2. The computing device then divides the distance between the first center line CL1 and the second center line CL2 equally to obtain the boundary line BL, so that the distance from any point on the boundary line BL to the first center line CL1 equals the distance from that point to the second center line CL2. In addition, after obtaining the first vertices CP1/CP1' and the second vertices CP2/CP2', the computing device may also obtain a first intermediate point BP1 between the first vertices CP1 and CP1' and a second intermediate point BP2 between the second vertices CP2 and CP2', and use the line segment connecting the first intermediate point BP1 and the second intermediate point BP2 as the boundary line BL.

Please continue with the embodiment of FIG. 3(b) above. After the boundary line BL is obtained, the computing device may mirror the boundary line BL to the other side of the first center line CL1, using CL1 as the axis of symmetry, to obtain another boundary line, and enclose the detection zone DZ with the two boundary lines BL and the two reference lines L1 and L2. Similarly, the computing device may also obtain a detection zone DZ symmetrically about the second center line CL2. Note that when the first center line CL1 and the second center line CL2 are both used as axes of symmetry, so that three boundary lines BL are obtained, the computing device may move the two boundary lines BL on either side of the first center line CL1 toward CL1 by a preset distance to serve as two updated boundary lines BL, and move the two boundary lines BL on either side of the second center line CL2 toward CL2 by the same preset distance to serve as two updated boundary lines BL, where the preset distance may be 20% of the distance between the two first vertices CP1 and CP1', or 20% of the distance between the two second vertices CP2 and CP2'; the present invention is not limited thereto. In this way, two detection zones DZ that originally adjoined each other can be adjusted so that they are separated from each other.
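For illustration only, the FIG. 3(b) construction may be sketched as follows, assuming each group's distribution peaks on the reference lines L1 and L2 are already known as 2-D points; the helper below only shows taking the connecting center lines CL1/CL2 and an approximately equidistant boundary line BL between them, and the coordinates in the usage example are hypothetical.

```python
import numpy as np

def center_line(peak_on_l1, peak_on_l2):
    """Center line through one group's distribution peaks on reference lines
    L1 and L2, returned as its two endpoints (e.g. CP1 and CP2 for CL1)."""
    return np.asarray(peak_on_l1, float), np.asarray(peak_on_l2, float)

def boundary_line(cl1, cl2):
    """Boundary line BL lying between CL1 and CL2, approximated here by
    averaging the corresponding endpoints of the two center lines, i.e. the
    midpoint BP1 on L1 and the midpoint BP2 on L2."""
    (a1, b1), (a2, b2) = cl1, cl2
    return (a1 + a2) / 2.0, (b1 + b2) / 2.0

# Hypothetical usage: left-group peaks CP1, CP2 and right-group peaks CP1', CP2'.
cl1 = center_line((120.0, 0.0), (150.0, 480.0))   # CL1 from CP1 to CP2
cl2 = center_line((260.0, 0.0), (300.0, 480.0))   # CL2 from CP1' to CP2'
bl = boundary_line(cl1, cl2)                       # BL between the two groups
```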

In addition, if the image contains at least four groups of trajectories MT, the computing device may execute the method of obtaining a boundary line BL from normal distribution functions as disclosed in FIG. 3(b) to obtain two boundary lines BL from those four groups of trajectories MT, and then enclose the detection zone DZ with the two boundary lines BL and the two reference lines L1 and L2.

With the boundary-line acquisition methods shown in FIGS. 3(a) and 3(b), the computing device can obtain the detection zone DZ from the boundary lines. In the example of FIG. 1, the detection zone DZ lies within the lane in which vehicles travel, and the boundary lines BL coincide with or are adjacent to the lane markings that divide the lanes.

After the computing device obtains the detection zone DZ, the detection-zone acquisition method of this embodiment proceeds to step S07, in which the display shows the detection zone DZ and another image that is associated with the captured scene and different from said images, where the other image is preferably a real-time image overlaid with the detection zone. The display is, for example, a display screen installed at the monitoring unit; however, the present invention is not limited thereto.

In detail, after the detection zone DZ is obtained, the camera continues to acquire real-time images of the captured scene for the display to show; that is, the time points at which the camera acquired said images are earlier than the time point at which the camera acquires the real-time image, and the detection zone DZ is marked on the real-time image shown on the display. One or more detection zones DZ may be marked on the real-time image; the present invention is not limited thereto.

After the display shows the detection zone DZ and the real-time image (step S07), an implementation of this embodiment may further acquire a plurality of images with the camera again (step S01) each time an interval elapses, so as to obtain detection zones DZ corresponding to the traffic conditions of different time periods.

Please refer to FIG. 4, which is a flowchart of a detection-zone acquisition method according to another embodiment of the present invention. Steps S01 to S05 and steps S01' to S05' shown in FIG. 4 are the same as steps S01 to S05 shown in FIG. 2, so the same operational details are not repeated here. However, in the detection-zone acquisition method disclosed in FIG. 4, the images acquired in step S01 differ from those acquired in step S01', and the trajectories computed in step S03 differ from those computed in step S03'; therefore, the first detection zone obtained by performing the clustering procedure on the trajectories of step S03 (step S05) differs from the second detection zone obtained by performing the clustering procedure on the trajectories of step S03' (step S05'). Moreover, in this embodiment, steps S01 to S05 are preferably executed first and steps S01' to S05' are executed afterwards.

In detail, after the first detection zone is obtained in step S05 and a first image with the first detection zone is displayed in step S07, the computing device then executes steps S01' to S05' to obtain a second detection zone, where there may be one or more first detection zones and second detection zones; the present invention is not limited thereto. In other words, the first image and the second image described in steps S01 and S01' are preferably real-time images of the same captured scene at different points in time.

After the first detection zone and then the second detection zone are obtained, the computing device further compares the first detection zone with the second detection zone in step S09 to obtain a comparison value, which represents the amount of overlap between the first detection zone and the second detection zone. In detail, the first detection zone and the second detection zone that are associated with each other are preferably detection zones obtained on different days but during the same time period, and the computing device determines whether the camera has shifted by comparing the first and second detection zones of different days but the same time period. The amount of overlap between the two zones may be obtained by determining the proportion of the second detection zone that overlaps the first detection zone and using that proportion as the comparison value.

Please continue to step S11, in which the computing device determines whether the comparison value is lower than an overlap threshold. In other words, the computing device determines, based on the comparison value and the overlap threshold, whether the camera itself has shifted, where the overlap threshold is, for example, 80%; however, the present invention is not limited thereto.

When the computing device determines in step S11 that the comparison value is lower than the overlap threshold, the camera itself may have shifted due to external forces such as wind or vibration, so the computing device then updates the detection zone with the second detection zone in step S13 and outputs a notification for the display to show in step S15. Conversely, if the comparison value is not lower than the overlap threshold, the camera has not shifted, or its shift is within the allowable range, and the computing device returns to step S09 to continue comparing the first detection zone with the second detection zone.
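For illustration only, one way to realize steps S09 to S15 is to rasterize both detection zones into boolean masks over the image and use the fraction of the second zone that overlaps the first as the comparison value; the mask representation and the placement of the 80% threshold are assumptions introduced for this sketch.

```python
import numpy as np

OVERLAP_THRESHOLD = 0.8  # e.g. 80%, as in the embodiment above

def zone_overlap_ratio(first_zone_mask, second_zone_mask):
    """Step S09: proportion of the second detection zone that overlaps the
    first detection zone, used as the comparison value."""
    second_area = second_zone_mask.sum()
    if second_area == 0:
        return 0.0
    overlap = np.logical_and(first_zone_mask, second_zone_mask).sum()
    return overlap / second_area

def check_camera_shift(first_zone_mask, second_zone_mask,
                       threshold=OVERLAP_THRESHOLD):
    """Steps S11-S15: if the comparison value is below the overlap threshold,
    adopt the second zone as the updated detection zone and raise a notification;
    otherwise keep the current zone and continue monitoring."""
    ratio = zone_overlap_ratio(first_zone_mask, second_zone_mask)
    if ratio < threshold:
        return second_zone_mask, "camera may have shifted"  # update + notify
    return first_zone_mask, None
```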

Note that although step S13 shown in FIG. 4 is executed before step S15, steps S13 and S15 may also be executed simultaneously, or step S13 may be executed after step S15; the present invention is not limited thereto.

In other words, updating the detection zone with the second detection zone in step S13 means using the most recently obtained second detection zone as the detection zone, and the notification output in step S15 informs the monitoring unit that the camera may have an abnormal condition. Accordingly, when the camera shifts, the detection zone of the captured scene is not affected even if staff do not immediately adjust the camera.

Please refer to FIG. 5, which is a flowchart of a method of determining zone usage according to an embodiment of the present invention. The method of determining zone usage disclosed in FIG. 5 is preferably used to determine whether the behavior of an object to be detected complies with an event rule. Steps S01 to S07 disclosed in FIG. 5 are the same as steps S01 to S07 disclosed in FIG. 2 and are therefore not repeated here. However, in the detection-zone-based violation determination method disclosed in FIG. 5, after the computing device performs the clustering procedure to obtain the detection zone (step S05), in addition to displaying, in step S07, the detection zone and another image that is associated with the captured scene and different from said images, the computing device may further perform, in step S08, an event detection procedure based on the detection zone to determine whether the behavior of an object to be detected complies with an event rule. Steps S07 and S08 may also be performed together. The event rule is, for example, at least one of: the flow of objects in the detection zone; that no object may be present in the detection zone (restricted-area detection); whether any object in the detection zone stays for a preset period of time (parking detection); whether any object in the detection zone moves in a preset direction (wrong-way detection); and whether the moving speed of any object in the detection zone falls within a preset speed range (speeding/slow-moving detection); however, the present invention is not limited thereto.

In detail, if the event rule is that no object may be present in the detection zone, performing the event detection procedure includes determining whether the coordinate position of the object to be detected falls within the detection zone. Similarly, if the event rule of the event detection procedure concerns the dwell time or movement of any object in the detection zone, and multiple coordinate positions of the object to be detected fall within the detection zone, then the event detection procedure uses, in addition to the coordinate positions of the object to be detected, the time information corresponding to those coordinate positions, so as to determine the dwell time or movement of the object to be detected from the coordinate positions and the time information.
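For illustration only, two of the event rules described above (restricted-area detection and parking/dwell-time detection) may be sketched as follows, assuming the detection zone is given as a polygon of vertices and the object's track as chronological (timestamp, (x, y)) samples; matplotlib's Path.contains_point is used here merely as a convenient point-in-polygon test.

```python
from matplotlib.path import Path

def in_zone(zone_polygon, point):
    """Restricted-area rule: does the object's coordinate position fall inside
    the detection zone? `zone_polygon` is a list of (x, y) vertices."""
    return Path(zone_polygon).contains_point(point)

def dwell_time_exceeded(zone_polygon, track, preset_seconds):
    """Parking rule: has the object stayed inside the detection zone for at
    least `preset_seconds`? `track` is a chronological list of
    (timestamp, (x, y)); this simple sketch treats the first and last in-zone
    samples as one continuous stay."""
    inside_times = [t for t, pos in track if in_zone(zone_polygon, pos)]
    return bool(inside_times) and (inside_times[-1] - inside_times[0]) >= preset_seconds
```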

Please continue with step S08. After the computing device executes the event detection procedure in step S08, it outputs the detection result of the event detection procedure in step S10. In detail, if the event detection procedure obtains the object flow in the detection zone (for example, the traffic flow), the detection result preferably includes the object flow in the detection zone or an abnormal condition of the object flow. In addition, the computing device may output the detection result to a memory device of the monitoring unit for recording, or to the display of the monitoring unit for showing; the present invention is not limited thereto.

The detection result may also include a notification that the computing device outputs for the display to show when the behavior of the object to be detected is determined not to comply with the event rule. Conversely, when the computing device determines in step S08 that the behavior of the object to be detected complies with the event rule, the method returns to step S01, in which the camera sequentially acquires a plurality of images associated with the captured scene, so as to update the detection zone.

Taking the vehicle example above, the object to be detected is a vehicle that appears in the captured scene after the detection zone has been obtained. Therefore, if the coordinate position of the vehicle (the object to be detected) falls within the detection zone, if the vehicle stays in the detection zone longer than the preset time of the event rule, if the vehicle moves in the detection zone in a direction that does not match the preset direction of the event rule (for example, driving the wrong way), or if the vehicle's moving speed in the detection zone falls outside the preset speed range, the computing device outputs a notification to the display to remind the monitoring unit that the captured scene may have an abnormal condition.
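For illustration only, the wrong-way and speeding/slow-moving rules from the vehicle example can be sketched in the same style, using consecutive coordinate positions and their corresponding time information; the unit-vector direction test and the speed units are assumptions made for this sketch.

```python
import numpy as np

def moves_along(track, preset_direction):
    """Wrong-way rule: True if the object's net displacement points in the
    preset direction (positive dot product with the preset unit vector)."""
    (_, start), (_, end) = track[0], track[-1]
    displacement = np.subtract(end, start)
    return float(np.dot(displacement, preset_direction)) > 0.0

def speed_in_range(track, min_speed, max_speed):
    """Speeding / slow-moving rule: the average speed over the track (distance
    per unit time, in whatever units the positions and timestamps use) must
    fall within the preset speed range."""
    (t0, p0), (t1, p1) = track[0], track[-1]
    if t1 == t0:
        return False
    speed = np.linalg.norm(np.subtract(p1, p0)) / (t1 - t0)
    return min_speed <= speed <= max_speed
```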

Please continue with step S08. Alternatively, when the computing device executes the event detection procedure in step S08 and determines that the behavior of the object to be detected does not comply with the event rule, the method may instead return to step S01, in which the camera sequentially acquires a plurality of images associated with the captured scene; and when the behavior of the object to be detected complies with the event rule, the computing device may output a notification for the display to show; the present invention is not limited thereto.

In summary, the detection-zone acquisition method according to one or more embodiments of the present invention can delineate an effective detection zone based on the movement trajectories of objects, and when multiple cameras are installed at a new site, monitoring personnel do not have to divide the detection zones, which saves the time of manual division. In addition, according to the method of determining zone usage in one or more embodiments of the present invention, after the detection zone is obtained, the computing device can determine the flow of objects in the captured scene, whether any object behaves abnormally, and whether the camera has shifted, and notify the monitoring unit accordingly; when the camera has shifted, the accuracy of the detection zone is not significantly affected even if the monitoring personnel do not immediately adjust the displaced camera.

Although the present invention is disclosed above by the foregoing embodiments, they are not intended to limit the present invention. Any changes and modifications made without departing from the spirit and scope of the present invention fall within the scope of patent protection of the present invention. For the scope of protection defined by the present invention, please refer to the appended claims.

O: object
MT: movement trajectory
DZ: detection zone
L1, L2: reference lines
N1: first intersection
N2: second intersection
BP1: first boundary point
BP2: second boundary point
CP1, CP1': first vertices
CP2, CP2': second vertices
CL1: first center line
CL2: second center line
BL: boundary line

FIG. 1 is a schematic diagram of a method of acquiring a detection zone in an image according to an embodiment of the present invention.
FIG. 2 is a flowchart of a method of acquiring a detection zone in an image according to an embodiment of the present invention.
FIGS. 3(a) and 3(b) are schematic diagrams of a method of acquiring a detection zone in an image according to an embodiment of the present invention.
FIG. 4 is a flowchart of a method of acquiring a detection zone in an image according to another embodiment of the present invention.
FIG. 5 is a flowchart of a violation determination method according to an embodiment of the present invention.

Claims (10)

1. A method of acquiring a detection zone, the method comprising: sequentially acquiring, by a camera, a plurality of images associated with a captured scene; computing, by a computing device, a plurality of movement trajectories of a plurality of objects in the images; performing, by the computing device, a clustering procedure on the movement trajectories to obtain a detection zone; and displaying, on a display, the detection zone and another image that is associated with the captured scene and different from said images.

2. The method of acquiring a detection zone according to claim 1, wherein the clustering procedure comprises: obtaining, by the computing device, two boundary points of a confidence interval of a probability distribution function based on intersections of the movement trajectories with a reference line; obtaining two boundary points of a confidence interval of another probability distribution function based on intersections of the movement trajectories with another reference line; and using a region enclosed by the four boundary points as the detection zone.

3. The method of acquiring a detection zone according to claim 1, wherein the method comprises: comparing, by the computing device, a first detection zone and a second detection zone that are associated with each other and obtained by performing the clustering procedure twice, to obtain a comparison value, wherein the images used to obtain the first detection zone are acquired earlier than the other images used to obtain the second detection zone; when the computing device determines that the comparison value is lower than an overlap threshold, updating the detection zone with the second detection zone; and outputting a notification for the display to show.

4. The method of acquiring a detection zone according to claim 1, wherein computing, by the computing device, the movement trajectories of the objects in the images comprises: computing, by the computing device, the movement trajectories from a plurality of first positions of the objects at a first capture time and a plurality of second positions of the objects at a second capture time.

5. The method of acquiring a detection zone according to claim 1, wherein the method further comprises: recognizing the objects in the images with a neural network deep-learning method and obtaining a confidence value associated with the objects; determining, by the computing device, whether the confidence value reaches a threshold; and when the confidence value reaches the threshold, computing, by the computing device, the movement trajectories of the objects in the images.
6. A method of determining zone usage, the method comprising: sequentially acquiring, by a camera, a plurality of images associated with a captured scene; computing, by a computing device, a plurality of movement trajectories of a plurality of objects in the images; performing, by the computing device, a clustering procedure on the movement trajectories to obtain a detection zone; performing, by the computing device, an event detection procedure based on the detection zone, wherein the event detection procedure determines, by the computing device, whether the behavior of an object to be detected complies with an event rule; and outputting, by the computing device, a detection result of the event detection procedure.

7. The method of determining zone usage according to claim 6, wherein the clustering procedure comprises: obtaining, by the computing device, two boundary points of a confidence interval of a probability distribution function based on intersections of the movement trajectories with a reference line; obtaining two boundary points of a confidence interval of another probability distribution function based on intersections of the movement trajectories with another reference line; and using a region enclosed by the four boundary points as the detection zone.

8. The method of determining zone usage according to claim 6, wherein the event rule is whether any object in the detection zone stays for a preset period of time, and determining, by the computing device based on the detection zone, whether the behavior of the object to be detected complies with the event rule comprises: determining, by the computing device based on the detection zone, whether a coordinate position of the object to be detected falls within the detection zone; and when the coordinate position of the object to be detected is determined to fall within the detection zone, determining, by the computing device, whether the time for which the coordinate position of the object to be detected falls within the detection zone reaches the preset period of time.
9. The method of determining zone usage according to claim 6, wherein the event rule is whether any object in the detection zone moves in a preset direction, and determining, by the computing device based on the detection zone, whether the behavior of the object to be detected complies with the event rule comprises: determining, by the computing device based on the detection zone, whether a coordinate position of the object to be detected falls within the detection zone; and when the coordinate position of the object to be detected is determined to fall within the detection zone, determining, by the computing device, whether the object to be detected moves in the preset direction based on a plurality of coordinate positions of the object to be detected and a plurality of corresponding pieces of time information.

10. The method of determining zone usage according to claim 6, wherein the event rule is whether the moving speed of any object in the detection zone falls within a preset speed range, and determining, by the computing device based on the detection zone, whether the behavior of the object to be detected complies with the event rule comprises: determining, by the computing device based on the detection zone, whether a coordinate position of the object to be detected falls within the detection zone; and when the coordinate position of the object to be detected is determined to fall within the detection zone, determining, by the computing device, whether the moving speed of the object to be detected falls within the preset speed range based on a plurality of coordinate positions of the object to be detected and a plurality of corresponding pieces of time information.
TW108142605A 2019-11-22 2019-11-22 Method of acquiring detection zone in image and method of determining zone usage TWI730509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108142605A TWI730509B (en) 2019-11-22 2019-11-22 Method of acquiring detection zone in image and method of determining zone usage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108142605A TWI730509B (en) 2019-11-22 2019-11-22 Method of acquiring detection zone in image and method of determining zone usage

Publications (2)

Publication Number Publication Date
TW202121332A 2021-06-01
TWI730509B TWI730509B (en) 2021-06-11

Family

ID=77516490

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108142605A TWI730509B (en) 2019-11-22 2019-11-22 Method of acquiring detection zone in image and method of determining zone usage

Country Status (1)

Country Link
TW (1) TWI730509B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI785655B (en) * 2021-06-18 2022-12-01 逢甲大學 Photographing system for sports and operation method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI244624B (en) * 2004-06-04 2005-12-01 Jin-Ding Lai Device and method for defining an area whose image is monitored
WO2011080900A1 (en) * 2009-12-28 2011-07-07 パナソニック株式会社 Moving object detection device and moving object detection method
WO2012014430A1 (en) * 2010-07-27 2012-02-02 パナソニック株式会社 Mobile body detection device and mobile body detection method
US20130286198A1 (en) * 2012-04-25 2013-10-31 Xerox Corporation Method and system for automatically detecting anomalies at a traffic intersection
TWI676969B (en) * 2018-10-08 2019-11-11 南開科技大學 Traffic light controlling system based on internet of things (iot) and method thereof


Also Published As

Publication number Publication date
TWI730509B (en) 2021-06-11
