TWI601425B - A method for tracing an object by linking video sequences - Google Patents


Info

Publication number
TWI601425B
TWI601425B TW100149787A
Authority
TW
Taiwan
Prior art keywords
camera
image
photographic
monitoring
window
Prior art date
Application number
TW100149787A
Other languages
Chinese (zh)
Other versions
TW201328358A (en)
Inventor
倪嗣堯
林仲毅
藍元宗
羅健誠
Original Assignee
大猩猩科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大猩猩科技股份有限公司 filed Critical 大猩猩科技股份有限公司
Priority to TW100149787A priority Critical patent/TWI601425B/en
Publication of TW201328358A publication Critical patent/TW201328358A/en
Application granted granted Critical
Publication of TWI601425B publication Critical patent/TWI601425B/en

Landscapes

  • Closed-Circuit Television Systems (AREA)

Description

A method for linking camera frames to form an object trajectory

The present invention relates to a multi-camera surveillance system, and more particularly to an object-linking correction method for correcting objects that a multi-camera surveillance system has erroneously linked across camera frames, and to a multi-camera surveillance system that uses this correction method.

A traditional camera surveillance system provides specific event-detection services for a single monitored area and reports all related video data and detection results to a central server. In video-surveillance applications, however, detecting events in only a single monitored area no longer meets practical needs. Post-event analysis in particular often requires a complete description of the times and locations at which the people and objects involved in an event appeared throughout the entire surveillance system, and a service that detects events in a single specific environment cannot satisfy this requirement. Multi-camera surveillance systems have therefore become the mainstream of today's surveillance systems.

In most multi-camera surveillance systems proposed to date, the frames captured by the cameras installed in the monitored areas are transmitted to a central server, which performs image analysis on the content of each camera's frames to obtain object-analysis results for each individual frame. The central server then derives the spatio-temporal correlation of the objects across the frames (that is, the relationship between the order in which each object appears in the monitored areas and the locations where it appears), and links a specific object across frames according to this correlation, so as to obtain the object's trajectory information and historical image sequence within the overall multi-camera surveillance environment.

Refer to U.S. Patent No. 7,242,423, entitled "Linking zones for object tracking and camera handoff". The multi-camera surveillance system of that patent performs image analysis independently on the video data captured by each camera, thereby obtaining detection and tracking results for each object within a single camera's monitored range. From these analysis results, the system extracts where and when each object appears in and leaves each camera's monitored range, and builds a probability distribution function from these entry and exit positions and their temporal relationships. Using this probability distribution function, the system can estimate the relationships among the objects appearing in the different camera frames, link a specific object across frames, and thereby obtain the object's historical images and trajectory information within the overall multi-camera surveillance environment.
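The handoff idea described above can be sketched minimally: from historical exit-to-entry time gaps between two cameras, build an empirical distribution, then score whether a new sighting plausibly continues an earlier one. The histogram binning, the data, and all names below are illustrative assumptions, not the patented construction.

```python
from collections import Counter

def build_transition_pdf(time_gaps, bin_size=1.0):
    """Estimate P(gap) from historical exit->entry time gaps (in seconds)."""
    bins = Counter(int(g // bin_size) for g in time_gaps)
    total = sum(bins.values())
    return {b: c / total for b, c in bins.items()}

def handoff_score(pdf, exit_time, entry_time, bin_size=1.0):
    """Likelihood that an object exiting at exit_time is the one entering at entry_time."""
    gap_bin = int((entry_time - exit_time) // bin_size)
    return pdf.get(gap_bin, 0.0)

# Historical gaps between camera A's exit zone and camera B's entry zone (assumed data).
history = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 12.0]
pdf = build_transition_pdf(history)
print(handoff_score(pdf, exit_time=100.0, entry_time=105.0))  # 3/7: a typical gap
print(handoff_score(pdf, exit_time=100.0, entry_time=140.0))  # 0.0: gap never observed
```

A real system would smooth the distribution rather than use raw histogram bins, but the scoring step is the same: unlikely transition times get near-zero handoff scores.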

Refer also to published R.O.C. (Taiwan) patent application TW200943963, entitled "INTEGRATED IMAGE SURVEILLANCE SYSTEM AND MANUFACTURING METHOD THEREOF". This application proposes an image registration method that stitches the frames captured by multiple cameras into a single picture, so as to reduce the user's monitoring burden. Although stitching multiple camera frames into a single picture can effectively reduce that burden, the application does not propose a corresponding intelligent content-analysis system for multi-camera surveillance. The stitched pictures could serve as the video data of a multi-camera surveillance system, but because each stitched picture is very large, it would still impose a heavy computational burden on an intelligent multi-camera content-analysis system.

The multi-camera surveillance systems described above all trust the image-analysis and object-linking algorithms they use, and automatically link what they take to be the same object across camera frames to produce the object's trajectory images. In practice, however, differences in the deployment environment can cause these algorithms to err to varying degrees. The systems described above may therefore link different objects together by mistake, with no way to correct the error promptly.

An embodiment of the invention provides a method for linking, and correcting the linking of, objects across the camera frames obtained by a multi-camera surveillance system. The method is used in a multi-camera surveillance system, and its steps are as follows. A user interaction platform is provided so that the user can select a specific object to be tracked. Taking the capture time of the current frame as the dividing point, the frames of related objects that appear in each camera's monitored view before and after that time and that are correlated with the specific object are displayed, in chronological order, in a previous-related-object list and a subsequent-related-object list on the user interaction platform. In addition, the system computes relevance scores from the correlations between objects and thereby produces cross-camera linking results for the object. According to the same time division, the linking results are displayed in a previous-linking-result list and a subsequent-linking-result list: the frames in the linked trajectory image sequence of the specific object that were captured by other cameras before the capture time of the current frame are arranged, by time and by relevance score, in the previous-linking-result list of the user interaction platform, and the frames in the automatically linked trajectory image sequence that were captured by other cameras after the capture time of the current frame are likewise arranged in the subsequent-linking-result list. By consulting the previous-related-object list, the subsequent-related-object list, and the previous and subsequent linking results, the user judges whether the linking result is correct; if an error is found, the user clicks on a specific object in a specific frame in the object list whose relevance score is not the highest, thereby instructing the multi-camera surveillance system to correct the automatic linking result for the specific object.

An embodiment of the invention provides a multi-camera surveillance system comprising a plurality of video capture-and-analysis units, a plurality of video-analysis-data consolidation units, a video and analysis data database, a multi-video content analysis unit, and a user interaction platform. Each video capture-and-analysis unit is implemented as a camera connected to a video-analysis device and is deployed at a location in the monitored environment of the multi-camera surveillance system, where the video-analysis device may be a computer or an embedded system. Each video capture unit is connected to its associated video-analysis-data consolidation unit. The video and analysis data database is connected to the consolidation units, and the multi-video content analysis unit is connected to the database. The user interaction platform is connected to the analysis unit; it lets the user select a specific object to be tracked and, by consulting the previous-related-object list, the subsequent-related-object list, and the previous and subsequent linking results provided by the platform, click on a designated correction object in a specific frame whose relevance in the subsequent-object list is not the highest, thereby instructing the analysis unit to correct the automatic linking result for the specific object.

Taking the capture time of the current frame as the dividing point, the user interaction platform displays, in chronological order, the frames of related objects that appear in each camera's monitored view before and after that time and that are correlated with the specific object, in its previous-related-object list and subsequent-related-object list. In addition, the system computes relevance scores from the correlations between objects and thereby produces cross-camera linking results. According to the same time division, the linking results are displayed in the previous-linking-result and subsequent-linking-result lists: the frames in the linked trajectory image sequence of the specific object that contain the object and were captured by other cameras before the capture time of the current frame are arranged, by time and by relevance score, in the previous-linking-result list of the user interaction platform, and the frames in the automatically linked trajectory image sequence that contain the object and were captured by other cameras after the capture time of the current frame are likewise arranged in the subsequent-linking-result list. By consulting the previous-related-object list, the subsequent-related-object list, and the previous and subsequent linking results, the user clicks on a specific object in a specific frame whose relevance score in the subsequent-object list is not the highest, thereby instructing the multi-camera surveillance system to correct the automatic linking result for the specific object.

In summary, the multi-camera surveillance system provided by embodiments of the invention incorporates a method for correcting the linking of objects across camera frames, and offers a user interaction platform through which the user can apply this correction method to fix the linking errors that a conventional multi-camera surveillance system may make when linking objects automatically.

To make the above and other objects, features, and advantages of the invention more readily apparent, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.

So that the invention may be fully understood, embodiments are described in detail below with reference to the accompanying drawings. It should be noted, however, that the following embodiments are not intended to limit the invention.

Refer to FIG. 1, a block diagram of a multi-camera security surveillance system according to an embodiment of the invention. The multi-camera surveillance system 100 includes a plurality of video capture-and-analysis units 110, a plurality of video-analysis-data consolidation units 120, a video and analysis data database 130, a multi-video content analysis unit 140, and a user interaction platform 150, where the video capture-and-analysis units 110 are deployed at different locations to monitor different areas. Each video capture-and-analysis unit 110 is connected to a corresponding video-analysis-data consolidation unit 120, and the consolidation units 120 are in turn connected to the video and analysis data database 130. The database 130 is connected to the multi-video content analysis unit 140, which is connected to the user interaction platform 150.

A video capture-and-analysis unit 110 obtains the frames of the area it monitors and performs image analysis on them, extracting the objects in each frame together with physically meaningful feature data for each object to produce an object-analysis result. The unit 110 then transmits the image sequence and the object-analysis result to its corresponding video-analysis-data consolidation unit 120; the frames captured by the unit 110 at consecutive time points constitute an image sequence.

More specifically, a video capture-and-analysis unit 110 may be implemented as a digital camera connected to a video-analysis device, where the device may be a computer or an embedded-system platform. The digital camera captures a frame at each time point, and the video-analysis device analyzes the captured frames to obtain object-analysis results containing, for each detected object, its unique identifier, position, and features, and then passes the object-analysis results and the image sequence to the consolidation unit 120.

To transmit the image sequence and object-analysis results efficiently, the consolidation unit 120 compresses and encodes the received object-analysis results together with the corresponding image sequence to produce a compressed result. The unit 120 then stores the compressed result in the video and analysis data database 130; the compressed result carries both the object-analysis results and the frame data.

More specifically, the consolidation unit 120 compresses the image sequence with a video-compression method (for example a high-efficiency video codec such as H.264) to reduce the required transmission bandwidth. For the object-analysis results, the unit 120 first inserts timing information into the results so that each result can be matched to its frame, and then performs whatever conversion the application requires (such as data compression) to reduce the amount of data transmitted.

To keep the frames and their object-analysis results synchronized while further reducing the amount of data transmitted, the consolidation unit 120 can, in addition to inserting timing information into the results, hide the object-analysis result of each frame inside the video data of the corresponding image sequence, either with data-hiding techniques or by using the user data zone defined by the video-compression standard. For example, the unit 120 can take the bit stream of the compressed object-analysis results and hide those bits, in order, in the discrete cosine transform (DCT) coefficients of the video data corresponding to the image sequence.
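As a rough illustration of the DCT-coefficient hiding mentioned above (an assumption for demonstration, not the patented embedding scheme), metadata bits can be written into the least significant bits of nonzero quantized coefficients and read back in the same order:

```python
def embed_bits(coeffs, payload):
    """Hide payload bytes, LSB-first, in the LSBs of nonzero coefficients."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out, bi = list(coeffs), 0
    for idx, c in enumerate(out):
        if bi == len(bits):
            break
        if c != 0:  # skip zeros so run-length coding of the block is preserved
            out[idx] = (c & ~1) | bits[bi]
            bi += 1
    if bi != len(bits):
        raise ValueError("not enough nonzero coefficients to carry the payload")
    return out

def extract_bits(coeffs, n_bytes):
    """Recover n_bytes of hidden payload from the nonzero-coefficient LSBs."""
    bits = [c & 1 for c in coeffs if c != 0][:n_bytes * 8]
    return bytes(
        sum(b << i for i, b in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes)
    )

# Assumed quantized DCT coefficients of one block, and a one-byte object ID (42).
coeffs = [12, -7, 0, 3, 5, 0, 9, 4, -2, 6, 8, 11, 0, 7, 5, 13, 2]
stego = embed_bits(coeffs, b"\x2a")
print(extract_bits(stego, 1))  # b'*'  (byte value 42 recovered)
```

A production system would instead more likely use the codec's user data field (e.g. an H.264 SEI message), since LSB changes slightly perturb the decoded pixels.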

The video and analysis data database 130 stores the compressed results delivered by the consolidation units 120. Because a compressed result carries both the object-analysis results and the frame data, the frames captured by each unit 110 in its monitored area, together with the spatio-temporal relationships of the objects' appearances, are all stored in the database 130, ready for the multi-video content analysis unit 140 to read when it analyzes a specific object.

The multi-video content analysis unit 140 reads from the database 130 the data required to analyze a specific object, and analyzes the correlation between that object and the objects in the frames of each unit 110, so as to link together the complete historical trajectory information of the specific object and thereby produce its historical image sequence.

More specifically, the unit 140 retrieves from the database 130 the data needed to analyze the specific object and extracts the object-analysis results embedded in that data, then scores the correlation between the specific object and each object in each frame to obtain a correlation-analysis result. Based on this result, the unit 140 links the appearances of the specific object across the frames and produces the linking result with the highest relevance score, which is taken as the object's trajectory image. The unit 140 then presents the resulting trajectory image of the specific object to the user through the user interaction platform 150, and feeds the video data corresponding to the trajectory image back to the database 130 for storage.

In other words, a specific object that crosses the monitored areas of several units 110 is recorded in the frames of each of those units, and the unit 140 can link the object's appearances across those frames in chronological order to form the object's trajectory image. This trajectory image lets the user, through the user platform 150, quickly see when the specific object appears in and leaves each monitored area, and thus learn the object's complete behavioral history within the overall multi-camera surveillance environment.
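The chronological assembly described above amounts to sorting the per-camera sightings of one object by capture time; the data layout below is an illustrative assumption:

```python
def assemble_trajectory(sightings):
    """Sort (capture_time, camera) sightings into the object's cross-camera trajectory."""
    return sorted(sightings, key=lambda s: s[0])

# Sightings of the same object reported by three cameras, in arrival order.
sightings = [(130.0, "cam3"), (100.0, "cam1"), (115.0, "cam2")]
trajectory = assemble_trajectory(sightings)
print([cam for _, cam in trajectory])  # ['cam1', 'cam2', 'cam3']
```

The first and last sighting within each camera then give the object's entry and exit times for that monitored area.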

The user interaction platform 150 lets the user retrieve the frames of each monitored area from the database 130 and directly control synchronized playback for each area. It can also perform specific event detection and specific object tracking according to the monitoring conditions set by the user.

Moreover, because a conventional multi-camera surveillance system cannot guarantee that the trajectory images it generates automatically are correctly linked, the platform 150 of this embodiment also lets the user correct the trajectory image of a specific object, so that the trajectory image finally presented reflects the correct linking.

To support the platform's correction capability, the unit 140 must provide not only the linking result with the highest relevance score but also the other high-scoring linking results, arranged by relevance score, so that the user can correct the trajectory image of the specific object directly on the platform 150.

If the user makes no correction through the platform 150, the unit 140 takes the highest-scoring linking result as correct by default and continues with the subsequent linking work to produce the object's trajectory image. Conversely, if the user judges that the highest-scoring linking result produced by the unit 140 is wrong, the user can select another linking result through the platform 150, thereby correcting the linking error and producing the correct trajectory image of the specific object.
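The default-versus-corrected behavior above can be sketched as follows; the candidate structure, camera names, and scores are assumptions for illustration:

```python
def corrected_trajectory(hops, overrides=None):
    """Link one hop at a time; overrides maps hop index -> camera the user chose."""
    overrides = overrides or {}
    path = []
    for i, candidates in enumerate(hops):
        ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
        chosen = ranked[0]  # default: trust the highest relevance score
        if i in overrides:  # user pointed at a lower-ranked candidate for this hop
            chosen = next(c for c in ranked if c["camera"] == overrides[i])
        path.append(chosen["camera"])
    return path

# Ranked candidates for two consecutive hops of one tracked object.
hops = [
    [{"camera": "cam2", "score": 0.81}, {"camera": "cam3", "score": 0.12}],
    [{"camera": "cam5", "score": 0.64}, {"camera": "cam4", "score": 0.59}],
]
print(corrected_trajectory(hops))               # ['cam2', 'cam5'] (automatic result)
print(corrected_trajectory(hops, {1: "cam4"}))  # ['cam2', 'cam4'] (user-corrected)
```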

For example, according to the time of the event the user has scheduled for analysis on the platform 150, the unit 140 can retrieve from the database 130 the compressed results to be analyzed. It then analyzes the objects scattered across the frames, for example where each object appears and leaves, its path, its features, and even the date, time, and weather when the object appeared in the past. By analyzing this information, the unit 140 learns the probability of each object appearing under various conditions in the surveillance environment and the distribution of paths it may take; it can thereby obtain the correlation-analysis result for each object, link the same object across the frames captured by the different units 110, and obtain the complete trajectory of every object in the overall monitored environment.

The object-analysis results obtained by the units 110 can be linked with the correlation-analysis results obtained by the unit 140. When an object is to be shown in the monitored environment where it appears, together with object information such as its identifier, its position, and its extracted features, the unit 140 consolidates information such as the identifiers of the candidate objects, their appearance probabilities, and their locations in the monitored environment into a correlation-analysis result, and embeds that result in the corresponding video data. The unit 140 then feeds the video data with the embedded result back to the database 130 for storage, ready for the platform 150 to present as needed.

The unit 140 can link the same object across frames using object information such as the object's previous and current position, time, and features. This object information can be divided into three levels according to how readily the analysis data can be obtained, the order in which it becomes available, and its characteristics.

The first level of object information is the object's position and velocity. From where an object appears and disappears and its speed of travel at the time, the unit 140 can estimate where the object is likely to appear next. More specifically, using information such as the appearance and disappearance of objects in each frame and the user-configured spatial positions of the units 110, together with graph-theoretic inference, the unit 140 constructs a probability distribution function (PDF) for each object. The unit 140 then uses this probability function for relevance scoring, thereby linking the same object across the frames.

For example, in the monitoring environment of an MRT station, for a person who has just entered the station entrance, the location with the highest probability of appearance at the next moment should be a monitored area near the entrance, such as the position of the fare gates. Conversely, since this object has not yet passed through the fare gates, the probability of the object appearing on the waiting platform is zero. Accordingly, when a person appears at a given position, the multi-video content analysis unit 140 derives the probability distribution function of that person appearing next in the monitored area covered by each video capture and analysis unit 110. In other words, the multi-video content analysis unit 140 can use this probability distribution function to link the same object appearing across the monitoring frames and thereby obtain the trajectory information and historical images of that object.
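The first-level idea above can be sketched in a few lines: model the monitored zones as a graph whose edges carry transition probabilities, so that an object disappearing at one camera is assigned a distribution over the zones where it may reappear. This is only an illustrative sketch, not the patent's implementation; the zone names and probability values are assumptions.

```python
# Hypothetical adjacency of monitored zones in an MRT station; the values
# stand in for the probability distribution function (PDF) of the text.
transition_prob = {
    "entrance":  {"fare_gate": 0.8, "upper_passage": 0.2, "platform": 0.0},
    "fare_gate": {"platform": 0.7, "entrance": 0.3},
}

def next_location_pdf(current_zone):
    """Return the distribution over zones where the object may appear next,
    given the zone where it disappeared."""
    return transition_prob.get(current_zone, {})

pdf = next_location_pdf("entrance")
# The platform is unreachable without passing the fare gate, so its
# probability is zero, mirroring the example in the text.
assert pdf["platform"] == 0.0
best = max(pdf, key=pdf.get)  # most likely next zone
```

In a real system the transition table would be learned from the camera topology and observed traffic rather than hand-written.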

The second level of object information is object features, which the multi-video content analysis unit 140 can use to compare objects appearing at different times and in different monitored areas, and thereby link the same object across the photographic frames. In more detail, the multi-video content analysis unit 140 obtains candidate objects for the object to be linked according to the probability distribution function, and uses the analyzed traveling direction of the object to filter out the less likely candidates. The multi-video content analysis unit 140 then links objects by comparing their object features in the photographic frames (information such as color and shape); that is, when performing the correlation analysis, it considers both the probability distribution function and the object feature information, thereby obtaining better relevance scoring results.

Taking the MRT station monitoring environment as an example again, for a person leaving the station's fare gates, the multi-video content analysis unit 140 analyzes the speed and direction with which the person leaves the gates and the corresponding position in the monitoring image. The multi-video content analysis unit 140 then determines in which video capture and analysis units' 110 photographic frames the person may appear next, and compares the features (such as color) of the persons in those photographic frames to link the person's behavior trajectory across the frames.
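The combined second-level scoring can be illustrated as the product of a location probability and a feature similarity. The histogram-intersection similarity and the weights below are assumptions for illustration, not the patent's actual formula.

```python
# Illustrative sketch: a candidate's relevance score combines the
# location probability (from the PDF) with a color-feature similarity.

def color_similarity(hist_a, hist_b):
    """Histogram intersection of two normalized color histograms, in [0, 1]."""
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))

def relevance_score(location_prob, hist_query, hist_candidate):
    """Score a candidate by location probability times feature similarity."""
    return location_prob * color_similarity(hist_query, hist_candidate)

query  = [0.6, 0.3, 0.1]  # histogram of the tracked person (fabricated)
cand_a = [0.5, 0.4, 0.1]  # similarly colored candidate
cand_b = [0.1, 0.1, 0.8]  # differently colored candidate

score_a = relevance_score(0.7, query, cand_a)
score_b = relevance_score(0.7, query, cand_b)
# The similarly colored candidate outranks the differently colored one.
assert score_a > score_b
```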

The third level of object information is historical data. The multi-video content analysis unit 140 can compile statistics from past video data, analyze all possible movement trajectories of each object, calculate the distribution probability of the various trajectories, and use them to estimate the likely appearance positions of an analyzed object. In more detail, the multi-video content analysis unit 140 can perform data analysis and statistics on all historical data (past video data) of the monitoring environment and on the object information extracted from it, to obtain relatively reliable object statistics corresponding to this monitoring environment. The object statistics can be further classified according to conditions such as time and environmental parameters. In this way, the multi-video content analysis unit 140 can learn the historical behavior trajectories of objects in the monitoring environment under specific time and environmental conditions. That is, when performing the correlation analysis, it considers the probability distribution function, the object feature information, and the historical trajectory classification information together, thereby obtaining better relevance scoring results.

Taking the MRT station monitoring environment as an embodiment again, the multi-video content analysis unit 140 analyzes and compiles statistics on the past video data of a given station; after accumulating image sequences over a certain length of time, it learns the historical behavior trajectories of people around school commuting hours. For example, around school commuting hours, people wearing student uniforms will only pass through the entrances, cross the station's upper-level passageway, and then leave the station, without entering it to ride the MRT; whereas during rush hours, most people entering the station will pass through the station entrances, cross the fare gates, and then board the MRT.

From this, the multi-video content analysis unit 140 can obtain flow statistics of people entering and leaving the station, and thereby estimate the likely traveling direction of a person appearing in a monitored area. For example, around school commuting hours, if a person is wearing a particular student uniform, the probability that this person will leave the station through the upper-level passageway is higher than the probability of crossing the fare gates to ride the MRT.
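The third-level idea amounts to counting historical trajectories keyed by a condition such as time of day, then using the counts as probabilities. A minimal sketch, with fabricated example data:

```python
from collections import Counter, defaultdict

# Each record: (condition, trajectory through monitored zones). Fabricated.
history = [
    ("school_hours", ("entrance", "upper_passage", "exit")),
    ("school_hours", ("entrance", "upper_passage", "exit")),
    ("school_hours", ("entrance", "fare_gate", "platform")),
    ("rush_hour",    ("entrance", "fare_gate", "platform")),
]

# Count trajectories separately under each condition.
by_condition = defaultdict(Counter)
for condition, trajectory in history:
    by_condition[condition][trajectory] += 1

def trajectory_prob(condition, trajectory):
    """Empirical probability of a trajectory under a given condition."""
    counts = by_condition[condition]
    total = sum(counts.values())
    return counts[trajectory] / total if total else 0.0

p_passage = trajectory_prob("school_hours", ("entrance", "upper_passage", "exit"))
p_gate = trajectory_prob("school_hours", ("entrance", "fare_gate", "platform"))
# During school hours, leaving via the upper passageway is the more
# probable trajectory, mirroring the example in the text.
assert p_passage > p_gate
```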

In addition, when linking objects across the photographic frames of the video capture and analysis units 110, the multi-video content analysis unit 140 obtains a relevance-score topology graph of the object trajectory distribution. Each node in the relevance-score topology graph represents a candidate object being tracked; by linking together the candidate objects with the highest relevance scores, the behavior trajectory of the object can be obtained.
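The "link the highest-scoring nodes" step can be sketched greedily: at each time step, pick the candidate detection with the highest relevance score. A production system might instead optimize the whole path through the score graph (for example with dynamic programming); this sketch, with illustrative data, only shows the chaining idea.

```python
# candidates[t] maps (camera, object_id) -> relevance score at time step t.
candidates = [
    {("cam1", 123): 0.9},
    {("cam3", 123): 0.8, ("cam4", 147): 0.3, ("cam6", 169): 0.1},
    {("cam4", 123): 0.7, ("cam2", 150): 0.2},
]

def link_trajectory(candidates_per_step):
    """Chain the highest-relevance candidate of each time step into a track."""
    return [max(step, key=step.get) for step in candidates_per_step]

track = link_trajectory(candidates)
# The resulting trajectory follows object 123 across cameras 1, 3, and 4.
```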

FIG. 2 is a schematic diagram of the interface, on the user interaction platform, of the object-link correction method for photographic frames according to an embodiment of the present invention. The interface on the user interaction platform includes a monitoring environment window 210, a camera list window 220, at least one monitored object window 230, and a multi-camera photographic frame window 240. The object-link correction method for photographic frames can be implemented in software, and the interface on the user interaction platform can be implemented on platforms of various operating systems. However, the implementations of the object-link correction method and of the interface on the user interaction platform are not limited thereto.

The monitoring environment window 210 includes an environment diagram 211 for presenting the overall monitoring environment. The environment diagram 211 lets the user understand the geographic characteristics of the monitoring environment (for example, corridor locations and room layout), the distribution (that is, the positions) of the video capture and analysis units 110, and the behavior trajectory of a specific object in the monitoring environment. The user can set the environment diagram 211 by selecting one of a geographic map, an architectural plan, and a monitoring facility distribution map; alternatively, the user may select some or all of these maps, overlay them, and use the overlaid map as the environment diagram 211. In addition, the environment diagram 211 can also be presented using three-dimensional computer graphics (3D computer graphics).

The monitoring environment window 210 further includes a playback control unit 212 and a timeline control element 213. The playback control unit 212 is used to effectively control the playback (forward, backward) of video data when tracking, presenting, and correcting the after-the-fact historical trajectory of a specific object, while the timeline control element 213 can make video data start playing from a specific point in time. The playback control unit 212 can jointly control the playback of all video data presented in the user interface, so that the video data of each video capture and analysis unit 110 of the multi-camera monitoring system 100 is played synchronously on the interface of the user interaction platform 150.

The camera list window 220 is used to present the numbers of all cameras in the system (that is, the cameras used in the video capture and analysis units 110) and the relationship between those numbers and the cameras' positions in the monitoring environment. Each camera can be displayed and distinguished in a specific identifying manner, for example by assigning the cameras different colors. The content of the camera list window 220 is displayed in synchronization with the content of the monitoring environment window 210. When the user clicks on one of the cameras in the camera list window 220, the selected camera is presented with a highlight-color marker in the monitoring environment window 210 and the camera list window 220, while the unselected cameras are presented there with non-highlight-color markers.

The display frame 231 of the monitored object window 230 is used to present the photographic frame currently captured by the camera selected by the user. The monitored object window 230 can keep presenting the selected object even after that object has left the monitored area of the originally selected camera. The monitored object window 230 lets the user correct the object-linking results (that is, correct the behavior trajectory of the selected object), so as to fix objects across photographic frames that the multi-video content analysis unit 140 has linked incorrectly.

In more detail, through the monitored object window 230 the user can choose to present, simultaneously or in part, the previous and subsequent candidate objects of the object currently being tracked (presented via the previous related object list 232 and the subsequent related object list 233) and the previous and subsequent linking results (presented via the previous object linking result 234 and the subsequent linking result 235), so that the user can use the monitored object window 230 to correct the object-linking results and thereby avoid the multi-video content analysis unit 140 linking objects across photographic frames incorrectly. Meanwhile, so that the user can clearly understand the complete state of a candidate object, the previous and subsequent candidate objects can be presented as a played-back image sequence, as a snapshot of the complete object, or as an object trajectory image generated by superposition. Image sequence playback means playing the image sequence of the candidate object recorded within the camera's monitored range. A snapshot of the complete object means the complete monitoring image captured when the object is fully presented within the camera's monitored range, while an object trajectory image is a single, specially processed image generated by superimposing, through a specific image-processing method, the image sequence of the candidate object recorded within the camera's monitored range.

The multi-camera photographic frame window 240 is used to present the live photographic frames captured by several cameras selected by the user, or to play the historical video data of multiple selected cameras recorded in the video and analysis database 130. The multi-camera photographic frame window 240 can be composed of several video playback windows in a specific combination, or can present the photographic frames captured by multiple cameras in at least one floating window.

When the user monitors the monitoring environment in real time through the user interaction platform 150, the interface on the user interaction platform has the monitoring environment window 210, the camera list window 220, and the multi-camera photographic frame window 240. The multi-camera photographic frame window 240 presents several or even all of the live photographic frames, all of which can be obtained from the video and analysis database 130. The photographic frame of each camera can be an independent sub-window 241, and the size and position of each independent sub-window 241 can be set by the user. Alternatively, the photographic frame of each camera can be one of the frames in a split screen 242, with the arrangement set by the user.

FIG. 3A is a schematic diagram of the interface on the user interaction platform when the user selects a camera for real-time monitoring according to an embodiment of the present invention. When the user clicks on one of the cameras in the multi-camera photographic frame window 240, on one of the camera positions in the monitoring environment window 210, or in the camera list window 220, a specific camera monitoring window 250 is generated immediately; at the same time, the selected camera is marked in a highlight color (such as red) in the monitoring environment window 210 and the camera list window 220, while the other, unselected cameras are marked in a non-highlight color (such as dark gray). In addition, the multi-camera photographic frame window 240 is shrunk to the lower edge of the interface screen; alternatively, the multi-camera photographic frame window 240 is shrunk and placed at the edge of the interface screen, with the photographic frames of the other cameras presented in reduced form.

The photographic frame currently captured by the selected camera is presented in the display frame 231 of the specific camera monitoring window 250. In addition, the previous related object list 232 presents the photographic frames captured several seconds earlier by the cameras adjacent to the selected camera, while the subsequent related object list 233 presents the photographic frames currently captured by the cameras adjacent to the selected camera. Moreover, since the user has not yet clicked on an object to track, the previous object linking result 234 and the subsequent object linking result 235 do not need to present any content; they can be presented with dark-color markers, or simply not appear in the specific camera monitoring window 250.

For example, when the user clicks on camera No. 1, the specific camera monitoring window 250 corresponding to camera No. 1 is generated immediately. At the same time, camera No. 1 in the camera list window 220 is presented with a red marker, while the other cameras are presented with dark gray markers. Position A in the environment diagram is given a red border, while the other positions (positions B through H) are given semi-transparent dark gray borders. Since real-time monitoring does not require the playback control unit 212 and the timeline control element 213, they are presented in a semi-transparent manner. In addition, the multi-camera photographic frame window 240 is shrunk to the lower edge of the interface screen.

FIG. 3B is a detailed schematic diagram of the specific camera monitoring window according to an embodiment of the present invention. As described above, since the user has not yet clicked on an object to track, the previous object linking result 234 and the subsequent object linking result 235 do not need to present any content and can be presented with dark-color markers.

In addition to presenting the photographic frame currently captured by the selected camera in the display area 231, the multi-camera monitoring system 100 also marks the number of the selected camera on the specific camera monitoring window 250, for example marking "camera No. 1" on the upper edge of the specific camera monitoring window 250. The multi-camera monitoring system 100 can also mark the capture time on the display area 231.

Moreover, the multi-camera monitoring system 100 also extracts the object information in the photographic frame (including, but not limited to, the object's appearance position, object number, and object features) and marks this object information on the objects in the photographic frame of the display area 231. The appearance position of an object is marked with a box, and the object's information is described around the box (information such as the object number with the highest corresponding probability value, the object type, the color features, and the spatial information of where the object currently is in the monitoring environment, but not limited thereto).

For example, in FIG. 3B, the position where person A appears is marked with a box, and the object information is marked near the box; person A's object number, object type, and color feature are 123, person, and brown, respectively. Similarly, the position where person B appears is marked with a box, and the object information is marked near the box; person B's object number, object type, and color feature are 126, person, and red/gray, respectively.

In addition to presenting the photographic frames captured several seconds earlier by the cameras adjacent to the selected camera, the previous related object list 232 also has the capture time and camera number marked in it. Similarly, in addition to presenting the photographic frames currently captured by the cameras adjacent to the selected camera, the subsequent related object list 233 also has the capture time and camera number marked in it. In the previous related object list 232 and the subsequent related object list 233, the photographic frames are ordered either by camera number or by distance from the selected camera.

In the previous related object list 232 and the subsequent related object list 233 of FIG. 3B, the photographic frames are ordered by camera number, so the photographic frames captured by cameras No. 2, 3, 4, and 6, which are adjacent to camera No. 1, are presented in that order. In FIG. 3B, the capture time of the photographic frame currently captured by camera No. 1 and shown in the display area is 12:06:30; accordingly, the capture time of the frames from cameras No. 2, 3, 4, and 6 in the subsequent related object list 233 is also 12:06:30, while the capture time of the frames from cameras No. 2, 3, 4, and 6 in the previous related object list 232 is 12:06:20.

FIG. 4A is a schematic diagram of the interface on the user interaction platform when the user selects a specific object for real-time monitoring according to an embodiment of the present invention. When the user clicks on a specific object, the specific camera monitoring window 250 becomes the monitored object window 230. For example, after the user clicks on the object with object number "123", the interface on the user interaction platform 150 changes: the specific camera monitoring window 250 becomes the monitored object window 230 for the object with object number "123". Since the interface now lets the user navigate the photographic frames of the selected object at any point in time, the playback control unit 212 and the timeline control element 213 are no longer presented in a semi-transparent manner but in an enabled state.

In the monitoring environment window 210, position A is given a red border, while the other positions (positions B through H) are given semi-transparent dark gray borders. Meanwhile, the monitoring environment window 210 presents the historical behavior trajectory of the selected specific object. The historical behavior trajectory of the specific object can be obtained through analysis and filtering. In more detail, the object information belonging to object number "123" is first obtained from the video and analysis database 130. The historical behavior trajectory of this specific object is then assembled by linking the corresponding times with the spatial information of where the object was in the monitoring environment. In addition, to avoid repeated data retrieval and analysis, the user interaction platform 150 can cache the object information of all objects in the photographic frame currently being viewed.

FIG. 4B is a detailed schematic diagram of the monitored object window according to an embodiment of the present invention, showing the monitored object window 230 of FIG. 4A in detail. The display area 231 in the center of the monitored object window 230 presents the current photographic frame, and also marks the camera number corresponding to the current photographic frame, the capture time, and the object information of each object. For example, the current photographic frame in FIG. 4B was captured by camera No. 1, so the display area 231 bears the camera No. 1 marker.

In addition, the specific object clicked by the user can be marked with a highlight-color border (for example red), while the other objects are marked with non-highlight-color dashed borders. Here the previous related object list 232 is used to present photographic frames of candidate objects that appeared at an earlier time and were captured by different cameras; these candidate-object frames are arranged in the previous related object list 232 in descending order of object-relevance probability. The subsequent related object list 233 is used to present photographic frames of candidate objects appearing several seconds after the current time and captured by different cameras; these candidate-object frames are likewise arranged in the subsequent related object list 233 in descending order of object-relevance probability.

For example, in the previous related object list 232 of FIG. 4B, the candidate objects, ranked by their relevance probability to the object with object number "123" in the current photographic frame, are, in order: the object with object number "123" in the photographic frame of camera No. 3, the object with object number "147" in the photographic frame of camera No. 4, and the object with object number "169" in the photographic frame of camera No. 6. In this embodiment, the multi-video content analysis unit 140 determines that the object with object number "123" in camera No. 3's photographic frame and the object with object number "123" in the current photographic frame should be the same object; therefore the object with object number "123" in camera No. 3's frame is marked with a highlight-color border, while the object with object number "147" in camera No. 4's frame and the object with object number "169" in camera No. 6's frame are marked with non-highlight dashed borders.
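The ranking described above reduces to sorting the candidate list in descending order of relevance probability. A minimal sketch following the FIG. 4B example; the probability values are illustrative, not taken from the patent.

```python
# Candidate objects for the previous related object list; "prob" values
# are fabricated for illustration.
candidates = [
    {"camera": 4, "object_id": 147, "prob": 0.25},
    {"camera": 3, "object_id": 123, "prob": 0.60},
    {"camera": 6, "object_id": 169, "prob": 0.15},
]

# Sort in descending order of relevance probability.
ranked = sorted(candidates, key=lambda c: c["prob"], reverse=True)
# The top-ranked candidate (camera No. 3, object 123) is the one the
# analysis unit treats as the same object.
```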

In addition, a photographic frame presented in the previous related object list 232 can be a photographic frame in which the candidate object is finally fully presented by the camera, a partial photographic frame of the candidate object within the camera's monitored area, or a photographic frame in which the candidate object's behavior trajectory within the monitored area is superimposed. In short, the manner in which the previous related object list 232 presents the photographic frames of candidate objects is not intended to limit the present invention.

Moreover, if a photographic frame in the subsequent related object list 233 contains the same object as the selected specific object, that photographic frame is moved to the top position. For example, because the monitored area of camera No. 4 overlaps the monitored area of camera No. 1, the selected specific object (for example, the object with object number "123") appears simultaneously in the photographic frames captured by camera No. 1 and camera No. 4. Since the photographic frame captured by camera No. 4 contains the selected specific object (the object with object number "123"), that frame is placed in the first-priority position of the subsequent related object list 233. Meanwhile, the selected object in the photographic frame captured by camera No. 4 is also marked with a highlighted object border.

The previous object linking result 234 is used to present the photographic frames of the different cameras in which the selected object previously appeared, arranged in chronological order. A photographic frame presented in the previous object linking result 234 can be a photographic frame in which the object is finally fully presented by the camera, a partial photographic frame of the object within the camera's monitored area, or a photographic frame in which the object's behavior trajectory within the monitored area is superimposed. In short, the manner in which the previous object linking result 234 is presented is not intended to limit the present invention.

Since FIG. 4B shows the monitored object window 230 in the real-time monitoring case, and the future behavior trajectory of the selected specific object cannot be known during real-time monitoring, the subsequent object linking result 235 can remain presented with dark-color markers, or may simply not appear in the monitored object window 230.

If the user wishes to view earlier frames of the selected object, the user can drag the timeline control element 213, or operate the playback control unit 212, to view the selected object at a specified time in the monitored object window 230. In other words, the user interaction platform 150 also supports post-event review of a specific object.

When the user wants to review the frames of a specific time period through the user interaction platform 150, the interface presents the monitoring environment window 210, the camera list window 220, the playback control unit 212, the timeline control element 213, and the multi-camera frame window 240. The multi-camera frame window 240 presents several, or even all, of the frames for the user-specified period, and these frames can be retrieved from the video and analysis database 130. Each frame can be presented in an independent sub-window or as one tile of a split screen. As described above, the size and position of each independent sub-window can be set by the user, and the layout of the tiles in the split screen can also be user-defined. The user can use the playback control unit 212 and the timeline control element 213 to play or scrub all frames synchronously, thereby viewing the desired frames of the surveillance environment.

Referring to FIG. 5A and FIG. 5B, FIG. 5A is a schematic diagram of the interface on the user interaction platform when the user selects a camera for post-event review according to an embodiment of the present invention, and FIG. 5B is a detailed schematic diagram of the specific camera monitoring window corresponding to that selection.

When the user selects a specific camera from the multi-camera frame window 240, from the camera positions shown in the monitoring environment window 210, or from the camera list window 220, the specific camera monitoring window 250 is generated immediately. At the same time, the monitoring environment window 210 and the camera list window 220 mark the selected camera in a highlight color (e.g., red), while the unselected cameras are marked in a muted color (e.g., dark gray). In addition, the multi-camera frame window 240 shrinks to the lower edge of the interface; alternatively, it shrinks to an edge of the interface, with the frames of the other cameras presented as thumbnails.

The frame currently played by the selected camera is presented in the display area 231 of the specific camera monitoring window 250. The previous related object list 232 presents the frames played by the cameras adjacent to the selected camera several seconds earlier, and the subsequent related object list 233 presents the frames played by those adjacent cameras several seconds later. In addition, because the user has not yet selected an object to track, the previous object concatenation result 234 and the subsequent object concatenation result 235 need not present any content; they may be rendered in dark colors or omitted from the specific camera monitoring window 250.

For example, when the user selects camera No. 1, the specific camera monitoring window 250 corresponding to camera No. 1 is generated immediately. At the same time, camera No. 1 in the camera list window 220 is marked in red, while the remaining cameras are marked in dark gray. Position A in the environment diagram is given a red border, while the remaining positions (positions B through H) are given semi-transparent dark gray borders. In addition, the multi-camera frame window 240 shrinks to the lower edge of the interface.

FIG. 6A is a schematic diagram of the interface on the user interaction platform when the user selects a specific object for post-event review according to an embodiment of the present invention. When the user selects a specific object, the specific camera monitoring window 250 becomes the monitored object window 230. For example, after the user selects object "123", the interface on the user interaction platform 150 changes, and the specific camera monitoring window 250 becomes the monitored object window 230 for object "123".

In the monitoring environment window 210, position A is given a red border, while the remaining positions (positions B through H) are given semi-transparent dark gray borders. Meanwhile, the monitoring environment window 210 presents the historical trajectory of the selected object, where the dot marked in the monitoring environment window 210 indicates the object's position in the surveillance environment; the dot therefore moves according to the object's position at the current playback time, and may blink to highlight the selected object's position in the surveillance environment. Accordingly, the complete trajectory of the selected object is obtained from the object concatenation results.

FIG. 6B is a detailed schematic diagram of the monitored object window according to an embodiment of the present invention, showing the monitored object window 230 of FIG. 6A in detail. The display area 231 at the center of the monitored object window 230 presents the current frame, annotated with the camera number corresponding to the frame, the capture time, and the object information of each object. For example, the current frame in FIG. 6B was captured by camera No. 1, so the display area 231 carries the camera No. 1 label.

The previous related object list 232 presents frames of candidate objects that appeared at earlier times and were captured by different cameras; these frames are ordered in the previous related object list 232 by their object relevance scores.

In addition, each frame presented in the previous related object list 232 may be the full frame in which the candidate object was last completely visible, a partial frame of the candidate object within the camera's monitoring area, or a frame overlaid with the candidate object's trajectory within the monitoring area. In any case, the manner in which the previous related object list 232 presents candidate-object frames is not intended to limit the invention.

The subsequent related object list 233 presents frames of candidate objects, captured by different cameras, that appear several seconds after the current playback time; these frames are ordered in the subsequent related object list 233 by their object relevance scores.
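The ordering rule behind both related object lists, namely sorting candidate frames by their object relevance score before display, can be sketched as follows. This is a minimal illustration; the `Candidate` record and its field names are assumptions for the example, not structures defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    camera_id: int      # camera that captured the frame
    object_id: str      # object number assigned by the analysis unit
    timestamp: str      # capture time of the frame
    relevance: float    # relevance score against the tracked object

def order_related_list(candidates):
    """Sort candidate frames for display in a related object list:
    highest relevance first, as lists 232 and 233 require."""
    return sorted(candidates, key=lambda c: c.relevance, reverse=True)

candidates = [
    Candidate(4, "123", "12:06:40", 0.91),
    Candidate(7, "125", "12:06:40", 0.55),
    Candidate(12, "126", "12:06:40", 0.72),
]
ranked = order_related_list(candidates)
# The frame most likely to contain the tracked object comes first.
```

Under this rule, the overlapping-view frame from camera No. 4 (score 0.91 in the example) would occupy the first-priority position, matching the behavior described for list 233.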

The previous object concatenation result 234 presents, in chronological order, the frames from the monitoring areas of the different cameras in which the selected object previously appeared. Each such frame may be the full frame in which the candidate object was last completely visible, a partial frame of the candidate object within the camera's monitoring area, or a frame overlaid with the candidate object's trajectory within the monitoring area.

The subsequent object concatenation result 235 presents, in chronological order, the frames from the monitoring areas of the different cameras in which the selected object appears after the current time. Each such frame may be the full frame in which the candidate object was last completely visible, a partial frame of the candidate object within the camera's monitoring area, or a frame overlaid with the candidate object's trajectory within the monitoring area.

FIG. 7 is a detailed schematic diagram of the monitored object window when the multi-camera monitoring system has concatenated objects incorrectly, according to an embodiment of the present invention. In FIG. 7, the multi-camera monitoring system 100 has made an error while concatenating object "123". Whether during real-time monitoring or post-event review, the multi-video content analysis unit 140 may for some reason concatenate objects incorrectly, causing different objects to be identified as the same object, so that the user interaction platform 150 presents trajectory information and historical footage that are smooth but wrong.

While viewing the trajectory information and historical footage of the selected object, the user may discover that objects which are in fact different have been labeled with the same object number. In the embodiment of FIG. 7, object "123" is the specific object the user has chosen to track. In the monitored object window 230, the display area 231 shows the frame captured by camera No. 1 at 12:06:30, and because object "123" is selected, the multi-video content analysis unit 140 concatenates the appearances of object "123".

In this embodiment, object "123" is actually person A, but the multi-video content analysis unit 140 has mistaken person B for object "123", producing an erroneous concatenation result. Consequently, the frames presented in the subsequent object concatenation result 235 do not follow person A's true trajectory.

At this point, the user need only select, from the subsequent related object list 233, the frame containing what the user recognizes as the correct object. In this embodiment, the user selects the frame captured by camera No. 12 at 12:06:40 containing object "126". The display area 231 then shows the frame the user selected from the subsequent object list. After the user clicks object "126" (who is actually person A), the interface shows a confirmation message asking whether to apply the correction. Once the user confirms, the interface sends the correction data to the multi-video content analysis unit 140, which corrects the object information accordingly: it re-matches objects "123" and "126" in all frames after 12:06:40 and revises the concatenation result so that person A is consistently labeled "123" and person B is consistently labeled "126". In addition to notifying the user interaction platform 150, the system stores the corrected concatenation result in the video and analysis database 130.
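The re-matching step described above, swapping the labels of objects "123" and "126" in every frame after the correction time, can be sketched as follows. The record layout (timestamp, camera, label) and the function name are assumptions for illustration; the patent does not prescribe a data format.

```python
def apply_label_correction(detections, label_a, label_b, correction_time):
    """After the user confirms a correction at `correction_time`, swap the
    two object numbers in every later detection, so that each person keeps
    a single consistent label as analysis unit 140 is described to do."""
    corrected = []
    for t, camera_id, label in detections:
        if t > correction_time:
            if label == label_a:
                label = label_b
            elif label == label_b:
                label = label_a
        corrected.append((t, camera_id, label))
    return corrected

# Timestamps as seconds since midnight for simplicity; 43600 == 12:06:40.
detections = [
    (43590, 1, "123"),   # before the correction: person A, correctly "123"
    (43600, 12, "123"),  # after: person B was mislabeled "123"
    (43610, 12, "126"),  # after: person A was mislabeled "126"
]
fixed = apply_label_correction(detections, "123", "126", 43595)
# fixed == [(43590, 1, "123"), (43600, 12, "126"), (43610, 12, "123")]
```

Detections before the correction time are left untouched, which mirrors the description: only frames after 12:06:40 are re-matched and relabeled.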

When the concatenation result is being corrected, the frame the user selected is not the frame with the highest probability value, so the user interaction platform 150 notifies the multi-video content analysis unit 140 that the tracked object should appear in the frame captured by camera No. 12 at 12:06:40 containing object "126". The multi-video content analysis unit 140 then compares the tracked object's features and related object information against the object information in the user-selected frame, and marks a suggested object to concatenate with a red dashed box. If the user judges the suggested object to be correct, the user need only click the red dashed box; no further confirmation is required, and the user interaction platform 150 sends the correction data to the analysis unit 140. If the user judges the suggested object to be wrong, the user can click an object marked by another dashed box. After the user clicks such an object, the interface asks the user once more whether to confirm the correction, and only after the user confirms does the user interaction platform 150 send the correction data to the multi-video content analysis unit 140.
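The suggestion step, comparing the tracked object's features against each object in the user-selected frame and proposing the best match, could look like the following sketch. The feature vectors and the choice of cosine similarity are assumptions for illustration only; the patent does not fix a particular matching metric.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def suggest_object(tracked_features, frame_objects):
    """Return the object id in the frame whose features best match the
    tracked object; the UI would mark this one with the red dashed box."""
    return max(frame_objects,
               key=lambda oid: cosine_similarity(tracked_features,
                                                 frame_objects[oid]))

tracked = [0.9, 0.1, 0.3]           # hypothetical appearance features of person A
frame_objects = {
    "126": [0.88, 0.12, 0.28],      # close match, so this one is suggested
    "127": [0.10, 0.90, 0.50],
}
best = suggest_object(tracked, frame_objects)
```

The objects other than `best` would correspond to the other dashed boxes the user may click instead, which then triggers the extra confirmation dialog described above.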

Having described in detail the interface used by the object concatenation correction method for photographic frames provided by the embodiments of the present invention, the steps of the method are now described with reference to a flowchart. Referring to FIG. 8, FIG. 8 is a flowchart of the object concatenation correction method for photographic frames according to an embodiment of the present invention. First, in step S800, the frames of each camera in the multi-camera monitoring system are acquired. Next, in step S801, each frame is analyzed to obtain the information of each object in the frame, where the object information includes the object number, object features, and object type.

Then, in step S802, a user interaction platform is provided so that the user can select, through the platform, the specific object to be tracked. In step S803, the multi-camera monitoring system computes the relevance between the specific object and the objects in each camera's frames captured before the capture time of the current frame. In step S804, the multi-camera monitoring system computes the relevance between the specific object and the objects in each camera's frames captured after the capture time of the current frame.

In step S805, the multi-camera monitoring system automatically concatenates the appearances of the specific object across the frames to obtain the trajectory information and historical footage of the specific object, where the automatic concatenation links the specific object with the object having the highest relevance score.
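Step S805's automatic concatenation, always linking the tracked object to the candidate with the highest relevance score, can be sketched as a greedy chain over successive time windows. The data layout is an assumption for the example; a real system would also carry frames and timestamps.

```python
def auto_concatenate(windows):
    """Greedily build a trajectory: in each time window, link the
    candidate (camera_id, object_id, score) with the highest score."""
    trajectory = []
    for candidates in windows:
        best = max(candidates, key=lambda c: c[2])
        trajectory.append(best[:2])   # keep (camera_id, object_id)
    return trajectory

windows = [
    [(1, "123", 0.95)],
    [(4, "123", 0.90), (7, "125", 0.40)],
    [(12, "123", 0.85), (12, "126", 0.70)],  # a wrongly high score here is
]                                            # what the correction UI fixes
path = auto_concatenate(windows)
# path == [(1, "123"), (4, "123"), (12, "123")]
```

The last window illustrates the failure mode of FIG. 7: the greedy rule picks the highest-scoring but wrong candidate, which is why steps S810 through S814 let the user override it with a lower-scoring frame.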

In step S806, based on the relevance between the specific object and each camera's frames captured before the capture time of the current frame, the candidate frames are listed in order in the previous related object list of the user interaction platform. In step S807, based on the relevance between the specific object and each camera's frames captured after the capture time of the current frame, the candidate frames are listed in order in the subsequent related object list of the user interaction platform.

In step S808, from the automatically concatenated trajectory information and historical footage of the specific object, the frames that contain the specific object, were captured before the capture time of the current frame, and were captured by cameras other than the current one are arranged in order in the previous object concatenation result of the user interaction platform. In step S809, the frames that contain the specific object, were captured after the capture time of the current frame, and were captured by cameras other than the current one are arranged in order in the subsequent object concatenation result of the user interaction platform.

If the user finds that the automatic concatenation result is wrong, the user corrects it by selecting the correct object in a frame of the subsequent object list whose relevance is not the highest. Accordingly, in step S810, it is determined whether a frame in the subsequent object list whose relevance is not the highest has been selected. If no such frame has been selected, the automatic concatenation result is taken as correct, and the object concatenation correction method ends.

If a frame in the subsequent object list whose relevance is not the highest has been selected, then in step S811 the selected frame is displayed as the current frame, and after the user clicks an object in the current frame, the user is asked whether to correct the automatic concatenation result. If the user declines, the object concatenation correction method ends. If the user confirms the correction, then in step S812 the user interaction platform generates correction data from the object in the selected frame and sends it to the multi-camera monitoring system, which produces a suggested object concatenation correction result.

Then, in step S813, the user interaction platform asks the user whether to accept the suggested object concatenation correction result as the correct concatenation result for the specific object. If the user accepts it, then in step S814 the suggested correction result is taken as the correct concatenation result for the specific object, and the object concatenation correction method ends. Otherwise, the method returns to step S810.

In summary, the multi-camera monitoring system provided by the embodiments of the present invention supports an object concatenation correction method for photographic frames, and provides a user interaction platform through which the user can execute the method to correct the errors that may occur when a conventional multi-camera monitoring system concatenates objects automatically.

Although the preferred embodiments of the present invention have been disclosed above, the present invention is not limited to them; anyone of ordinary skill in the art may make minor changes and adjustments without departing from the scope of the disclosure. The scope of protection of the present invention is therefore defined by the appended claims.

100 ... Multi-camera monitoring system
110 ... Video capture and analysis unit
120 ... Video analysis data aggregation unit
130 ... Video and analysis database
140 ... Multi-video content analysis unit
150 ... User interaction platform
210 ... Monitoring environment window
211 ... Environment diagram
212 ... Playback control unit
213 ... Timeline control element
220 ... Camera list window
230 ... Monitored object window
231 ... Display area
232 ... Previous related object list
233 ... Subsequent related object list
234 ... Previous object concatenation result
235 ... Subsequent object concatenation result
240 ... Multi-camera frame window
241 ... Independent sub-window
242 ... Split screen
250 ... Specific camera monitoring window
S800~S814 ... Steps of the method

FIG. 1 is a block diagram of a multi-camera security monitoring system according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of the interface, on the user interaction platform, of the object concatenation correction method for photographic frames according to an embodiment of the present invention.
FIG. 3A is a schematic diagram of the interface on the user interaction platform when the user selects a camera for real-time monitoring according to an embodiment of the present invention.
FIG. 3B is a detailed schematic diagram of the specific camera monitoring window according to an embodiment of the present invention.
FIG. 4A is a schematic diagram of the interface on the user interaction platform when the user selects a specific object for real-time monitoring according to an embodiment of the present invention.
FIG. 4B is a detailed schematic diagram of the monitored object window according to an embodiment of the present invention.
FIG. 5A is a schematic diagram of the interface on the user interaction platform when the user selects a camera for post-event review according to an embodiment of the present invention.
FIG. 5B is a detailed schematic diagram of the specific camera monitoring window according to an embodiment of the present invention.
FIG. 6A is a schematic diagram of the interface on the user interaction platform when the user selects a specific object for post-event review according to an embodiment of the present invention.
FIG. 6B is a detailed schematic diagram of the monitored object window according to an embodiment of the present invention.
FIG. 7 is a detailed schematic diagram of the monitored object window when the multi-camera monitoring system has concatenated objects incorrectly, according to an embodiment of the present invention.
FIG. 8 is a flowchart of the object concatenation correction method for photographic frames according to an embodiment of the present invention.


Claims (8)

1. A method for serially concatenating photographic frames to form an object trajectory, comprising: selecting, through a user interaction platform, an object to be tracked; identifying, within a recording period, a first plurality of concatenated frames captured by a multi-camera monitoring system, wherein each concatenated frame has a degree of relevance to the tracked object; linking, according to the degree of relevance to the tracked object, a second plurality of concatenated frames from the first plurality of concatenated frames to produce a first object concatenation result; and if an erroneous concatenated frame is found in the first object concatenation result, selecting, through the user interaction platform, a concatenated frame to replace the erroneous concatenated frame to produce a second concatenation result, and updating, according to the selected concatenated frame, the concatenated frames that follow the erroneous concatenated frame in the first object concatenation result, wherein the relevance of the selected concatenated frame to the tracked object is lower than the relevance of the erroneous concatenated frame to the tracked object.

2. The method of claim 1, wherein the erroneous concatenated frame and the selected concatenated frame are captured by different cameras.
3. The method of claim 1, wherein the first plurality of concatenated frames comprises a previous concatenated frame and a subsequent concatenated frame, and wherein the user interaction platform comprises a monitoring window for presenting the currently monitored object, the previous concatenated frame, and the subsequent concatenated frame.

4. The method of claim 1, further comprising: searching each frame, according to the tracked object, for objects that may match, so as to analyze each frame; and computing, by a multi-video content analysis unit of the multi-camera monitoring system, the relevance between the objects in the current frame and the specific object.

5. The method of claim 1, wherein the first concatenation result and the selected concatenated frame are viewed simultaneously through the user platform to determine the erroneous concatenated frame and the selected concatenated frame.
The method of claim 3, wherein the user interaction platform further comprises: a monitoring environment window including an environment schematic, wherein the environment schematic presents the overall monitored environment of the multi-camera monitoring system, the monitored environment comprising the geographic characteristics of the environment, the distribution of the cameras, and the trajectory of a specific object within the monitored environment; a camera list window for presenting each camera's number and the relationship among the cameras' positions in the monitored environment; and a multi-camera footage window for presenting the live footage captured by the several cameras selected by the user, or for playing back the historical video of the selected cameras recorded in a database.

The method of claim 6, wherein the monitoring environment window further comprises: a playback control unit for effectively controlling the playback of video data during the tracking, presentation, and correction of the specific object's historical trajectory; and a timeline control element for controlling playback of the video data forward and backward from a specific point in time.

The method of claim 6, wherein the environment schematic is selected from among a geographic map, an architectural diagram, and a monitoring-facility distribution map, or is an overlay of some or all of these maps, or is an overlay of at least one of these maps rendered through three-dimensional computer graphics.
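The playback control unit and timeline control element recited in these claims can be illustrated with a small sketch: a controller that maps a global timeline position to per-camera seek offsets, so that footage from several cameras can be played forward or backward from a chosen point in time. The `Recording` structure and all names below are hypothetical; the patent does not prescribe any implementation.

```python
from dataclasses import dataclass

@dataclass
class Recording:
    """A camera's archived video: stream start time and duration, in seconds."""
    camera_id: str
    start: float
    duration: float

class TimelineController:
    """Maps a global timeline position to per-camera seek offsets for synchronized playback."""

    def __init__(self, recordings: list):
        self.recordings = recordings
        self.position = 0.0  # current global timeline time, seconds

    def seek(self, t: float) -> dict:
        """Jump the timeline to time t. Returns each camera's offset into its own
        recording, or None if that camera has no footage at time t."""
        self.position = t
        offsets = {}
        for r in self.recordings:
            if r.start <= t < r.start + r.duration:
                offsets[r.camera_id] = t - r.start
            else:
                offsets[r.camera_id] = None
        return offsets

    def step(self, dt: float) -> dict:
        """Play forward (dt > 0) or backward (dt < 0) from the current position."""
        return self.seek(self.position + dt)
```

A user interface like the one claimed would drive `seek` from the timeline widget and `step` from the playback controls, feeding the returned offsets to each camera's video player.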
TW100149787A 2011-12-30 2011-12-30 A method for tracing an object by linking video sequences TWI601425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100149787A TWI601425B (en) 2011-12-30 2011-12-30 A method for tracing an object by linking video sequences


Publications (2)

Publication Number Publication Date
TW201328358A TW201328358A (en) 2013-07-01
TWI601425B true TWI601425B (en) 2017-10-01

Family

ID=49225389

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100149787A TWI601425B (en) 2011-12-30 2011-12-30 A method for tracing an object by linking video sequences

Country Status (1)

Country Link
TW (1) TWI601425B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI571835B (en) * 2015-07-17 2017-02-21 Jin-Chang Yang Instantly record the transmission of the field command system
TWI563844B (en) * 2015-07-24 2016-12-21 Vivotek Inc Setting method for a surveillance system, setting device thereof and computer readable medium
TWI601423B (en) 2016-04-08 2017-10-01 晶睿通訊股份有限公司 Image capture system and sychronication method thereof
TWI589158B (en) * 2016-06-07 2017-06-21 威聯通科技股份有限公司 Storage system of original frame of monitor data and storage method thereof
TWI647956B (en) * 2017-04-11 2019-01-11 大眾電腦股份有限公司 Object tracking system and method there of
TWI650018B (en) * 2017-07-18 2019-02-01 晶睿通訊股份有限公司 Method for providing user interface for scene stitching of scene and electronic device thereof
TWI760812B (en) * 2020-08-10 2022-04-11 威聯通科技股份有限公司 Method and system for object-space correspondence analysis across sensors

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050207622A1 (en) * 2004-03-16 2005-09-22 Haupt Gordon T Interactive system for recognition analysis of multiple streams of video
US20100045799A1 (en) * 2005-02-04 2010-02-25 Bangjun Lei Classifying an Object in a Video Frame
US20100157049A1 (en) * 2005-04-03 2010-06-24 Igal Dvir Apparatus And Methods For The Semi-Automatic Tracking And Examining Of An Object Or An Event In A Monitored Site



Similar Documents

Publication Publication Date Title
TWI601425B (en) A method for tracing an object by linking video sequences
US11704936B2 (en) Object tracking and best shot detection system
RU2498404C2 (en) Method and apparatus for generating event registration entry
US9141184B2 (en) Person detection system
US10515471B2 (en) Apparatus and method for generating best-view image centered on object of interest in multiple camera images
Lee et al. Hierarchical abnormal event detection by real time and semi-real time multi-tasking video surveillance system
US9092699B2 (en) Method for searching for objects in video data received from a fixed camera
US8724970B2 (en) Method and apparatus to search video data for an object of interest
WO2013069605A1 (en) Similar image search system
CN105279480A (en) Method of video analysis
US11676389B2 (en) Forensic video exploitation and analysis tools
DE112017003800T5 (en) MONITORING SUPPORT DEVICE, MONITORING SUPPORT SYSTEM AND MONITORING SUPPORT PROCESS
DE102006053286A1 (en) Method for detecting movement-sensitive image areas, apparatus and computer program for carrying out the method
US20230093631A1 (en) Video search device and network surveillance camera system including same
CN113194291B (en) Video monitoring system and method based on big data
WO2021017496A1 (en) Directing method and apparatus and computer-readable storage medium
JP4728795B2 (en) Person object determination apparatus and person object determination program
JP6618349B2 (en) Video search system
JP2007312271A (en) Surveillance system
CN103260004A (en) Method for rectifying object tandem of photographed images and its multi-camera monitoring system
JP2006301995A (en) Person search device and person search method
JP6820489B2 (en) Image processing device and image processing program
JP2017182295A (en) Image processor
CN110852172A (en) Method for expanding crowd counting data set based on Cycle Gan picture collage and enhancement
Lashkia et al. A team play analysis support system for soccer games