TWI536838B - Video playback method and apparatus - Google Patents


Info

Publication number
TWI536838B
TWI536838B
Authority
TW
Taiwan
Prior art keywords
video
path
object path
length
time
Prior art date
Application number
TW103136646A
Other languages
Chinese (zh)
Other versions
TW201616862A (en)
Inventor
陳俊諺
Original Assignee
威聯通科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 威聯通科技股份有限公司
Priority to TW103136646A (TWI536838B)
Priority to US14/689,038 (US9959903B2)
Publication of TW201616862A
Application granted
Publication of TWI536838B


Landscapes

  • Studio Circuits (AREA)

Description

Video playback method and apparatus

The present invention relates to a video device, and more particularly to a video playback method and apparatus.

A video surveillance system can capture a scene with a camera module to obtain an original video, and store the original video on a hard disk. For a conventional video playback system, playback is a commonly used function: it allows the user to view the content of the original video stored on the hard disk. The user can watch the content of the original video in a specific time section to search for an object of interest or an abnormal event. However, the original video is often extremely long; its length may be hours or even days. The user can play the stored original video at a preset constant speed to speed up viewing. Although a conventional video playback system can thereby shorten the playback time, it cannot display all the objects of the original video within a playback length preset by the user.

The invention provides a video playback method and apparatus that can shorten the video playback time and display all objects of interest within a preset playback length.

An embodiment of the invention provides a video playback method, including: providing an original video, wherein the original video is obtained by capturing a scene with a camera module; providing a playback length to determine the length of a composite video, wherein the length of the composite video is less than the length of the original video; extracting at least one object path from the original video; and selectively adjusting the at least one object path to composite the at least one object path into the composite video.

In an embodiment of the invention, the video playback method further includes: playing the composite video.

In an embodiment of the invention, the step of extracting the at least one object path from the original video includes: performing an object detection and background extraction process to extract at least one object and at least one background image from the original video; creating the object path according to the relationship between the at least one object of a current frame and the at least one object of a previous frame in the original video; and storing the background image and the object path in a storage device.

In an embodiment of the invention, the step of creating the at least one object path includes: if an object of the current frame has no parent object in the previous frame, or shares a parent object with other objects in the current frame, or has multiple parent objects, creating a new object path, wherein the object of the current frame is the first object of the new object path; if the object of the current frame has a unique parent object and is the only child object of that parent object, adding the object of the current frame to the existing object path to which the parent object belongs; and ending the at least one object path when its last object has no child object, or has more than one child object, or shares a child object with another object path.
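The three path-creation rules above can be sketched as follows. This is a minimal illustration that assumes per-frame object ids and parent links (objects overlapping in the previous frame) are already available; the data layout and names are hypothetical, not the patent's.

```python
def build_paths(frames, parents):
    """frames: list of lists of object ids, one list per frame.
    parents: dict mapping (frame_index, object_id) -> set of parent ids
    in the previous frame. Returns object paths as lists of
    (frame_index, object_id)."""
    paths = []
    open_path = {}                 # last object of each open path -> path index
    for t, objs in enumerate(frames):
        children = {}              # parent id -> child ids in the current frame
        for o in objs:
            for p in parents.get((t, o), set()):
                children.setdefault(p, []).append(o)
        next_open = {}
        for o in objs:
            ps = parents.get((t, o), set())
            p = next(iter(ps)) if len(ps) == 1 else None
            only_child = p is not None and len(children[p]) == 1
            if only_child and (t - 1, p) in open_path:
                idx = open_path[(t - 1, p)]     # unique parent, unique child:
                paths[idx].append((t, o))       # extend the parent's path
            else:
                paths.append([(t, o)])          # no, shared, or multiple parents:
                idx = len(paths) - 1            # start a new path
            next_open[(t, o)] = idx
        open_path = next_open      # a path whose last object got no child ends here
    return paths
```

A path also ends implicitly when its last object has several children or shares a child with another path, because every such child starts a new path.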

In an embodiment of the invention, the video playback method further includes: providing a start time T_b and an end time T_e in the original video; and selecting a candidate object path as the at least one object path if T_b ≤ P_t ≤ T_e, or T_b ≤ P_t + P_l ≤ T_e, or P_t ≤ T_b and T_e ≤ P_t + P_l, where P_t and P_l are respectively the occurrence time and the length of the candidate object path in the storage device.
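The selection condition above can be read as an interval-overlap test: the path starts inside the query interval, ends inside it, or spans it entirely. A small sketch (names assumed):

```python
def path_selected(p_t, p_l, t_b, t_e):
    """Select a candidate path with occurrence time p_t and length p_l
    when it overlaps the query interval [t_b, t_e]."""
    return (t_b <= p_t <= t_e or            # path starts inside the interval
            t_b <= p_t + p_l <= t_e or      # path ends inside the interval
            (p_t <= t_b and t_e <= p_t + p_l))  # path spans the whole interval
```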

In an embodiment of the invention, the video playback method further includes: calculating a suggested length according to how crowded the at least one object path is at different pixels of a scene; and presenting the suggested length to a user to help the user decide the playback length.

In an embodiment of the invention, the video playback method further includes: generating a crowdedness map according to the original video to describe the crowdedness values of the object path at different pixels; calculating F = C_m/C_th, where C_m denotes a value associated with the crowdedness values and C_th denotes a threshold; and calculating T_p = F/R_f, where T_p denotes the suggested length and R_f denotes the frame rate of the composite video.
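The two equations in this paragraph were lost in extraction; a plausible reading consistent with the surrounding definitions is that C_m/C_th gives a number of frames and dividing by R_f converts frames to seconds. A sketch under that assumption (the aggregation of C_m follows the mean-of-top-fraction variants described later):

```python
def suggested_length(crowdedness_values, c_th, frame_rate, top_fraction=1.0):
    """crowdedness_values: per-pixel crowdedness values of the map.
    c_th: acceptable crowdedness threshold (assumed 0 < c_th < C_m).
    Assumed relations: frames needed F = C_m / C_th, and T_p = F / R_f."""
    vals = sorted(crowdedness_values, reverse=True)
    k = max(1, int(len(vals) * top_fraction))
    c_m = sum(vals[:k]) / k          # associated value C_m
    frames_needed = c_m / c_th       # assumed: F = C_m / C_th
    return frames_needed / frame_rate  # assumed: T_p = F / R_f
```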

In an embodiment of the invention, the video playback method further includes: providing an object attribute; and filtering the at least one object path according to the object attribute.
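A minimal illustration of attribute-based filtering; the dictionary layout is an assumption, and the attribute names follow the list given later (size, color, texture, and so on):

```python
def filter_paths(paths, wanted):
    """Keep only the object paths whose attributes match every
    requested key/value pair in `wanted`."""
    return [p for p in paths
            if all(p["attributes"].get(k) == v for k, v in wanted.items())]
```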

In an embodiment of the invention, the step of selectively adjusting the at least one object path to composite the at least one object path into the composite video includes: rearranging the object path in the composite video according to the order in which the object path appears in the original video; and compositing the object path and the background image into the composite video.

In an embodiment of the invention, the step of rearranging the object path in the composite video includes: selectively adjusting the object path to obtain at least one adjusted object path; initializing the temporal position of the adjusted object path in the composite video according to the order in which the object path appears in the original video; and adjusting the temporal position of the adjusted object path in the composite video according to how the adjusted object paths overlap in the composite video.

In an embodiment of the invention, the step of obtaining the at least one adjusted object path includes: if the object path has a parent object path, merging the object path with the parent object path as the adjusted object path; if the time length of the object path is greater than a threshold length, speeding up the playback of the object path by a speedup factor to shorten its time length, as the adjusted object path, wherein the threshold length is less than or equal to the playback length and the speedup factor is a real number; and if the time length of the object path is greater than the threshold length, splitting the object path into a plurality of sub-paths as the adjusted object path.

In an embodiment of the invention, the video playback method further includes: calculating a first factor S_g = P_l/P_th, where P_l is the time length of the object path and P_th is the threshold length; if the first factor S_g is greater than a maximum speedup value S_max, setting S_g to S_max, where S_max is a real number greater than 1 and less than 4; if S_g is less than 1, setting S_g to 1; if the representative crowdedness value C_p of the object path in the crowdedness map is greater than or equal to an upper crowdedness bound C_U, setting a second factor S_c to S_max, where C_U is a real number; if C_p is less than or equal to a lower crowdedness bound C_L, setting S_c to 1, where C_L is a real number and C_L is less than C_U; if C_p is greater than C_L and less than C_U, setting S_c to [(C_p - C_L)/(C_U - C_L)]*(S_max - 1) + 1; and taking the larger of the first factor S_g and the second factor S_c as the speedup factor S_p.
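The two factors and their combination can be written directly from the formulas above; the function and parameter names are assumed:

```python
def speedup_factor(p_l, p_th, c_p, c_l, c_u, s_max):
    """First factor S_g from the path length, second factor S_c from the
    representative crowdedness value; the larger of the two is S_p."""
    s_g = min(max(p_l / p_th, 1.0), s_max)   # S_g clamped to [1, S_max]
    if c_p >= c_u:
        s_c = s_max                          # very crowded: maximum speedup
    elif c_p <= c_l:
        s_c = 1.0                            # sparse: no speedup
    else:                                    # linear interpolation in between
        s_c = (c_p - c_l) / (c_u - c_l) * (s_max - 1.0) + 1.0
    return max(s_g, s_c)
```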

In an embodiment of the invention, the step of splitting the at least one object path into a plurality of sub-paths includes: adjusting the frame shift of the first sub-path relative to the other sub-paths to reduce the overlapping area of the sub-paths.
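One simple way to realize the frame-shift adjustment is to search candidate shifts for the one with the fewest collisions. Counting colliding bounding boxes here stands in for the per-pixel overlapping area of the text, which is an assumption of this sketch:

```python
def boxes_intersect(a, b):
    """Axis-aligned boxes given as (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def best_frame_shift(path_a, path_b, max_shift):
    """Pick the frame shift of path_a relative to path_b that minimizes
    the number of frames in which their boxes collide.
    Paths are lists of (frame_index, box)."""
    def collisions(shift):
        return sum(1 for t, box in path_a
                   for u, other in path_b
                   if t + shift == u and boxes_intersect(box, other))
    return min(range(-max_shift, max_shift + 1), key=collisions)
```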

In an embodiment of the invention, the step of initializing the temporal position of the adjusted object path in the composite video includes: if there is a temporal gap between a first object path and a second object path among the adjusted object paths, advancing the later of the two while keeping its time later than that of the earlier one; and multiplying the time shifts of the adjusted object paths by an adjustment value so that the time range of the adjusted object paths falls within the playback length.
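A sketch of this initialization step, assuming each path is given as a (start, length) pair sorted by start time. The specific scaling rule is illustrative; the text only requires that the adjusted paths end up inside the playback length:

```python
def initialize_offsets(paths, playback_len):
    """paths: (start, length) pairs sorted by start time in the original
    video. Closes temporal gaps while preserving order, then scales the
    offsets by one adjustment value so every path fits the playback length."""
    offsets = []
    prev_start = prev_end = None
    for start, length in paths:
        s = start
        if prev_end is not None and s > prev_end:
            s = prev_end                   # gap: pull the later path earlier...
        if prev_start is not None and s <= prev_start:
            s = prev_start + 1             # ...but keep it after the earlier one
        offsets.append(s)
        prev_start, prev_end = s, s + length
    scale = 1.0                            # one adjustment value for all offsets
    for o, (_, length) in zip(offsets, paths):
        if o > 0 and o * scale + length > playback_len:
            scale = min(scale, (playback_len - length) / o)
    return [o * scale for o in offsets]
```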

In an embodiment of the invention, the step of adjusting the temporal position of the adjusted object path in the composite video includes: adjusting the frame shift of the first object path among the adjusted object paths relative to the other adjusted object paths to reduce the area over which the adjusted object paths overlap one another.

In an embodiment of the invention, the step of compositing the object path and the background image into the composite video includes: blending the object images of the object path with the background image using a Gaussian blending method.

In an embodiment of the invention, the step of compositing the object path and the background image into the composite video includes: blending the mutually overlapping object images in the object path semi-transparently using an alpha blending method.
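Alpha blending of two overlapping object pixels reduces to a per-channel weighted average; a minimal sketch:

```python
def alpha_blend(top, bottom, alpha=0.5):
    """Semi-transparent per-channel mix of two overlapping object pixels;
    alpha is the weight given to the top pixel."""
    return tuple(round(alpha * t + (1 - alpha) * b)
                 for t, b in zip(top, bottom))
```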

In an embodiment of the invention, the step of compositing the object path and the background image into the composite video includes: calculating the z-axis distances of a plurality of objects in the object path; and pasting the objects onto the background image in descending order of their z-axis distances.
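Pasting in descending z order is painter's-algorithm-style compositing: nearer objects are drawn last and occlude farther ones. In the sketch below, z uses the max-y heuristic mentioned later (an object lower in the image is assumed nearer the camera), and the dictionary-based canvas is a simplification:

```python
def paste_by_depth(objects, background):
    """Paste objects onto the background in descending z order.
    Each object is {'pixels': [(x, y), ...], 'label': ...};
    the background is a dict mapping (x, y) -> value."""
    canvas = dict(background)
    def z(obj):
        # z inversely proportional to the object's maximum y coordinate
        return 1.0 / max(y for _, y in obj["pixels"])
    for obj in sorted(objects, key=z, reverse=True):   # farthest first
        for pos in obj["pixels"]:
            canvas[pos] = obj["label"]                 # nearer objects overwrite
    return canvas
```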

An embodiment of the invention provides a video playback apparatus, including an object path extraction module and a video synthesis module. The object path extraction module is configured to extract at least one object path from an original video. The video synthesis module is coupled to the object path extraction module to receive the object path, and is configured to selectively adjust the object path according to a playback length so as to composite the object path into a composite video, wherein the playback length determines the length of the composite video, and the length of the composite video is less than the length of the original video.

In an embodiment of the invention, the video playback apparatus further includes a camera module. The camera module is coupled to the object path extraction module and is configured to capture a scene to obtain the original video.

In an embodiment of the invention, the video playback apparatus further includes a display module. The display module is coupled to the video synthesis module and is configured to play the composite video.

In an embodiment of the invention, the object path includes a first object path and a second object path, and in the composite video the playback speed of the first object path differs from the playback speed of the second object path.

In an embodiment of the invention, the object path includes a first object path and a second object path; a first object of the first object path and a second object of the second object path do not overlap in time in the original video, yet overlap in time in the composite video.

In an embodiment of the invention, the spatial position of the object path in the composite video is the same as its spatial position in the original video.

In an embodiment of the invention, the video playback apparatus further includes a storage module coupled to the object path extraction module. The object path extraction module includes an object detection and background extraction unit and an object path generation unit. The object detection and background extraction unit is configured to receive the original video, extract at least one object and at least one background image from the original video, and store the background image in the storage module. The object path generation unit is coupled to the object detection and background extraction unit, and is configured to create the object path according to the relationship between the objects of a current frame and the objects of a previous frame in the original video, and to store the object path in the storage module.

In an embodiment of the invention, if an object of the current frame has no parent object in the previous frame, or shares a parent object with other objects in the current frame, or has multiple parent objects, the object path generation unit creates a new object path, wherein the object of the current frame is the first object of the new object path. If the object of the current frame has a unique parent object and is the only child object of that parent object, the object path generation unit adds the object of the current frame to the existing object path to which the parent object belongs. The object path generation unit ends the object path when its last object has no child object, or has more than one child object, or shares a child object with another object path.

In an embodiment of the invention, the data of the object path includes: the time length of the object path, the timestamp of the first object of the object path, the time shift of each member object relative to the first object, the position of each member object, the size of each member object, or the parent object path.
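The per-path record could be modeled as follows; the field names are assumptions, while the fields themselves mirror the list above:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ObjectPath:
    """One record per object path (field names are hypothetical)."""
    length: float                          # time length of the path
    first_timestamp: float                 # timestamp of the first object
    time_shifts: List[float] = field(default_factory=list)   # per member, vs. first object
    positions: List[Tuple[int, int]] = field(default_factory=list)  # per member object
    sizes: List[Tuple[int, int]] = field(default_factory=list)      # per member object
    parent: Optional["ObjectPath"] = None  # parent object path, if any
```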

In an embodiment of the invention, the video playback apparatus further includes a user interface. The user interface is configured to provide a start time T_b and an end time T_e in the original video. If T_b ≤ P_t ≤ T_e, or T_b ≤ P_t + P_l ≤ T_e, or P_t ≤ T_b and T_e ≤ P_t + P_l, where P_t and P_l are respectively the occurrence time and the length of a candidate object path in the storage device, the video synthesis module selects the candidate object path as the object path.

In an embodiment of the invention, the video synthesis module includes an object path collection unit and a video length estimation unit. The object path collection unit is configured to collect some or all of the object paths generated by the object detection and background extraction unit. The video length estimation unit is coupled to the object path collection unit to receive its collection result, and is configured to estimate a suggested length according to how crowded the object paths are at different pixels of a scene, wherein the suggested length is presented to a user to help the user decide the playback length.

In an embodiment of the invention, the video length estimation unit generates a crowdedness map according to the original video to describe the crowdedness values of the object paths at different pixels. The video length estimation unit calculates F = C_m/C_th, where C_m denotes a value associated with the crowdedness values and C_th denotes a threshold. The video length estimation unit then calculates T_p = F/R_f, where T_p denotes the suggested length and R_f denotes the frame rate of the composite video.

In an embodiment of the invention, the associated value C_m is the average of the crowdedness values of all pixels in the crowdedness map.

In an embodiment of the invention, the associated value C_m is the average of the top 50% of the crowdedness values of the crowdedness map, or the average of the top 20% of the crowdedness values, or the average of the top 10% of the crowdedness values.

In an embodiment of the invention, the threshold C_th is greater than 0 and less than the associated value C_m.

In an embodiment of the invention, the video synthesis module includes an object path collection unit and an object filter. The object path collection unit is configured to collect some or all of the object paths generated by the object path extraction module. The object filter is coupled to the object path collection unit to receive its collection result, and is configured to filter the object paths according to an object attribute.

In an embodiment of the invention, the object attribute includes size, color, texture, material, face, motion direction, or behavior.

In an embodiment of the invention, the video synthesis module includes an object path rearrangement unit and a video synthesis unit. The object path rearrangement unit is configured to rearrange the object paths in the composite video according to the order in which they appear in the original video. The video synthesis unit is coupled to the object path rearrangement unit to receive its rearrangement result, and is configured to composite the object paths and the background image into the composite video.

In an embodiment of the invention, the object path rearrangement unit selectively adjusts the object path to obtain at least one adjusted object path, initializes the temporal position of the adjusted object path in the composite video according to the order in which the object path appears in the original video, and adjusts the temporal position of the adjusted object path in the composite video according to how the adjusted object paths overlap in the composite video.

In an embodiment of the invention, if the object path has a parent object path, the object path rearrangement unit merges the object path with the parent object path as the adjusted object path. If the time length of the object path is greater than a threshold length, the object path rearrangement unit speeds up the playback of the object path by a speedup factor to shorten its time length, as the adjusted object path, wherein the threshold length is less than or equal to the playback length and the speedup factor is a real number. If the time length of the object path is greater than the threshold length, the object path rearrangement unit splits the object path into a plurality of sub-paths as the adjusted object path.

In an embodiment of the invention, the threshold length is greater than or equal to one quarter of the playback length.

In an embodiment of the invention, the threshold length is equal to one half of the playback length.

In an embodiment of the invention, the speedup factor S_p = P_l/P_th, where P_l is the time length of the object path and P_th is the threshold length. If the speedup factor S_p is greater than a maximum speedup value S_max, S_p is set to S_max, where S_max is greater than 1 and less than 4. If S_p is less than 1, S_p is set to 1.

In an embodiment of the invention, if the representative crowdedness value C_p of the object path in the crowdedness map is greater than or equal to an upper crowdedness bound C_U, the speedup factor S_p is set to the maximum speedup value S_max, where C_U and S_max are real numbers. If C_p is less than or equal to a lower crowdedness bound C_L, S_p is set to 1, where C_L is a real number and C_L is less than C_U. If C_p is greater than C_L and less than C_U, S_p is set to [(C_p - C_L)/(C_U - C_L)]*(S_max - 1) + 1.

In an embodiment of the invention, the object path rearrangement unit calculates a first factor S_g = P_l/P_th, where P_l is the time length of the object path and P_th is the threshold length. If the first factor S_g is greater than the maximum speedup value S_max, the object path rearrangement unit sets S_g to S_max, where S_max is a real number greater than 1 and less than 4. If S_g is less than 1, the object path rearrangement unit sets S_g to 1. If the representative crowdedness value C_p of the object path in the crowdedness map is greater than or equal to the upper crowdedness bound C_U, the object path rearrangement unit sets a second factor S_c to S_max, where C_U is a real number. If C_p is less than or equal to the lower crowdedness bound C_L, the object path rearrangement unit sets S_c to 1, where C_L is a real number and C_L is less than C_U. If C_p is greater than C_L and less than C_U, the object path rearrangement unit sets S_c to [(C_p - C_L)/(C_U - C_L)]*(S_max - 1) + 1. The object path rearrangement unit takes the larger of the first factor S_g and the second factor S_c as the speedup factor S_p.

In an embodiment of the invention, the object path rearrangement unit adjusts the frame shift of the first sub-path relative to the other sub-paths to reduce the overlapping area of the sub-paths.

In an embodiment of the invention, if there is a temporal gap between a first object path and a second object path among the adjusted object paths, the object path rearrangement unit advances the later of the two while keeping its time later than that of the earlier one. The object path rearrangement unit multiplies the time shifts of the adjusted object paths by the same adjustment value so that the time range of the adjusted object paths falls within the playback length.

In an embodiment of the invention, the object path rearrangement unit adjusts the frame shift of the first object path among the adjusted object paths relative to the other adjusted object paths to reduce the area over which the adjusted object paths overlap one another.

In an embodiment of the invention, the video synthesis unit blends the object images of the object path with the background image using a Gaussian blending method.

在本發明的一實施例中,上述的視頻合成單元利用Alpha混合方法以半透明方式混合所述物件路徑中相互重疊的物件圖像。 In an embodiment of the invention, the video synthesizing unit mixes the object images overlapping each other in the object path in a translucent manner by using an Alpha blending method.
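A minimal sketch of the semi-transparent alpha blending mentioned above, using NumPy. The uniform 0.5 opacity and the function name are assumptions for illustration; this is not the patent's actual implementation.

```python
# Illustrative alpha-blending sketch: result = alpha*overlay + (1-alpha)*base,
# applied where two object images overlap.
import numpy as np

def alpha_blend(base, overlay, alpha=0.5):
    """Blend an overlay image onto a base image with uniform opacity."""
    base = base.astype(np.float64)
    overlay = overlay.astype(np.float64)
    return (alpha * overlay + (1.0 - alpha) * base).astype(np.uint8)
```

With `alpha=0.5`, both overlapping objects remain equally visible, which is the semi-transparent effect described above.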

在本發明的一實施例中,上述的視頻合成單元計算所述物件路徑中多個物件的z軸距離。根據該些物件的z軸距離的降冪順序,視頻合成單元將該些物件粘貼在背景圖像上。 In an embodiment of the invention, the video synthesis unit calculates the z-axis distances of a plurality of objects in the object path. The video synthesis unit pastes the objects onto the background image in descending order of their z-axis distances.

在本發明的一實施例中,上述的物件的z軸距離反比於物件的y座標的最大值,或正比於從物件到圖像中心的最小距離。 In an embodiment of the invention, the z-axis distance of the object is inversely proportional to the maximum value of the y-coordinate of the object, or proportional to the minimum distance from the object to the center of the image.
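The two z-axis distance measures above, and pasting in descending z order (farthest first, so nearer objects occlude farther ones), can be sketched as follows. This is an illustrative sketch under assumed conventions: boxes are `(x_min, y_min, x_max, y_max)`, the constant 1000 is an arbitrary scale, and the fisheye distance uses a corner-based approximation (if the image center lies inside the box, the true minimum would be 0).

```python
# Illustrative sketch, not the patent's actual implementation.
import math

def z_distance_standard(bbox):
    """Ordinary camera: z is inversely proportional to the object's maximum
    y coordinate (objects lower in the frame are nearer to the camera)."""
    return 1000.0 / bbox[3]                 # bbox = (x0, y0, x1, y1)

def z_distance_fisheye(bbox, center):
    """Fisheye camera: z is proportional to the minimum distance from the
    object to the image center (corner-based approximation)."""
    cx, cy = center
    corners = [(bbox[0], bbox[1]), (bbox[0], bbox[3]),
               (bbox[2], bbox[1]), (bbox[2], bbox[3])]
    return min(math.hypot(x - cx, y - cy) for x, y in corners)

def paste_order(objects, z_of):
    """Painter's algorithm: sort objects by descending z distance, so the
    farthest object is pasted first and nearer objects occlude it."""
    return sorted(objects, key=z_of, reverse=True)
```

For an ordinary camera, an object whose box bottom is at y = 50 is treated as farther than one at y = 100, so it is pasted first and may be occluded.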

本發明實施例提供一種視頻播放方法,包括:提供原始視頻,其中該原始視頻是由攝影模組拍攝場景而獲得;由物件路徑提取模組從原始視頻中提取至少一物件路徑與一背景圖像;計算所述物件路徑中多個物件的z軸距離;以及根據該些物件的z軸距離的降冪順序,將該些物件粘貼在背景圖像上,以將所述物件路徑合成至合成視頻中,其中該合成視頻的時間長度小於該原始視頻的時間長度。 An embodiment of the present invention provides a video playback method, including: providing an original video, wherein the original video is obtained by a camera module capturing a scene; extracting, by an object path extraction module, at least one object path and a background image from the original video; calculating the z-axis distances of a plurality of objects in the object path; and pasting the objects onto the background image in descending order of their z-axis distances, so as to synthesize the object path into a composite video, wherein the time length of the composite video is less than the time length of the original video.

在本發明的一實施例中,上述的物件的z軸距離反比於該些物件的y座標的最大值,或正比於從該些物件到圖像中心的最小距離。 In an embodiment of the invention, the z-axis distance of the object is inversely proportional to the maximum value of the y-coordinate of the objects, or proportional to the minimum distance from the objects to the center of the image.

基於上述,本發明實施例所述視頻播放方法與視頻播放裝置可以縮短視頻播放時間,即合成視頻的時間長度小於原始視頻的時間長度。所述視頻播放方法與視頻播放裝置可以依據固定設置或由使用者動態決定的播放時間長度,來決定合成視頻的時間長度。所述視頻播放方法與視頻播放裝置可以從原始視頻中提取至少一物件路徑,以及選擇性地調整所述物件路徑,以將所述物件路徑合成至合成視頻中。因此,所述視頻播放方法與視頻播放裝置可以在預設的播放時間長度中顯示感興趣的所有物件。 Based on the above, the video playing method and the video playing device in the embodiments of the present invention can shorten the video playing time, that is, the length of the synthesized video is smaller than the length of the original video. The video playing method and the video playing device may determine the length of the synthesized video according to a fixed setting or a playing time length dynamically determined by the user. The video playback method and video playback device may extract at least one object path from the original video and selectively adjust the object path to composite the object path into the composite video. Therefore, the video playing method and the video playing device can display all the items of interest in a preset playing time length.

為讓本發明的上述特徵和優點能更明顯易懂,下文特舉實施例,並配合所附圖式作詳細說明如下。 To make the aforementioned features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.

11‧‧‧原始視頻 11‧‧‧ original video

12‧‧‧物件路徑 12‧‧‧ Object path

13‧‧‧合成視頻 13‧‧‧Composite video

100‧‧‧視頻播放裝置 100‧‧‧Video player

110‧‧‧攝影模組 110‧‧‧Photography module

120‧‧‧物件路徑提取模組 120‧‧‧Object Path Extraction Module

121‧‧‧物件檢測和背景提取單元 121‧‧‧ Object detection and background extraction unit

122‧‧‧物件路徑產生單元 122‧‧‧object path generation unit

130‧‧‧存儲模組 130‧‧‧Storage Module

140‧‧‧視頻合成模組 140‧‧‧Video Synthesis Module

141‧‧‧視頻長度估算單元 141‧‧‧Video Length Estimation Unit

142‧‧‧物件路徑蒐集單元 142‧‧‧Object Path Collection Unit

143‧‧‧物件路徑重排單元 143‧‧‧Object path rearrangement unit

144‧‧‧視頻合成單元 144‧‧‧Video Synthesis Unit

145‧‧‧物件篩選器 145‧‧‧ Object Filter

150‧‧‧使用者介面 150‧‧‧User interface

160‧‧‧顯示模組 160‧‧‧ display module

411、412、413、421、422、423、 431、432、433‧‧‧邊界框 411, 412, 413, 421, 422, 423, 431, 432, 433‧‧‧ bounding box

1100、1200‧‧‧圖像幀 1100, 1200‧‧‧ image frames

1101、1102、1103、1201、1202、1203‧‧‧物件 1101, 1102, 1103, 1201, 1202, 1203‧‧‧ objects

1210‧‧‧圖像中心 1210‧‧‧Image Center

FS2、FS3‧‧‧幀偏移 FS2, FS3‧‧‧ frame offset

L N ‧‧‧時間範圍 L N ‧‧‧Time Range

Pa、Pb、Pc、Pd、Pe、Pf、P1、P2、P3、P4、P9、P101、P102、P103‧‧‧物件路徑 P a , P b , P c , P d , P e , P f , P1, P2, P3, P4, P9, P101, P102, P103‧‧‧ object path

P9_1、P9_2‧‧‧子路徑 P9_1, P9_2‧‧‧ subpath

r‧‧‧半徑 r‧‧‧Radius

S210~S250、S241~S246、S810、S1310~S1340‧‧‧步驟 S210~S250, S241~S246, S810, S1310~S1340‧‧‧ steps

SC‧‧‧調整值 SC‧‧‧ adjustment value

TL‧‧‧播放時間長度/合成視頻的時間長度 T L ‧‧‧Play time length/time length of the composite video

Tov‧‧‧原始視頻的時間長度 T ov ‧‧‧Time length of the original video

圖1是依照本發明一實施例說明一種視頻播放裝置的電路方塊示意圖。 FIG. 1 is a block diagram showing a circuit of a video playing device according to an embodiment of the invention.

圖2是依照本發明一實施例說明一種視頻播放方法的流程示意圖。 2 is a schematic flow chart of a video playing method according to an embodiment of the invention.

圖3是依照本發明實施例說明圖1所示物件路徑提取模組與視頻合成模組的電路方塊示意圖。 FIG. 3 is a block diagram showing the circuit of the object path extraction module and the video synthesis module shown in FIG. 1 according to an embodiment of the invention.

圖4A、圖4B與圖4C繪示了在不同幀中出現的物件之間的關係。 4A, 4B, and 4C illustrate the relationship between objects that appear in different frames.

圖5A是依照本發明實施例顯示了一個辦公室場景的圖像示意圖。 Figure 5A is a diagram showing an image of an office scene in accordance with an embodiment of the present invention.

圖5B顯示了圖5A所示視頻所對應的擠迫圖。 FIG. 5B shows the crowdedness map corresponding to the video shown in FIG. 5A.

圖6A是依照本發明另一實施例顯示了一個火車站月台場景的圖像示意圖。 6A is a schematic diagram showing an image of a train station platform scene in accordance with another embodiment of the present invention.

圖6B顯示了圖6A所示視頻所對應的擠迫圖。 FIG. 6B shows the crowdedness map corresponding to the video shown in FIG. 6A.

圖7A與7B是依照本發明實施例說明原始視頻與合成視頻的示意圖。 7A and 7B are schematic diagrams illustrating an original video and a composite video in accordance with an embodiment of the present invention.

圖8是依照本發明實施例說明圖2所示步驟S240的詳細實施流程示意圖。 FIG. 8 is a schematic flow chart showing the detailed implementation of step S240 shown in FIG. 2 according to an embodiment of the invention.

圖9是依照本發明實施例說明了圖8所示步驟S243的操作過程示意圖。 FIG. 9 is a schematic diagram showing the operation of step S243 shown in FIG. 8 according to an embodiment of the present invention.

圖10A、圖10B與圖10C是依照本發明實施例說明物件路徑重排單元初始化經調整物件路徑的時間位置示意圖。 10A, FIG. 10B and FIG. 10C are schematic diagrams showing the time position of the object path rearrangement unit initializing the adjusted object path according to an embodiment of the invention.

圖11是依照本發明實施例說明一般相機獲得的圖像的示意圖。 11 is a schematic diagram illustrating an image obtained by a general camera in accordance with an embodiment of the present invention.

圖12是依照本發明實施例說明魚眼相機獲得的圖像的示意圖。 Figure 12 is a schematic illustration of an image obtained by a fisheye camera in accordance with an embodiment of the present invention.

圖13是依照本發明另一實施例說明一種視頻播放方法的流程示意圖。 FIG. 13 is a schematic flowchart diagram of a video playing method according to another embodiment of the present invention.

圖14是依照本發明另一實施例說明圖1所示視頻合成模組的電路方塊示意圖。 FIG. 14 is a block diagram showing the circuit of the video synthesizing module of FIG. 1 according to another embodiment of the present invention.

在本案說明書全文(包括申請專利範圍)中所使用的「耦接」一詞可指任何直接或間接的連接手段。舉例而言,若文中描述第一裝置耦接於第二裝置,則應該被解釋成該第一裝置可以直接連接於該第二裝置,或者該第一裝置可以透過其他裝置或某種連接手段而間接地連接至該第二裝置。另外,凡可能之處,在圖式及實施方式中使用相同標號的元件/構件/步驟代表相同或類似部分。不同實施例中使用相同標號或使用相同用語的元件/構件/步驟可以相互參照相關說明。 The term "coupled" used throughout this specification (including the claims) may refer to any direct or indirect means of connection. For example, if the text describes that a first device is coupled to a second device, it should be interpreted to mean that the first device may be directly connected to the second device, or that the first device may be indirectly connected to the second device through another device or some means of connection. In addition, wherever possible, elements/components/steps using the same reference numbers in the drawings and embodiments denote the same or similar parts. Elements/components/steps using the same reference numbers or the same terms in different embodiments may refer to one another's related descriptions.

下述諸實施例將說明視頻播放(video playback)方法與/或視頻播放裝置。所述視頻播放方法與/或視頻播放裝置可以解決播放長視頻的問題,這長視頻是由固定相機在使用者預設的時間長度中所攝得的。在一些實施例中(但不限於此),在合成用來播放的影像之前,所述視頻播放方法與/或視頻播放裝置可以預估一個合適的最短播放長度(建議時間長度),並將該建議時間長度提示給使用者。使用者可以參考由該裝置建議的時間長度來設定預期的播放時間長度。所述視頻播放方法與/或視頻播放裝置可以依據此播放時間長度來產生具有使用者期望時間長度的合成視頻。合成視頻可以包含出現在原始視頻的所有物件(或感興趣的所有物件)。合成視頻中物件出現順序可以相同於原始視頻。在一些實施例中(但不限於此),所述視頻播放方法與/或視頻播放裝置可以用半透明(semitransparent)方式顯示合成視頻中的重疊物件。在另一些實施例中(但不限於此),所述視頻播放方法與/或視頻播放裝置可以用較近物件(near object)遮擋(occluding)較遠物件(distant object),以使合成視頻看起來像是從相機拍攝的視頻。 The following embodiments describe a video playback method and/or a video playback device. The video playback method and/or video playback device can solve the problem of playing back a long video, which is captured by a fixed camera over a user-preset length of time. In some embodiments (but not limited thereto), before synthesizing the video to be played, the video playback method and/or video playback device may estimate a suitable shortest playback length (a suggested time length) and prompt the user with this suggested time length. The user may refer to the time length suggested by the device to set the expected playback time length. The video playback method and/or video playback device may generate, according to this playback time length, a composite video having the time length desired by the user. The composite video may contain all objects (or all objects of interest) appearing in the original video. The order in which objects appear in the composite video may be the same as in the original video. In some embodiments (but not limited thereto), the video playback method and/or video playback device may display overlapping objects in the composite video in a semi-transparent manner. In other embodiments (but not limited thereto), the video playback method and/or video playback device may have nearer objects occluding more distant objects, so that the composite video looks like a video captured by a camera.

圖1是依照本發明一實施例說明一種視頻播放裝置100的電路方塊示意圖。視頻播放裝置100包括攝影模組110、物件路徑提取模組120、存儲模組130、視頻合成模組140、使用者介面150以及顯示模組160。攝影模組110耦接至物件路徑提取模組120。攝影模組110可以拍攝場景而獲得原始視頻(original video)11。物件路徑提取模組120可以從原始視頻11中提取至少一物件路徑(object path)12。存儲模組130耦接至物件路徑提取模組 120。依照不同的設計需求,物件路徑提取模組120可以將所述物件路徑12直接提供給視頻合成模組140,或是將所述物件路徑12儲存於存儲模組130。 FIG. 1 is a block diagram showing a circuit of a video playback apparatus 100 according to an embodiment of the invention. The video playback device 100 includes a camera module 110, an object path extraction module 120, a storage module 130, a video synthesis module 140, a user interface 150, and a display module 160. The photography module 110 is coupled to the object path extraction module 120. The photography module 110 can capture a scene to obtain an original video 11. The object path extraction module 120 can extract at least one object path 12 from the original video 11. The storage module 130 is coupled to the object path extraction module 120. The object path extraction module 120 can provide the object path 12 directly to the video synthesis module 140 or store the object path 12 in the storage module 130 according to different design requirements.

視頻合成模組140耦接至物件路徑提取模組120與存儲模組130,以接收所述物件路徑12。視頻合成模組140可以依據預設的播放時間長度,選擇性地調整所述物件路徑12,以將所述物件路徑12合成至合成視頻13中。其中,該播放時間長度決定合成視頻13的時間長度,而合成視頻13的時間長度小於原始視頻11的時間長度。依照不同的設計需求,所述播放時間長度可以是固定設置的一個預設值,或是由使用者動態決定的播放時間長度值。 The video synthesis module 140 is coupled to the object path extraction module 120 and the storage module 130 to receive the object path 12 . The video synthesis module 140 can selectively adjust the object path 12 according to a preset play time length to synthesize the object path 12 into the composite video 13. The length of the play time determines the length of time of the synthesized video 13 , and the length of the synthesized video 13 is less than the length of time of the original video 11 . According to different design requirements, the length of the play time may be a preset value of a fixed setting or a play time length value determined dynamically by the user.

使用者介面150耦接至視頻合成模組140。使用者介面150可以將使用者所輸入的開始時間T b 與結束時間T e 傳送給視頻合成模組140。使用者可以藉由開始時間T b 與結束時間T e 的設定,來決定欲觀看原始視頻11的物件的時間範圍。在決定了欲觀看的原始視頻11的時間範圍後,視頻合成模組140可以將此時間範圍所屬的物件路徑合成至合成視頻13中。其中,合成視頻13的時間長度是符合預設的播放時間長度。合成視頻13的時間長度無關於原始視頻11的內容。顯示模組160耦接至視頻合成模組140。顯示模組160可以播放視頻合成模組140所產生的合成視頻13給使用者觀看。 The user interface 150 is coupled to the video synthesis module 140. The user interface 150 can transmit the start time T b and the end time T e input by the user to the video synthesis module 140. The user can determine the time range of the object to view the original video 11 by setting the start time T b and the end time T e . After determining the time range of the original video 11 to be viewed, the video composition module 140 may synthesize the object path to which the time range belongs into the composite video 13. The length of the synthesized video 13 is in accordance with the preset playing time length. The length of time in which the video 13 is synthesized is not related to the content of the original video 11. The display module 160 is coupled to the video synthesis module 140. The display module 160 can play the synthesized video 13 generated by the video synthesis module 140 for viewing by the user.

圖2是依照本發明一實施例說明一種視頻播放方法的流程示意圖。步驟S210提供原始視頻,其中該原始視頻是由攝影模組拍攝場景而獲得。於步驟S220中,由物件路徑提取模組從該原始視頻中提取至少一物件路徑。於步驟S230中,提供播放時間長度,以決定合成視頻的時間長度,其中該合成視頻的時間長度小於該原始視頻的時間長度。於步驟S240中,由視頻合成模組選擇性地調整所述物件路徑,以將所述物件路徑合成至該合成視頻中。於步驟S250中,播放該合成視頻給使用者觀看。圖2與圖1可以相互參照,故不再贅述。在一些實施例中,圖2所述視頻播放方法可以實現於硬體電路(例如圖1所示視頻播放裝置100)。在另一些實施例中,圖2所述視頻播放方法可以實現於韌體(firmware)。此韌體可以運行於中央處理單元、微控制器或是其他韌體運行平台。在其他實施例中,圖2所述視頻播放方法可以實現於軟體。此軟體可以存放或運行於電腦、智慧型手機或是其他軟體運行平台。 FIG. 2 is a schematic flow chart of a video playback method according to an embodiment of the invention. Step S210 provides an original video, wherein the original video is obtained by a camera module capturing a scene. In step S220, at least one object path is extracted from the original video by an object path extraction module. In step S230, a playback time length is provided to determine the time length of a composite video, wherein the time length of the composite video is less than the time length of the original video. In step S240, the object path is selectively adjusted by a video synthesis module, so as to synthesize the object path into the composite video. In step S250, the composite video is played for the user to view. FIG. 2 and FIG. 1 may be cross-referenced, so the details are not repeated here. In some embodiments, the video playback method of FIG. 2 may be implemented in a hardware circuit (for example, the video playback device 100 shown in FIG. 1). In other embodiments, the video playback method of FIG. 2 may be implemented in firmware, which may run on a central processing unit, a microcontroller, or another firmware platform. In still other embodiments, the video playback method of FIG. 2 may be implemented in software, which may be stored on or run on a computer, a smart phone, or another software platform.

在一些應用情境中,所述物件路徑可能包括第一物件路徑與第二物件路徑。經由視頻合成模組140選擇性地調整第一物件路徑與第二物件路徑後,於合成視頻13中的該第一物件路徑的播放速度可以不同於該第二物件路徑的播放速度。物件路徑的播放速度取決於設定的播放時間長度TL。舉例來說,於實際應用情境中,若第一物件路徑與第二物件路徑的播放時間長度均小於閾長度P th (此閾長度P th 小於或等於合成視頻13的播放時間長度TL),則第一物件路徑與第二物件路徑的播放速度可能相同。若第一物件路徑與/或第二物件路徑的播放時間長度大於閾長度P th ,則視頻合成模組140可以依據閾長度P th 來調整第一物件路徑與/或第二物件路徑的播放速度,使得第一物件路徑與第二物件路徑的播放速度可能不同(或相同)。 In some application scenarios, the object paths may include a first object path and a second object path. After the first object path and the second object path are selectively adjusted by the video synthesis module 140, the playback speed of the first object path in the composite video 13 may differ from the playback speed of the second object path. The playback speed of an object path depends on the set playback time length T L . For example, in a practical application scenario, if the time lengths of both the first object path and the second object path are less than the threshold length P th (where the threshold length P th is less than or equal to the playback time length T L of the composite video 13), the playback speeds of the first object path and the second object path may be the same. If the time length of the first object path and/or the second object path is greater than the threshold length P th , the video synthesis module 140 may adjust the playback speed of the first object path and/or the second object path according to the threshold length P th , so that the playback speeds of the first object path and the second object path may be different (or the same).

在一些應用情境中,所述物件路徑可能包括第一物件路徑與第二物件路徑。該第一物件路徑的第一物件與該第二物件路徑的第二物件出現於該原始視頻11中的時間不重疊。經由視頻合成模組140選擇性地調整第一物件路徑與第二物件路徑後,該第一物件與該第二物件出現於合成視頻13中的時間相重疊。舉例來說,圖7A所示物件路徑P1的物件與物件路徑P2的物件出現於原始視頻11中的時間不重疊。經由視頻合成模組140選擇性地調整物件路徑P1與物件路徑P2後,物件路徑P1的物件與物件路徑P2的物件出現於合成視頻13中的時間相重疊,如圖7B所示。需注意的是,所述物件路徑於合成視頻13中的空間位置相同於所述物件路徑於原始視頻11中的空間位置。舉例來說,圖7B所示物件路徑P1、P2、P3與P4在合成視頻13中的空間位置相同於圖7A所示物件路徑P1、P2、P3與P4於原始視頻11中的空間位置。圖7A與圖7B將於後文詳述之。 In some application scenarios, the object path may include a first item path and a second item path. The time at which the first object of the first object path and the second object of the second object path appear in the original video 11 does not overlap. After the first object path and the second object path are selectively adjusted via the video synthesis module 140, the first object overlaps with the time when the second object appears in the composite video 13. For example, the object of the object path P1 shown in FIG. 7A does not overlap with the time when the object of the object path P2 appears in the original video 11. After the object path P1 and the object path P2 are selectively adjusted via the video synthesis module 140, the object of the object path P1 overlaps with the time when the object of the object path P2 appears in the composite video 13, as shown in FIG. 7B. It should be noted that the spatial position of the object path in the composite video 13 is the same as the spatial position of the object path in the original video 11. For example, the object paths P1, P2, P3, and P4 shown in FIG. 7B have the same spatial position in the composite video 13 as the spatial positions of the object paths P1, P2, P3, and P4 shown in FIG. 7A in the original video 11. 7A and 7B will be described later in detail.

圖3是依照本發明實施例說明圖1所示物件路徑提取模組120與視頻合成模組140的電路方塊示意圖。圖3所示實施例可以參照圖1與圖2的相關說明而類推之。 FIG. 3 is a block diagram showing the circuit of the object path extraction module 120 and the video synthesis module 140 shown in FIG. 1 according to an embodiment of the invention. The embodiment shown in FIG. 3 can be analogized with reference to the related description of FIG. 1 and FIG. 2.

請參照圖3,物件路徑提取模組120包括物件檢測和背景提取(object detection and background extraction)單元121以及物件路徑產生(object path generation)單元122。在產生物件路徑之前,應首先檢測物件。如美國專利公告號US 8,599,255中所述,「物件」被定義為在視頻流(video stream)的幀(frame)中一個場景的前景(foreground of a scene)。與此相反的是,「背景」被定義為一個靜態的場景(static scene),其在視頻的時間序列幀中幾乎不變或僅有細微的差別(slight difference)。物件檢測和背景提取單元121可以接收原始視頻11,並從原始視頻11中提取至少一物件(object)與至少一背景圖像(background image)。物件檢測和背景提取單元121可以用任何演算法從原始視頻11中提取物件與背景圖像。舉例來說(但不以此為限),在一些實施例中,物件檢測和背景提取單元121可以採用美國專利公告號US 8,599,255中所述方法或是其他已知方法,來從原始視頻11中提取物件與背景圖像。 Referring to FIG. 3, the object path extraction module 120 includes an object detection and background extraction unit 121 and an object path generation unit 122. Before an object path can be generated, objects must first be detected. As described in U.S. Patent No. 8,599,255, an "object" is defined as the foreground of a scene in a frame of a video stream. In contrast, the "background" is defined as a static scene that is nearly constant, or has only slight differences, across the time-series frames of the video. The object detection and background extraction unit 121 may receive the original video 11 and extract at least one object and at least one background image from the original video 11. The object detection and background extraction unit 121 may use any algorithm to extract objects and background images from the original video 11. For example (but not limited thereto), in some embodiments, the object detection and background extraction unit 121 may adopt the method described in U.S. Patent No. 8,599,255, or another known method, to extract objects and background images from the original video 11.

每個物件和每個背景圖像都有一個對應於其來源幀(source frame)的時間戳記(timestamp)。物件檢測和背景提取單元121可以將所述背景圖像存儲於存儲模組130中。依照不同的設計需求,存儲模組130可能包含存儲裝置(storage device,例如硬碟、固態硬碟等)、記憶體、緩衝器或是其他資料儲存媒體。在一些實施例中,物件檢測和背景提取單元121可以將所有來源幀的背景圖像存儲於存儲模組130中。在另一些實施例中,為了節省存儲空間,不是每個背景圖像都被存儲。例如,每隔一段恒定期間(constant period)才選取背景圖像加以存儲。 Each object and each background image has a timestamp corresponding to its source frame. The object detection and background extraction unit 121 may store the background images in the storage module 130. Depending on design requirements, the storage module 130 may include a storage device (for example, a hard disk or a solid-state drive), a memory, a buffer, or another data storage medium. In some embodiments, the object detection and background extraction unit 121 may store the background images of all source frames in the storage module 130. In other embodiments, to save storage space, not every background image is stored; for example, a background image is selected and stored only once every constant period.

物件路徑產生單元122耦接至物件檢測和背景提取單元121。物件路徑產生單元122可以依據在原始視頻11中的目前幀的物件與先前幀中的物件的關係,創建物件路徑12,以及將所述物件路徑12存儲於存儲模組130。在對一個幀實施了物件檢測之後,在目前幀中所有檢測到的物件將會被檢查其與先前幀中的物件的關係。物件的邊界框(bounding box)可以用於建立所述關係。舉例來說,如果在目前幀中的物件的邊界框重疊於先前幀中物件的邊界框,則在序列幀中出現的這兩個物件具有它們之間的關係。在目前幀中的物件被視為是子物件(child object),而在先前幀中的物件被視為父物件(parent object)。 The object path generating unit 122 is coupled to the object detecting and background extracting unit 121. The object path generating unit 122 may create the object path 12 according to the relationship between the object of the current frame in the original video 11 and the object in the previous frame, and store the object path 12 in the storage module 130. After object detection is performed on a frame, all detected objects in the current frame will be checked for their relationship to objects in the previous frame. A bounding box of objects can be used to establish the relationship. For example, if the bounding box of an object in the current frame overlaps the bounding box of the object in the previous frame, the two objects that appear in the sequence frame have a relationship therebetween. An object in the current frame is considered a child object, and an object in the previous frame is treated as a parent object.

圖4A繪示了在不同幀中出現的物件之間的關係。圖4A所繪示的多個方框各自代表在不同幀中出現的物件的邊界框。由於這些方框(物件的邊界框)各自重疊於其先前幀中的方框(物件的邊界框),因此圖4A所繪示的這些方框(物件的邊界框)具有關聯性。物件路徑產生單元122可以將具有關聯性的這些方框(物件的邊界框)組成一個物件路徑。 Figure 4A illustrates the relationship between objects that occur in different frames. The plurality of blocks depicted in Figure 4A each represent a bounding box of objects that appear in different frames. Since these boxes (the bounding boxes of the objects) each overlap the box in its previous frame (the bounding box of the object), the boxes (the bounding box of the object) depicted in Figure 4A are related. The object path generation unit 122 may make these boxes (boundary boxes of objects) having an association into one object path.
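The bounding-box overlap test used above to relate a current-frame object (child) to a previous-frame object (parent) can be sketched as follows. This is an illustrative sketch; the function names are hypothetical, and boxes are assumed to be `(x_min, y_min, x_max, y_max)` tuples.

```python
# Illustrative sketch of the parent/child relationship test: an object in
# the current frame is related to an object in the previous frame if their
# bounding boxes overlap.

def boxes_overlap(a, b):
    """True if axis-aligned boxes a and b overlap in both x and y."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def find_parents(current_box, previous_boxes):
    """Indices of previous-frame objects whose boxes overlap current_box."""
    return [i for i, box in enumerate(previous_boxes)
            if boxes_overlap(current_box, box)]
```

An object with zero, one, or multiple parents is then handled by the path-creation rules described below.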

圖4B與圖4C分別繪示了在不同幀中出現的物件之間的關係。圖4B與圖4C所繪示的多個方框各自代表在不同幀中出現的物件的邊界框。物件路徑的產生,包括三個情況:1)創建新的物件路徑;2)將成員(members)添加到現有的物件路徑;以及3)結束物件路徑。所述三個情況將配合圖4A、圖4B與圖4C分述如下。 FIG. 4B and FIG. 4C respectively illustrate the relationships between objects appearing in different frames. The boxes depicted in FIG. 4B and FIG. 4C each represent the bounding box of an object appearing in a different frame. The generation of object paths involves three cases: 1) creating a new object path; 2) adding members to an existing object path; and 3) ending an object path. These three cases are described below with reference to FIG. 4A, FIG. 4B, and FIG. 4C.

創建新的物件路徑有三個條件:1)目前幀的該物件沒有父物件(在先前幀的對應物件);2)在目前幀中的該物件與其他物件共有同一個父物件;或3)目前幀的該物件擁有多個父物件。物件路徑產生單元122可以依據至少滿足上述條件之一的物件而創建一個新物件路徑,而目前幀的此物件是此新物件路徑的第一個物件。例如,圖4A所示物件的邊界框411沒有父物件,則物件路徑產生單元122可以將物件的邊界框411作為新創建的物件路徑的第一個物件。圖4B所示目前幀中物件的邊界框422與目前幀中物件的邊界框423共有同一個父物件(物件的邊界框421),則物件路徑產生單元122可以將物件的邊界框422作為新創建的物件路徑Pb的第一個物件,以及將物件的邊界框423作為新創建的物件路徑Pc的第一個物件。圖4C所示目前幀中物件的邊界框433擁有多個父物件(物件的邊界框431與432),則物件路徑產生單元122可以將物件的邊界框433作為新創建的物件路徑Pf的第一個物件。 There are three conditions for creating a new object path: 1) the object in the current frame has no parent object (the corresponding object in the previous frame); 2) the object in the current frame shares the same parent object as the other objects; or 3) the current The object of the frame has multiple parent objects. The object path generating unit 122 may create a new object path according to an object satisfying at least one of the above conditions, and the object of the current frame is the first object of the new object path. For example, if the bounding box 411 of the object shown in FIG. 4A has no parent object, the object path generating unit 122 may use the bounding box 411 of the object as the first object of the newly created object path. 4B, the bounding box 422 of the object in the current frame shares the same parent object (the bounding box 421 of the object) with the bounding box 423 of the object in the current frame, and the object path generating unit 122 can newly create the bounding box 422 of the object. The first object of the object path P b and the bounding box 423 of the object are the first object of the newly created object path P c . 4C shows that the bounding box 433 of the object in the current frame has a plurality of parent objects (the bounding boxes 431 and 432 of the object), and the object path generating unit 122 can use the bounding box 433 of the object as the newly created object path P f An object.

如果目前幀的該物件具有唯一的父物件,且目前幀的該物件為該父物件的唯一子物件,則物件路徑產生單元122將目前幀的該物件添加到父物件所屬的現有物件路徑中。例如,圖4A所示物件的邊界框412具有唯一的父物件(物件的邊界框411),且物件的邊界框412為父物件(物件的邊界框411)的唯一子物件,則物件路徑產生單元122可以將物件的邊界框412添加到父物件(物件的邊界框411)所屬的現有物件路徑中。 If the object in the current frame has a unique parent object, and is the only child of that parent object, the object path generation unit 122 adds the object of the current frame to the existing object path to which the parent object belongs. For example, the bounding box 412 of the object shown in FIG. 4A has a unique parent object (the bounding box 411), and the bounding box 412 is the only child of that parent object, so the object path generation unit 122 may add the bounding box 412 to the existing object path to which the parent object (the bounding box 411) belongs.

當出現以下條件至少其中之一時,物件路徑產生單元122結束所述物件路徑:1)所述物件路徑的最後一個物件沒有子物件;2)所述物件路徑的最後一個物件擁有不止一個子物件;或3)所述至少一物件路徑的最後一個物件和其他物件路徑共有子物件。例如,圖4A所示物件路徑的最後一個物件(物件的邊界框413)沒有子物件,則物件路徑產生單元122結束圖4A所示物件路徑。圖4B所示物件路徑Pa的最後一個物件(物件的邊界框421)擁有多個子物件(物件的邊界框422與423),則物件路徑產生單元122結束圖4B所示物件路徑Pa。圖4C所示物件路徑Pd的最後一個物件(物件的邊界框431)和物件路徑Pe的最後一個物件(物件的邊界框432)共有同一個子物件,則物件路徑產生單元122結束圖4C所示物件路徑Pd與Pe。當物件路徑的第一個物件具有父物件時,父物件的物件路徑被認為是目前物件路徑的父物件路徑。 The object path generation unit 122 ends an object path when at least one of the following conditions occurs: 1) the last object of the object path has no child object; 2) the last object of the object path has more than one child object; or 3) the last object of the object path shares a child object with another object path. For example, the last object of the object path shown in FIG. 4A (the bounding box 413) has no child object, so the object path generation unit 122 ends the object path shown in FIG. 4A. The last object of the object path P a shown in FIG. 4B (the bounding box 421) has multiple child objects (the bounding boxes 422 and 423), so the object path generation unit 122 ends the object path P a shown in FIG. 4B. The last object of the object path P d shown in FIG. 4C (the bounding box 431) and the last object of the object path P e (the bounding box 432) share the same child object, so the object path generation unit 122 ends the object paths P d and P e shown in FIG. 4C. When the first object of an object path has a parent object, the object path of that parent object is regarded as the parent object path of the current object path.
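The path-lifecycle rules above (when to create a new path, when to extend an existing one, and when to end one) can be condensed into two small decision functions. This is an illustrative sketch with hypothetical names, not the patent's actual implementation.

```python
# Illustrative sketch of the three path-lifecycle rules described above.

def path_action(num_parents, shares_parent):
    """Decision for one object in the current frame: 'new' creates a new
    object path with this object as its first member; 'extend' appends it
    to its unique parent's existing path."""
    if num_parents == 0:      # no corresponding object in the previous frame
        return 'new'
    if num_parents > 1:       # multiple parents (paths merge)
        return 'new'
    if shares_parent:         # a sibling object shares the same parent (split)
        return 'new'
    return 'extend'           # unique parent, and it is that parent's only child

def path_ends(num_children, shares_child):
    """A path ends if its last object has no child, more than one child,
    or shares a child object with another path."""
    return num_children != 1 or shares_child
```

Note that a split or merge both ends the old path(s) and starts new one(s), which matches the examples of FIG. 4B and FIG. 4C above.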

在產生物件路徑12後,物件路徑12的資料被保存在存儲模組130(例如記憶體或存儲裝置)中。物件路徑12的資料包括:所述物件路徑12的時間長度、所述物件路徑12的第一個物件的時間戳記(timestamp)、所述物件路徑12的每個成員物件對應於該第一個物件的時間偏移(time shift)、每個成員物件的位置、每個成員物件的大小、及/或父物件路徑。因此,在一些實施例中,物件路徑12可以是三個維度資料,包括時間資訊和在兩個維度中的位置資訊。 After the object path 12 is generated, its data is stored in the storage module 130 (for example, a memory or a storage device). The data of the object path 12 includes: the time length of the object path 12, the timestamp of the first object of the object path 12, the time shift of each member object of the object path 12 relative to the first object, the position of each member object, the size of each member object, and/or the parent object path. Therefore, in some embodiments, the object path 12 can be regarded as three-dimensional data, including time information and position information in two spatial dimensions.
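The per-path record described above might be represented as follows. This is a sketch only; all field names are hypothetical and chosen to mirror the items listed in the text.

```python
# Illustrative data record for one object path (hypothetical field names).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectPath:
    length: int                  # time length of the path (e.g. in frames)
    first_timestamp: float       # timestamp of the path's first object
    time_shifts: list = field(default_factory=list)  # per-member time shift
    positions: list = field(default_factory=list)    # per-member (x, y)
    sizes: list = field(default_factory=list)        # per-member (w, h)
    parent: Optional["ObjectPath"] = None            # parent object path
```

Together, `time_shifts` (time) and `positions` (two spatial dimensions) give the three-dimensional data the text refers to.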

圖3所示使用者介面(user interface)150可以設置視頻合成程序的參數。所述參數包括原始視頻11的開始時間T b 與結束時間T e 、合成視頻13的播放時間長度(期待長度)、物件過濾器的參數(如物件大小、物件的顏色、物件運動等等,將於後述其他實施例中詳加說明)、以及/或是用於產生合成視頻幀(frame of synthesis video)的參數。 The user interface 150 shown in FIG. 3 can set the parameters of the video synthesis procedure. The parameters include the start time T b and the end time T e of the original video 11, the playback time length (expected length) of the composite video 13, the parameters of the object filter (such as object size, object color, and object motion, which will be described in detail in other embodiments below), and/or the parameters used to generate the frames of the composite video.

圖3所示視頻合成模組140包括視頻長度估算(video length evaluation)單元141、物件路徑蒐集(object path collection)單元142、物件路徑重排(object paths rearrangement)單元143以及視頻合成(video synthesis)單元144。在一些實施例中,物件路徑蒐集單元142可以從物件路徑提取模組120所產生的所述物件路徑12中蒐集部份或全部物件路徑。在另一些實施例中,物件路徑蒐集單元142可以依據使用者介面150所輸出的開始時間T b 與結束時間T e ,而從存儲模組130內的所述物件路徑中蒐集部份或全部物件路徑。在原始視頻11的時間長度超長的情況下,或是對於監控應用(surveillance application)而言,使用者通常要檢視的是原始視頻11的片段而不是完整視頻。使用者可以利用使用者介面150去設定感興趣的時間段(例如設定開始時間T b 與結束時間T e )。物件路徑蒐集單元142可以依據使用者介面150所提供的開始時間T b 與結束時間T e 而蒐集/選擇的對應的物件路徑。假設在存儲裝置(例如存儲模組130)中的某一個候選物件路徑的 發生時間和長度分別為P t P l 。若T b P t T e ,或T b P t +P l T e ,或P t T b T e P t +P l ,則視頻合成模組140的物件路徑蒐集單元142可以選擇此候選物件路徑作為使用者感興趣的所述物件路徑。 The video synthesis module 140 shown in FIG. 3 includes a video length evaluation unit 141, an object path collection unit 142, an object path rearrangement unit 143, and a video synthesis. Unit 144. In some embodiments, the object path collection unit 142 may collect some or all of the object paths from the object path 12 generated by the object path extraction module 120. In other embodiments, the object path collecting unit 142 may collect some or all of the objects from the object path in the storage module 130 according to the start time T b and the end time T e output by the user interface 150 . path. In the case where the length of the original video 11 is too long, or for a surveillance application, the user usually views a segment of the original video 11 instead of the full video. The user can use the user interface 150 to set a time period of interest (eg, set the start time T b and the end time T e ). The object path collecting unit 142 can collect/select the corresponding object path according to the start time T b and the end time T e provided by the user interface 150. It is assumed that the occurrence time and length of a certain candidate object path in the storage device (for example, the storage module 130) are P t and P l , respectively . 
If T b ≤ P t ≤ T e , or T b ≤ P t + P l ≤ T e , or P t ≤ T b and T e ≤ P t + P l , then the object path collection unit 142 of the video synthesis module 140 may select this candidate object path as the object path of interest to the user.
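以下以一段Python程式片段示意上述三個時間段選取條件(此為輔助理解的假設性草稿,並非專利原文的一部分;函數名稱 overlaps_window 為假設):

```python
def overlaps_window(P_t, P_l, T_b, T_e):
    """判斷候選物件路徑是否落在使用者感興趣的時間段 [T_b, T_e] 內。
    P_t 為路徑發生時間,P_l 為路徑長度;三個條件對應文中所述。"""
    return (T_b <= P_t <= T_e or                 # 路徑在時間段內開始
            T_b <= P_t + P_l <= T_e or           # 路徑在時間段內結束
            (P_t <= T_b and T_e <= P_t + P_l))   # 路徑橫跨整個時間段

# 範例:時間段 [10, 20]
assert overlaps_window(12, 5, 10, 20)       # 在時間段內開始
assert overlaps_window(5, 8, 10, 20)        # 在時間段內結束
assert overlaps_window(5, 30, 10, 20)       # 橫跨整個時間段
assert not overlaps_window(25, 5, 10, 20)   # 完全落在時間段之外
```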

視頻長度估算單元141耦接至物件路徑蒐集單元142,以接收物件路徑蒐集單元142的蒐集結果。視頻長度估算單元141可以依據物件路徑蒐集單元142所蒐集的物件路徑在一個場景中不同像素處的擠迫情形,而估算建議時間長度。對於使用者來說,用以縮短播放時間的適當視頻長度是很難決定的,因為在不同時間段中原始視頻11的複雜性是變動的。因此,有必要向使用者提供建議,以便決定適當的播放長度。視頻長度估算單元141所估算的建議時間長度可以提示給使用者,以輔助該使用者決定合成視頻13的播放時間長度。 The video length estimating unit 141 is coupled to the object path collecting unit 142 to receive the collection result of the object path collecting unit 142. The video length estimating unit 141 can estimate the suggested time length according to the crowdedness at different pixels in a scene of the object paths collected by the object path collecting unit 142. For the user, it is difficult to determine an appropriate video length for shortening the play time, because the complexity of the original video 11 varies in different time periods. Therefore, a suggestion that helps the user is necessary in order to determine an appropriate play length. The suggested time length estimated by the video length estimating unit 141 can be presented to the user to assist the user in determining the play time length of the composite video 13.

於本實施例中,視頻長度估算單元141可以依據原始視頻11產生一個擠迫圖(crowdedness map),以描述所述至少一物件路徑在不同像素處的擠迫值。擠迫圖的概念為壓縮所有感興趣的物件在一個幀中。擠迫圖可以當作物件路徑的計數器。擠迫圖在每個像素C ij 的初始值為零,其中C ij 表示在幀的位置(i,j)處的擠迫值。物件路徑的定義是在時間序列(time sequence)中一組物件的邊界框。如果位置(i,j)是在某一物件路徑的邊界框中,那麼C ij =C ij +1。在計數所有有興趣的物件路徑的所有邊界框之後,擠迫圖便被創建了。 In this embodiment, the video length estimating unit 141 may generate a crowdedness map according to the original video 11 to describe the crowdedness values of the at least one object path at different pixels. The concept of the crowdedness map is to compress all objects of interest into one frame. The crowdedness map can be used as a counter for the object paths. The initial value of the crowdedness map at each pixel C ij is zero, where C ij represents the crowdedness value at the position (i,j) of the frame. An object path is defined as a set of bounding boxes of an object in a time sequence. If the position (i,j) is in a bounding box of an object path, then C ij = C ij +1. After counting all the bounding boxes of all the object paths of interest, the crowdedness map is created.
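擠迫圖的建立程序可以用下列Python草稿示意(此為輔助說明的假設性實作,非專利原文;邊界框以含端點的 (x0, y0, x1, y1) 表示為本文之假設):

```python
def build_crowdedness_map(width, height, paths):
    """依文中方式建立擠迫圖:每個像素的計數值初始為零,
    對每條物件路徑在每個幀的邊界框,框內像素的計數值加一。
    paths: 每條路徑為邊界框串列,邊界框為含端點的 (x0, y0, x1, y1)。"""
    C = [[0] * width for _ in range(height)]
    for path in paths:
        for (x0, y0, x1, y1) in path:
            for j in range(y0, y1 + 1):
                for i in range(x0, x1 + 1):
                    C[j][i] += 1          # C_ij = C_ij + 1
    return C

# 兩條路徑的邊界框都覆蓋像素 (1,1)
paths = [[(0, 0, 1, 1)], [(1, 1, 2, 2)]]
C = build_crowdedness_map(4, 4, paths)
assert C[1][1] == 2 and C[0][0] == 1 and C[3][3] == 0
```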

圖5A是依照本發明實施例顯示了一個辦公室場景的圖像示意圖。圖5B顯示了圖5A所示視頻所對應的擠迫圖。如圖5B所示,高擠迫值將出現在幀中央部位(對應於在圖5A所示辦公室裡的走道),因為所有物件路徑都穿過此走道。與此相反的是,圖6B不如圖5B那樣擁擠。圖6A是依照本發明另一實施例顯示了一個火車站月台場景的圖像示意圖。圖6B顯示了圖6A所示視頻所對應的擠迫圖。如圖6B所示,擠迫值的分佈較為均勻(對應於在圖6A所示火車站月台)。比較圖5B與圖6B可知,圖5A所示視頻的播放時間長度應要長於圖6A所示視頻。 FIG. 5A is a schematic image showing an office scene in accordance with an embodiment of the present invention. FIG. 5B shows the crowdedness map corresponding to the video shown in FIG. 5A. As shown in FIG. 5B, high crowdedness values appear in the central part of the frame (corresponding to the aisle in the office shown in FIG. 5A), because all object paths pass through the aisle. In contrast, FIG. 6B is not as crowded as FIG. 5B. FIG. 6A is a schematic image showing a train station platform scene in accordance with another embodiment of the present invention. FIG. 6B shows the crowdedness map corresponding to the video shown in FIG. 6A. As shown in FIG. 6B, the distribution of the crowdedness values is relatively uniform (corresponding to the train station platform shown in FIG. 6A). Comparing FIG. 5B with FIG. 6B, the playback time length of the video shown in FIG. 5A should be longer than that of the video shown in FIG. 6A.

於本實施例中,視頻長度估算單元141可以計算等式F n =C m /C th 與T p =F n /R f ,其中F n 表示建議幀數,C m 表示擠迫圖中該些擠迫值的關聯值,C th 表示閾值,T p 表示該建議時間長度,R f 表示該合成視頻的畫面播放速率(frame rate)。擠迫圖的最大值是有關於決定適當的播放時間長度。閾值C th 大於0且小於關聯值C m 。當關聯值C m 小於閾值C th 時,使用者可以很容易地在一個場景中分辨出不同物件。為了消除雜訊的影響,關聯值C m 與閾值C th 可以依據設計需求來決定。舉例來說(但不限於此),關聯值C m 可以是擠迫圖中所有像素的擠迫值的平均值。在另一些實施例中,關聯值C m 可以是於擠迫圖的這些擠迫值以降冪排序下,前50%擠迫值的平均值。在其他實施例中,關聯值C m 可以是前20%擠迫值的平均值,或前10%擠迫值的平均值。在一些實施例中,閾值C th 可以被設為72,以符合人類的視覺感知(visual perception)。 In this embodiment, the video length estimating unit 141 can calculate the equations F n = C m /C th and T p = F n /R f , where F n represents the number of suggested frames, C m represents an associated value of the crowdedness values in the crowdedness map, C th represents a threshold, T p represents the suggested time length, and R f represents the frame rate of the composite video. The maximum value of the crowdedness map is related to determining the appropriate play time length. The threshold C th is greater than 0 and smaller than the associated value C m . When the associated value C m is smaller than the threshold C th , the user can easily distinguish different objects in a scene. In order to eliminate the influence of noise, the associated value C m and the threshold C th can be determined according to design requirements. For example (but not limited thereto), the associated value C m may be the average of the crowdedness values of all pixels in the crowdedness map. In other embodiments, the associated value C m may be the average of the top 50% of the crowdedness values when the crowdedness values of the crowdedness map are sorted in descending order. In still other embodiments, the associated value C m may be the average of the top 20% of the crowdedness values, or the average of the top 10% of the crowdedness values. In some embodiments, the threshold C th may be set to 72 to conform to human visual perception.
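下列Python草稿示意本實施例的建議長度估算(假設性重建,非專利原文:此處假設C m 取前50%擠迫值的平均,且建議幀數取C m /C th 的無條件進位):

```python
import math

def suggest_length(crowd_values, C_th=72, R_f=30.0, top_ratio=0.5):
    """由擠迫圖的擠迫值估算建議幀數 F_n 與建議時間長度 T_p。
    C_m 取前 top_ratio 比例擠迫值的平均(文中實施例之一),
    F_n = ceil(C_m / C_th),T_p = F_n / R_f(秒)。"""
    vals = sorted(crowd_values, reverse=True)
    top = vals[:max(1, int(len(vals) * top_ratio))]
    C_m = sum(top) / len(top)
    F_n = math.ceil(C_m / C_th)
    return F_n, F_n / R_f

F_n, T_p = suggest_length([720, 360, 0, 0], C_th=72, R_f=30.0)
assert F_n == 8 and abs(T_p - 8 / 30.0) < 1e-9   # C_m = 540, 540/72 = 7.5 → 8 幀
```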

物件路徑重排單元143可以依照物件路徑蒐集單元142所選的物件路徑於原始視頻11中的出現順序,將所述物件路徑重新安排在合成視頻13中。舉例來說,圖7A與7B是依照本發明實施例說明原始視頻11與合成視頻13的示意圖。於圖7A與7B中,縱軸表示空間(物件的位置),而橫軸表示時間。在此假設原始視頻11中在不同時間出現了物件路徑P1、P2、P3與P4。物件路徑重排單元143可以依照物件路徑P1、P2、P3與P4於原始視頻11中的出現順序,將所述物件路徑P1、P2、P3與P4重新安排在合成視頻13中。由圖7A與7B可以看出,合成視頻13的時間長度TL小於原始視頻11的時間長度Tov,因此本實施例所述視頻播放方法與視頻播放裝置100可以縮短視頻播放時間。 The object path rearranging unit 143 may rearrange the object path in the synthesized video 13 in accordance with the order of appearance of the object path selected by the object path collecting unit 142 in the original video 11. For example, Figures 7A and 7B are schematic diagrams illustrating an original video 11 and a composite video 13 in accordance with an embodiment of the present invention. In Figs. 7A and 7B, the vertical axis represents space (position of objects), and the horizontal axis represents time. It is assumed here that the object paths P1, P2, P3 and P4 appear at different times in the original video 11. The object path rearranging unit 143 may rearrange the object paths P1, P2, P3, and P4 in the synthesized video 13 in accordance with the order of appearance of the object paths P1, P2, P3, and P4 in the original video 11. As can be seen from FIG. 7A and FIG. 7B, the time length T L of the composite video 13 is smaller than the time length T ov of the original video 11, so that the video playing method and the video playing device 100 of the embodiment can shorten the video playing time.

圖8是依照本發明實施例說明圖2所示步驟S240的詳細實施流程示意圖。於圖8所示實施例中,步驟S240包括子步驟S810、S244、S245與S246。在步驟S810中,物件路徑重排單元143可以選擇性地調整物件路徑蒐集單元142所選的物件路徑,以獲得至少一經調整物件路徑。舉例來說(但不限於此),本實施例所示步驟S810包括子步驟S241、S242與S243。 FIG. 8 is a schematic flow chart showing the detailed implementation of step S240 shown in FIG. 2 according to an embodiment of the invention. In the embodiment shown in FIG. 8, step S240 includes sub-steps S810, S244, S245, and S246. In step S810, the object path rearranging unit 143 may selectively adjust the object path selected by the object path collecting unit 142 to obtain at least one adjusted object path. For example, but not limited to, step S810 shown in this embodiment includes sub-steps S241, S242, and S243.

於步驟S241中,若所述物件路徑有父物件路徑,則合併所述物件路徑與其父物件路徑,作為所述經調整物件路徑。當多個物件彼此跨越時,物件路徑被劃分成幾個物件路徑。例如,圖4C所示狀況可能是兩個物件從不同位置移動到相同位置,而圖4B所示狀況可能是兩個物件從相同位置移動到不同位置。因此,物件路徑重排的第一步是合併相對物件路徑(圖8所示步驟S241),以恢復交叉跨越物件路徑的整個故事。由於每個物件路徑記錄了其父物件路徑,此資訊用於建構繼承樹結構(inheriting tree structure)以便將所有相對物件路徑合併在一起。舉例來說,物件路徑重排單元143可以在步驟S241中將圖4B所示物件路徑Pa、Pb與Pc合併為一個個體物件路徑(individual object path),在此稱為經調整物件路徑。物件路徑重排單元143亦可以在步驟S241中將圖4C所示物件路徑Pd、Pe與Pf合併為一個經調整物件路徑。另外,沒有父物件路徑的物件路徑亦可以視為一個經調整物件路徑。 In step S241, if the object path has a parent object path, the object path and its parent object path are merged as the adjusted object path. When multiple objects cross each other, an object path is divided into several object paths. For example, the condition shown in FIG. 4C may be that two objects move from different positions to the same position, while the condition shown in FIG. 4B may be that two objects move from the same position to different positions. Therefore, the first step of the object path rearrangement is to merge the related object paths (step S241 shown in FIG. 8) to recover the entire story of the crossing object paths. Since each object path records its parent object path, this information is used to construct an inheriting tree structure so as to merge all the related object paths together. For example, the object path rearranging unit 143 may merge the object paths P a , P b and P c shown in FIG. 4B into one individual object path, which is referred to herein as an adjusted object path, in step S241. The object path rearranging unit 143 may also merge the object paths P d , P e and P f shown in FIG. 4C into one adjusted object path in step S241. In addition, an object path without a parent object path can also be regarded as an adjusted object path.

物件路徑重排的下一步,是把所有個體物件路徑放置在合成視頻13中。當使用者想要在很短的時間中進行壓縮播放時,物件路徑的長度可能會超過合成視頻13的時間長度,因此圖8所示實施例使用了兩種方法來縮短物件路徑。此兩種方法是物件路徑加速(object path speedup,圖8所示步驟S242)和物件路徑分裂(object path splitting,圖8所示步驟S243)。 The next step in the rearrangement of the object path is to place all individual object paths in the composite video 13. When the user wants to perform compressed playback in a short period of time, the length of the object path may exceed the length of the composite video 13, so the embodiment shown in Fig. 8 uses two methods to shorten the object path. The two methods are object path speedup (step S242 shown in Fig. 8) and object path splitting (step S243 shown in Fig. 8).

若步驟S241所提供的經調整物件路徑的時間長度大於閾長度P th ,則物件路徑重排單元143可以在步驟S242中依一加速因數(speedup factor)S p 加快步驟S241所提供經調整物件路徑的播放速度以縮短其時間長度。閾長度P th 小於或等於合成視頻13的播放時間長度TL,而加速因數S p 為實數。本實施例並不限制閾長度P th 與加速因數S p 的實施方式。舉例來說(但不限於此),閾長度P th 可以大於或等於合成視頻13的播放時間長度TL的四分之一。在本實施例中,閾長度P th 將被設定為等於播放時間長度TL的二分之一。以下將以不同實施例說明加速因數S p 的實施方式。 If the time length of the adjusted object path provided in step S241 is greater than the threshold length P th , the object path rearranging unit 143 may speed up the playback of the adjusted object path provided in step S241 by a speedup factor S p in step S242 to shorten its time length. The threshold length P th is less than or equal to the playback time length T L of the composite video 13, and the speedup factor S p is a real number. This embodiment does not limit the implementation of the threshold length P th and the speedup factor S p . For example (but not limited thereto), the threshold length P th may be greater than or equal to one quarter of the playback time length T L of the composite video 13. In the present embodiment, the threshold length P th is set equal to one half of the playback time length T L . Implementations of the speedup factor S p will be illustrated below with different embodiments.

在一些實施例中,為加速物件路徑,需要關注的是物件路徑的時間長度P l 和合成視頻13的時間長度TL的比值。如果物件路徑的時間長度P l 是長於合成視頻13的時間長度TL的一半,則將縮短此物件路徑。物件路徑重排單元143可以在步驟S242中計算等式S p =(P l /P th ),以求得加速因數S p 。在本實施例中,閾長度P th =(TL/2)。若加速因數S p 大於最大加速值S max ,則將加速因數S p 設為最大加速值S max ,其中最大加速值S max 大於1且小於4。若加速因數S p 小於1,則將加速因數S p 設為1。因此,如果物件路徑的時間長度P l 是小於合成視頻13的時間長度TL的一半,則不縮短此物件路徑。 In some embodiments, to speed up an object path, the concern is the ratio of the time length P l of the object path to the time length T L of the composite video 13. If the time length P l of the object path is longer than half of the time length T L of the composite video 13, the object path will be shortened. The object path rearranging unit 143 may calculate the equation S p = (P l /P th ) in step S242 to obtain the speedup factor S p . In the present embodiment, the threshold length P th = (T L /2). If the speedup factor S p is greater than the maximum speedup value S max , the speedup factor S p is set to the maximum speedup value S max , where the maximum speedup value S max is greater than 1 and less than 4. If the speedup factor S p is less than 1, the speedup factor S p is set to 1. Therefore, if the time length P l of the object path is less than half of the time length T L of the composite video 13, the object path is not shortened.
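上述依路徑長度計算加速因數並夾制於[1, S max ]的規則,可以用下列Python草稿示意(假設性實作,非專利原文;此處S max 取2.0僅為示例):

```python
def speedup_factor(P_l, T_L, S_max=2.0):
    """S_p = P_l / P_th,其中 P_th = T_L / 2;
    結果夾制在 [1, S_max],S_max 為大於 1 且小於 4 的實數。"""
    P_th = T_L / 2.0
    S_p = P_l / P_th
    return min(max(S_p, 1.0), S_max)

assert speedup_factor(P_l=30, T_L=120) == 1.0    # 短於 T_L/2,不加速
assert speedup_factor(P_l=90, T_L=120) == 1.5    # 90 / 60 = 1.5
assert speedup_factor(P_l=300, T_L=120) == 2.0   # 超過上限,取 S_max
```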

由此可知,物件路徑的播放速度取決於設定的播放時間長度TL。舉例來說,於實際應用情境中,若第一物件路徑與第二物件路徑的播放時間長度均小於閾長度P th ,則第一物件路徑與第二物件路徑的播放速度可能相同於在原始視頻11中的播放速度,即第一物件路徑與第二物件路徑的播放速度可能不需要加速。若第一物件路徑與/或第二物件路徑的播放時間長度大於閾長度P th ,則視頻合成模組140可以依據閾長度P th 來調整第一物件路徑與/或第二物件路徑的播放速度,使得第一物件路徑與第二物件路徑的播放速度可能不同(或相同)。 It can be seen that the playback speed of an object path depends on the set playback time length T L . For example, in a practical application scenario, if the playing time lengths of the first object path and the second object path are both less than the threshold length P th , the playback speeds of the first object path and the second object path may be the same as the playback speed in the original video 11; that is, the playback speeds of the first object path and the second object path may not need to be sped up. If the playing time length of the first object path and/or the second object path is greater than the threshold length P th , the video synthesis module 140 may adjust the playback speed of the first object path and/or the second object path according to the threshold length P th , so that the playback speeds of the first object path and the second object path may be different (or the same).

在另一些實施例中,穿越擠迫圖的熱區(hot zone of crowdedness map)的物件路徑需要加速來減少與其他物件路徑的重疊。在此將所述「熱區」定義為:某一區域的擠迫值相對大於其他區域的擠迫值,則所述某一區域可以稱為熱區。藉由使用擠迫圖來計算物件路徑於擠迫圖中的一代表擠迫值C p 。本實施例先將所有物件路徑的邊界框投影至擠迫圖,然後在投影區域中找出擠迫圖的所述代表擠迫值C p 。本實施例假設某一物件路徑的代表擠迫值C p 是該物件路徑在擠迫圖的最大擠迫值。若所述物件路徑於擠迫圖中的代表擠迫值C p 大於或等於上限擠迫值C U ,則物件路徑重排單元143可以在步驟S242中將加速因數S p 設為最大加速值S max ,其中上限擠迫值C U 與最大加速值S max 為實數。上限擠迫值C U 與最大加速值S max 可以依照設計需求來決定。 In other embodiments, an object path passing through a hot zone of the crowdedness map needs to be sped up to reduce the overlap with other object paths. The "hot zone" is defined here as follows: if the crowdedness value of a certain area is relatively larger than the crowdedness values of other areas, the certain area can be called a hot zone. A representative crowdedness value C p of an object path in the crowdedness map is calculated by using the crowdedness map. In this embodiment, the bounding boxes of all the object paths are first projected onto the crowdedness map, and then the representative crowdedness value C p is found in the projected area. This embodiment assumes that the representative crowdedness value C p of an object path is the maximum crowdedness value of the object path in the crowdedness map. If the representative crowdedness value C p of the object path in the crowdedness map is greater than or equal to the upper-limit crowdedness value C U , the object path rearranging unit 143 may set the speedup factor S p to the maximum speedup value S max in step S242, where the upper-limit crowdedness value C U and the maximum speedup value S max are real numbers. The upper-limit crowdedness value C U and the maximum speedup value S max can be determined according to design requirements.
If the representative crowdedness value C p is less than or equal to the lower-limit crowdedness value C L , the object path rearranging unit 143 may set the speedup factor S p to 1 in step S242, where the lower-limit crowdedness value C L is a real number and is less than the upper-limit crowdedness value C U . The lower-limit crowdedness value C L can be determined according to design requirements. If the representative crowdedness value C p is greater than the lower-limit crowdedness value C L and less than the upper-limit crowdedness value C U , the object path rearranging unit 143 may set the speedup factor S p to [(C p -C L )/(C U -C L )]*(S max -1)+1 in step S242.

在其他實施例中,物件路徑重排單元143可以在步驟S242中計算第一因數Sg=(P l /P th ),以求得加速因數S p 。在本實施例中,閾長度P th =(TL/2)。若第一因數Sg大於最大加速值S max ,則物件路徑重排單元143將第一因數Sg設為最大加速值S max ,其中最大加速值S max 為大於1且小於4的實數。若第一因數Sg小於1,則物件路徑重排單元143將第一因數Sg設為1。若所述物件路徑於擠迫圖中的代表擠迫值C p 大於或等於上限擠迫值C U ,則物件路徑重排單元143將第二因數Sc設為最大加速值S max 。若代表擠迫值C p 小於或等於下限擠迫值C L ,則物件路徑重排單元143將第二因數Sc設為1。若代表擠迫值C p 大於下限擠迫值C L 且小於上限擠迫值C U ,則物件路徑重排單元143將第二因數Sc設為[(C p -C L )/(C U -C L )]*(S max -1)+1。物件路徑重排單元143可以在步驟S242中從第一因數Sg與第二因數Sc中選取大者,作為加速因數S p 。 In other embodiments, the object path rearranging unit 143 may calculate the first factor S g = (P l /P th ) in step S242 to obtain the speedup factor S p . In the present embodiment, the threshold length P th = (T L /2). If the first factor S g is greater than the maximum speedup value S max , the object path rearranging unit 143 sets the first factor S g to the maximum speedup value S max , where the maximum speedup value S max is a real number greater than 1 and less than 4. If the first factor S g is less than 1, the object path rearranging unit 143 sets the first factor S g to 1. If the representative crowdedness value C p of the object path in the crowdedness map is greater than or equal to the upper-limit crowdedness value C U , the object path rearranging unit 143 sets the second factor S c to the maximum speedup value S max . If the representative crowdedness value C p is less than or equal to the lower-limit crowdedness value C L , the object path rearranging unit 143 sets the second factor S c to 1. If the representative crowdedness value C p is greater than the lower-limit crowdedness value C L and less than the upper-limit crowdedness value C U , the object path rearranging unit 143 sets the second factor S c to [(C p -C L )/(C U -C L )]*(S max -1)+1. The object path rearranging unit 143 may select the larger of the first factor S g and the second factor S c in step S242 as the speedup factor S p .
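第一因數與第二因數的計算及取大者的規則,可以用下列Python草稿示意(假設性實作,非專利原文;各參數數值僅為示例):

```python
def combined_speedup(P_l, T_L, C_p, C_L, C_U, S_max=2.0):
    """依路徑長度求第一因數 S_g、依代表擠迫值 C_p 求第二因數 S_c,
    取兩者較大者作為加速因數 S_p。"""
    # 第一因數:S_g = P_l / P_th(P_th = T_L / 2),夾制於 [1, S_max]
    S_g = min(max(P_l / (T_L / 2.0), 1.0), S_max)
    # 第二因數:低於 C_L 取 1,高於 C_U 取 S_max,其間線性內插
    if C_p <= C_L:
        S_c = 1.0
    elif C_p >= C_U:
        S_c = S_max
    else:
        S_c = (C_p - C_L) / (C_U - C_L) * (S_max - 1.0) + 1.0
    return max(S_g, S_c)

# 路徑不長(S_g = 1),但位於熱區中段 → 由 S_c = 1.5 決定
assert combined_speedup(P_l=30, T_L=120, C_p=60, C_L=20, C_U=100) == 1.5
```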

在完成加速物件路徑(步驟S242)後,有可能還會有一些物件路徑的時間長度長於合成視頻13的播放時間長度TL。為了處理極端長的物件路徑,步驟S243可以將此極端長的物件路徑拆分為幾個短的子路徑。於步驟S243中,若經步驟S242處理後的物件路徑的時間長度P l 大於閾長度P th ,則物件路徑重排單元143可以將所述物件路徑拆分為多個子路徑,作為所述經調整物件路徑。物件路徑重排單元143可以調整這些子路徑中的第一子路徑至其他子路徑的幀偏移(frame shift),以降低這些子路徑的重疊面積。 After completing the accelerated object path (step S242), there may be some object paths longer than the playback time length T L of the composite video 13. In order to handle an extremely long object path, step S243 can split this extremely long object path into several short sub-paths. In step S243, if the time length P l of the object path processed in step S242 is greater than the threshold length P th , the object path rearranging unit 143 may split the object path into a plurality of sub paths as the adjusted Object path. The object path rearranging unit 143 can adjust the frame shift of the first sub-paths of the sub-paths to other sub-paths to reduce the overlapping area of the sub-paths.

舉例來說,圖9是依照本發明實施例說明了圖8所示步驟S243的操作過程示意圖。在此假設圖9所示物件路徑P9的時間長度大於閾長度P th 。物件路徑重排單元143可以於步驟S243中將物件路徑P9拆分為多個子路徑,例如子路徑P9_1與P9_2。在時間序列中的兩個相鄰的子路徑P9_1與P9_2,在時間空間中它們具有小的重疊(如圖9所示),也就是第一子路徑P9_1的尾端部相同於第二子路徑P9_2的頭端部。物件路徑重排單元143可以將子路徑P9_1與P9_2的時間重排,以將後發生的子路徑P9_2的發生時間提前至與子路徑P9_1相同。 For example, FIG. 9 is a schematic diagram illustrating the operation process of step S243 shown in FIG. 8 according to an embodiment of the present invention. It is assumed here that the time length of the object path P9 shown in FIG. 9 is greater than the threshold length P th . The object path rearranging unit 143 may split the object path P9 into a plurality of sub-paths, for example, sub-paths P9_1 and P9_2, in step S243. The two adjacent sub-paths P9_1 and P9_2 in the time series have a small overlap in the time space (as shown in FIG. 9); that is, the tail end of the first sub-path P9_1 is identical to the head end of the second sub-path P9_2. The object path rearranging unit 143 may rearrange the times of the sub-paths P9_1 and P9_2 to advance the occurrence time of the later sub-path P9_2 to be the same as that of the sub-path P9_1.
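將過長路徑拆分為子路徑、且相鄰子路徑頭尾保留少量重疊的做法,可以用下列Python草稿示意(假設性實作,非專利原文;重疊幀數 overlap 為假設參數):

```python
def split_path(frames, P_th, overlap=2):
    """將物件路徑(以幀串列表示)拆分為時間長度不超過 P_th 的子路徑;
    相鄰子路徑保留 overlap 個重疊幀,對應「尾端部相同於頭端部」。"""
    if len(frames) <= P_th:
        return [frames]
    subs, start = [], 0
    while start < len(frames):
        subs.append(frames[start:start + P_th])
        if start + P_th >= len(frames):
            break
        start += P_th - overlap
    return subs

subs = split_path(list(range(10)), P_th=6, overlap=2)
assert subs == [[0, 1, 2, 3, 4, 5], [4, 5, 6, 7, 8, 9]]   # 幀 4、5 重疊
```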

在將子路徑P9_1與P9_2的時間重排後,在一些情況下(例如游蕩的物件),在位置空間上這些子路徑P9_1與P9_2之間可能有大量的重疊。為了解決這個問題,時間偏移(time shift)被添加到每個子路徑。這個問題可以用公式表示成一個最低成本問題(minimum cost problem)。成本函數E(t)的定義為E(t)=Σ 0≤i<j≤N O(P i (t i ),P j (t j ))。其中,t={t 0 ,t 1 ,…,t N }表示對應於包含第一子路徑P9_1在內之各子路徑的一組幀偏移(frame shift),而t 0 =0、t 0 ≤t 1 ≤…≤t N ,N是子路徑數量。例如,第一子路徑P9_1的幀偏移為t 0 ,而子路徑P9_2的幀偏移為t 1 。P i (t i )是具有幀偏移t i 的第i個子路徑,而函數O(P x ,P y )計算兩子路徑P x 與P y 之間的重疊區域。要最小化成本函數E(t)必須滿足以下條件:R×T L >L N ,其中L N 是所有子路徑的時間範圍,而R是小於1的常數。如果R值越小,則子路徑的時間長度越短。因此,物件路徑重排單元143可以在步驟S243中調整這些子路徑中的第一子路徑P9_1至其他子路徑P9_2的幀偏移,以降低這些子路徑的重疊面積。 After rearranging the times of the sub-paths P9_1 and P9_2, in some cases (for example, a wandering object) there may be a large amount of overlap between the sub-paths P9_1 and P9_2 in the position space. To solve this problem, a time shift is added to each sub-path. This problem can be formulated as a minimum cost problem. The cost function E(t) is defined as E(t)=Σ 0≤i<j≤N O(P i (t i ),P j (t j )), where t={t 0 ,t 1 ,…,t N } represents a set of frame shifts corresponding to the sub-paths including the first sub-path P9_1, and t 0 =0, t 0 ≤t 1 ≤…≤t N , and N is the number of sub-paths. For example, the frame shift of the first sub-path P9_1 is t 0 , and the frame shift of the sub-path P9_2 is t 1 . P i (t i ) is the i-th sub-path with the frame shift t i , and the function O(P x ,P y ) calculates the overlap region between the two sub-paths P x and P y . To minimize the cost function E(t), the following condition must be satisfied: R×T L >L N , where L N is the time range of all the sub-paths and R is a constant less than 1. The smaller the value of R, the shorter the time range of the sub-paths. Therefore, the object path rearranging unit 143 can adjust the frame shifts of the first sub-path P9_1 to the other sub-paths such as P9_2 in step S243 to reduce the overlapping area of the sub-paths.
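上述最低成本問題可以用下列Python草稿示意(假設性簡化,非專利原文:此處以「重疊幀數」近似重疊區域O(P x ,P y ),並以窮舉搜尋代替實際的最佳化方法):

```python
from itertools import combinations, product

def cost(paths, shifts):
    """E(t):各子路徑套用幀偏移後,兩兩重疊量的總和。
    子路徑以「佔用幀號集合」簡化表示(假設)。"""
    shifted = [{f + s for f in p} for p, s in zip(paths, shifts)]
    return sum(len(a & b) for a, b in combinations(shifted, 2))

def best_shifts(paths, max_shift):
    """窮舉搜尋最低成本的幀偏移組合 t,其中 t_0 = 0 且偏移單調不減。"""
    best_e, best_t = None, None
    for tail in product(range(max_shift + 1), repeat=len(paths) - 1):
        t = (0,) + tail
        if any(t[i] > t[i + 1] for i in range(len(t) - 1)):
            continue   # 違反 t_0 <= t_1 <= ... <= t_N
        e = cost(paths, t)
        if best_e is None or e < best_e:
            best_e, best_t = e, t
    return best_t, best_e

# 兩條子路徑原本完全重疊;偏移第二條 3 幀即可使重疊為零
t, e = best_shifts([{0, 1, 2}, {0, 1, 2}], max_shift=3)
assert e == 0 and t == (0, 3)
```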

在完成步驟S243後,物件路徑重排單元143可以進行步驟S244。依照所述物件路徑於原始視頻11中的出現順序,物件路徑重排單元143可以在步驟S244中初始化步驟S243所提供的經調整物件路徑於合成視頻13中的時間位置。所有物件路徑先重新排列。若所述經調整物件路徑中的第一物件路徑與第二物件路徑之間在時間上有空隙,則物件路徑重排單元143將第一物件路徑與第二物件路徑中後發生者的時間提前,且使該後發生者的時間晚於第一物件路徑與第二物件路徑中先發生者的時間。物件路徑重排單元143將所述經調整物件路徑的時間偏移分別乘同一個調整值,以使所述經調整物件路徑的時間範圍被包含於合成視頻13的播放時間長度TL的範圍中。 After the step S243 is completed, the object path rearranging unit 143 can proceed to step S244. In accordance with the order of appearance of the object path in the original video 11, the object path rearranging unit 143 may initialize the time position of the adjusted object path provided in step S243 in the synthesized video 13 in step S244. All object paths are rearranged first. If there is a gap between the first object path and the second object path in the adjusted object path, the object path rearranging unit 143 advances the time of the first object path and the second object path. And the time of the subsequent occurrence is later than the time of the first object path and the first occurrence in the second object path. The object path rearranging unit 143 multiplies the time offsets of the adjusted object paths by the same adjustment value, respectively, such that the time range of the adjusted object path is included in the range of the playback time length T L of the composite video 13 .

舉例來說,圖10A、圖10B與圖10C是依照本發明實施例說明物件路徑重排單元143初始化經調整物件路徑的時間位置示意圖。圖10A繪示了物件路徑P101、P102與P103,其中物件路徑P102與P103之間在時間上有空隙。因此,請參照圖10B,物件路徑重排單元143在步驟S244中可以將物件路徑P103的時間提前,使物件路徑P102與P103之間沒有空隙。被提前的物件路徑P103的時間晚於物件路徑P102的時間。如圖10B所示,所有物件路徑的時間範圍L N 超出了合成視頻13的播放時間長度TL。物件路徑重排單元143可以依據物件路徑P102的幀偏移 FS2、物件路徑P102的時間長度與合成視頻13的播放時間長度TL,而求得將物件路徑P102的調整值SC2(小於1的實數)。其中,當幀偏移FS2乘以調整值SC2後,可以使物件路徑P102被包含於合成視頻13的播放時間長度TL的範圍中。物件路徑重排單元143還可以依據物件路徑P103的幀偏移FS3、物件路徑P103的時間長度與合成視頻13的播放時間長度TL,而求得將物件路徑P103的調整值SC3(小於1的實數)。其中,當幀偏移FS3乘以調整值SC3後,可以使物件路徑P103被包含於合成視頻13的播放時間長度TL的範圍中。接下來,物件路徑重排單元143可以從這些調整值(例如SC2與SC3)中選擇最小者,作為調整值SC。物件路徑重排單元143在步驟S244中將所述物件路徑P102與P103的時間偏移FS2與FS3分別乘同一個調整值SC,以使物件路徑P101、P102與P103的時間範圍L N 被包含於合成視頻13的播放時間長度TL的範圍中,如圖10C所示。由於所有物件路徑的時間偏移都乘以同一個調整值,因此所述物件路徑於原始視頻11中的出現順序可以被保留。 For example, FIG. 10A, FIG. 10B and FIG. 10C are schematic diagrams illustrating the time position of the object path rearranging unit 143 initializing the adjusted object path according to an embodiment of the present invention. FIG. 10A illustrates object paths P101, P102, and P103 with time gaps between object paths P102 and P103. Therefore, referring to FIG. 10B, the object path rearranging unit 143 can advance the time of the object path P103 in step S244 so that there is no gap between the object paths P102 and P103. The time of the object path P103 advanced is later than the time of the object path P102. As shown in FIG. 10B, the time range L N of all object paths exceeds the playback time length T L of the composite video 13. The object path rearranging unit 143 can determine the adjustment value SC2 of the object path P102 according to the frame offset FS2 of the object path P102, the length of the object path P102, and the playback time length T L of the synthesized video 13 (a real number less than 1) ). Wherein, after the frame offset FS2 is multiplied by the adjustment value SC2, the object path P102 can be included in the range of the playback time length T L of the synthesized video 13. 
The object path rearranging unit 143 can also determine the adjustment value SC3 (a real number less than 1) of the object path P103 according to the frame offset FS3 of the object path P103, the time length of the object path P103, and the playback time length T L of the synthesized video 13. After the frame offset FS3 is multiplied by the adjustment value SC3, the object path P103 can be included in the range of the playback time length T L of the synthesized video 13. Next, the object path rearranging unit 143 may select the smallest one of these adjustment values (for example, SC2 and SC3) as the adjustment value SC. The object path rearranging unit 143 multiplies the time offsets FS2 and FS3 of the object paths P102 and P103 by the same adjustment value SC, respectively, in step S244, so that the time range L N of the object paths P101, P102, and P103 is included in the range of the playback time length T L of the composite video 13, as shown in FIG. 10C. Since the time offsets of all the object paths are multiplied by the same adjustment value, the order of appearance of the object paths in the original video 11 can be preserved.
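初始化時間位置時「求各路徑調整值、取最小者、再同乘所有幀偏移」的流程,可以用下列Python草稿示意(假設性實作,非專利原文;調整值公式 SC_i = (T_L - 長度)/幀偏移 為依文意的假設):

```python
def fit_into_length(paths, T_L):
    """paths 為 (幀偏移, 時間長度) 串列。對每條超出 T_L 的路徑求
    調整值 SC_i,取最小者 SC,再將所有幀偏移乘上 SC,
    使所有路徑落在播放時間長度 T_L 內且保留出現順序。"""
    scales = [1.0]
    for shift, length in paths:
        if shift > 0 and shift + length > T_L:
            scales.append((T_L - length) / shift)   # 假設性的 SC_i 公式
    SC = min(scales)
    return [(shift * SC, length) for shift, length in paths]

fitted = fit_into_length([(0, 40), (50, 40), (80, 40)], T_L=100)
assert all(s + l <= 100 for s, l in fitted)   # 全部落在 T_L 內
assert fitted[1][0] < fitted[2][0]            # 出現順序保留
```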

如果步驟S245判斷還有物件路徑的時間長度大於閾長度P th ,或是所有物件路徑的時間範圍L N 超出了合成視頻13的播放時間長度TL,則再一次進行步驟S242、S243、S244和S245。 If it is determined in step S245 that the time length of an object path is still greater than the threshold length P th , or that the time range L N of all the object paths exceeds the playback time length T L of the synthesized video 13, steps S242, S243, S244, and S245 are performed again.

物件路徑重排的最後一步是優化物件路徑於時間域中的位置(圖8所示步驟S246)。依照步驟S244所提供的物件路徑於合成視頻13中的重疊情形,物件路徑重排單元143於步驟S246中調整所述物件路徑於合成視頻13中的時間位置。例如,物件路徑重排單元143可以調整所述經調整物件路徑中的第一物件路徑與其他物件路徑之間的幀偏移,以降低所述經調整物件路徑的彼此重疊面積。優化程序的目的是為了獲得物件路徑在時間域中的最佳位置。多個物件路徑在時間域中的最佳位置,表示這些物件路徑具有最小重疊區域。再次應用所述成本函數E(t)=Σ 0≤i<j≤x O(P i (t i ),P j (t j )),其中物件路徑重排的結果是對應於物件路徑的一組幀偏移S={S 0 ,S 1 ,…,S x },其中S 0 =0、S 0 ≤S 1 ≤…≤S x ,而x是物件路徑的數量。P i (t i )是具有幀偏移t i 的第i個物件路徑,而函數O(P x ,P y )計算兩物件路徑P x 與P y 之間的重疊區域。 The final step of the object path rearrangement is to optimize the positions of the object paths in the time domain (step S246 shown in FIG. 8). In accordance with the overlapping situation of the object paths in the composite video 13 provided in step S244, the object path rearranging unit 143 adjusts the time positions of the object paths in the composite video 13 in step S246. For example, the object path rearranging unit 143 can adjust the frame shifts between the first object path and the other object paths in the adjusted object paths to reduce the overlapping areas of the adjusted object paths. The purpose of the optimization procedure is to obtain the best positions of the object paths in the time domain. The best positions of multiple object paths in the time domain mean that these object paths have the minimum overlap region. The cost function E(t)=Σ 0≤i<j≤x O(P i (t i ),P j (t j )) is applied again, and the result of the object path rearrangement is a set of frame shifts S={S 0 ,S 1 ,…,S x } corresponding to the object paths, where S 0 =0, S 0 ≤S 1 ≤…≤S x , and x is the number of object paths. P i (t i ) is the i-th object path with the frame shift t i , and the function O(P x ,P y ) calculates the overlap region between the two object paths P x and P y .

請參照圖3,視頻合成單元144耦接至物件路徑重排單元143,以接收物件路徑重排單元143的重排結果。視頻合成單元144可以將物件路徑重排單元143所提供的物件路徑與存儲模組130所提供的背景圖像合成為合成視頻13。當合成視頻13的一個幀包含多個物件時,這些物件可以來自於原始視頻11的不同幀。例如,考慮兩個物件路徑P1和P2,假定Tb 1和Tb 2各自是物件路徑P1和P2在原始視頻11的開始時間,而且經過物件路徑重排單元143重排後的對應幀偏移分別是S1和S2。如果S1+k=S2+m,則物件路徑P1的第k個幀和物件路徑P2的第m個幀將顯示在合成視頻13的同一幀中。這兩個物件的時間戳記是Tb 1+k和Tb 2+m。Tb 1+k和Tb 2+m不是相同的,除非Tb 1-Tb 2=S1-S2。 Referring to FIG. 3, the video synthesizing unit 144 is coupled to the object path rearranging unit 143 to receive the rearrangement result of the object path rearranging unit 143. The video synthesizing unit 144 can synthesize the object paths provided by the object path rearranging unit 143 and the background images provided by the storage module 130 into the composite video 13. When a frame of the composite video 13 contains multiple objects, these objects may come from different frames of the original video 11. For example, consider two object paths P 1 and P 2 , and assume that T b 1 and T b 2 are the start times of the object paths P 1 and P 2 in the original video 11, and that the corresponding frame shifts after rearrangement by the object path rearranging unit 143 are S 1 and S 2 , respectively. If S 1 +k = S 2 +m, the k-th frame of the object path P 1 and the m-th frame of the object path P 2 will be displayed in the same frame of the composite video 13. The time stamps of these two objects are T b 1 +k and T b 2 +m, which are not the same unless T b 1 -T b 2 = S 1 -S 2 .

為了合成視頻,第一步是確定哪些物件是在視頻幀中。假設合成視頻13在使用者預設的時間長度TL中有N個幀,則物件路徑P i 的第k個幀的物件是合成視頻13的第(k+S i )個幀的成員(member),其中S i 是物件路徑P i 的幀偏移,而k+S i ≤N。視頻播放裝置100可以將所有感興趣的物件合成到合成視頻13中。 To synthesize the video, the first step is to determine which objects are in a video frame. Assuming that the composite video 13 has N frames in the user-preset time length T L , the object of the k-th frame of the object path P i is a member of the (k+S i )-th frame of the composite video 13, where S i is the frame shift of the object path P i and k+S i ≤ N. The video playback device 100 can synthesize all of the objects of interest into the composite video 13.
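「物件路徑P i 的第k個幀是合成視頻第(k+S i )個幀的成員」的對應關係,可以用下列Python草稿示意(假設性實作,非專利原文):

```python
def frames_of_synthesis(paths, N):
    """paths[i] 為 (幀偏移 S_i, 該路徑各幀的物件串列)。
    路徑 P_i 的第 k 個幀的物件成為合成視頻第 (k + S_i) 個幀的成員,
    且需滿足 k + S_i <= N。"""
    synth = [[] for _ in range(N + 1)]
    for i, (S_i, frames) in enumerate(paths):
        for k, obj in enumerate(frames):
            if k + S_i <= N:
                synth[k + S_i].append((i, obj))
    return synth

synth = frames_of_synthesis([(0, ["a0", "a1"]), (1, ["b0", "b1"])], N=2)
assert synth[1] == [(0, "a1"), (1, "b0")]   # 來自原始視頻不同幀的物件同幀顯示
```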

背景圖像是視頻合成的另一個關鍵因素。視頻合成程序將物件粘貼在背景圖像上,以產生視頻幀。在本實施例中,背景圖像的選擇是基於幀中物件的時間戳記。對於包含n個物件O={O 0 ,O 1 ,...,O n-1 }的視頻幀,這些物件的時間戳記為T={T 0 ,T 1 ,...,T n-1 }。視頻合成單元144可以從存儲模組130中的多個背景圖像中選擇其時間戳記最接近於時間戳記值T bg 的背景圖像,以便進行視頻幀的合成。其中,時間戳記值T bg 是從視頻幀中各物件的時間戳記計算獲得。舉例來說(但不限於此),時間戳記值T bg 等於一個視頻幀中各物件的時間戳記的平均值,即T bg =(T 0 +T 1 +...+T n-1 )/n。在其他應用情況下,時間戳記值T bg 可以是這些物件的時間戳記T的中間值,或在時間戳記T的其中一值。例如,如果物件O i 是一個重要的物件(significant object),則視頻合成單元144可以選擇時間戳記最接近T i 的背景圖像。 The background image is another key factor of the video synthesis. The video synthesis procedure pastes the objects onto the background image to produce a video frame. In this embodiment, the selection of the background image is based on the time stamps of the objects in the frame. For a video frame containing n objects O={O 0 ,O 1 ,...,O n-1 }, the time stamps of these objects are T={T 0 ,T 1 ,...,T n-1 }. The video synthesizing unit 144 may select, from the plurality of background images in the storage module 130, the background image whose time stamp is closest to the time stamp value T bg , so as to synthesize the video frame. The time stamp value T bg is calculated from the time stamps of the objects in the video frame. For example (but not limited thereto), the time stamp value T bg is equal to the average of the time stamps of the objects in a video frame, i.e., T bg =(T 0 +T 1 +...+T n-1 )/n. In other applications, the time stamp value T bg may be the median of the time stamps T of these objects, or one of the values of the time stamps T. For example, if the object O i is a significant object, the video synthesizing unit 144 may select a background image whose time stamp is closest to T i .

完成背景圖像的選擇後,視頻合成單元144可以進行物件和背景混合(object and background blending)。本實施例並不限定物件和背景混合的方式。舉例來說,在一些實施例中,視頻合成單元144可以利用高斯混合方法(Gaussian blending method)混合背景圖像與所述物件路徑的物件圖像。物件圖像(邊界框區域)可能包含了原始視頻11的完整物件和部分背景。因此,如果直接將物件圖像粘貼在背景圖像上,則物件圖像的邊界可能變得非常明顯/不自然。為了解決這一問題,本實施例可以採用高斯混合方法混合背景圖像與物件圖像。施用於物件圖像與背景圖像的高斯混合定義為F’ij=wij*Fij+(1-wij)*Bij,其中Fij是在位置(i,j)處物件圖像的像素,Bij是在位置(i,j)處背景圖像的像素,F’ij是在應用高斯混合之後物件圖像的像素,wij是高斯混合的權重值(weight value)。當th≤|Fij-Bij|時,wij=1;否則wij=e^(-(|Fij-Bij|-th)^2/b)。其中,th是用以判斷背景相似度的閾值(threshold value),而b是高斯函數的常數值(constant value for Gaussian function)。 After the background image has been selected, the video synthesizing unit 144 can perform object and background blending. This embodiment does not limit how the object and the background are blended. For example, in some embodiments, the video synthesizing unit 144 can blend the background image with the object images of the object path using a Gaussian blending method. An object image (bounding-box region) may contain the complete object plus part of the background of the original video 11. Therefore, if the object image is pasted directly onto the background image, the boundary of the object image may become very noticeable/unnatural. To solve this problem, this embodiment can blend the background image and the object image with the Gaussian blending method, defined as F’ij=wij*Fij+(1-wij)*Bij, where Fij is the pixel of the object image at position (i,j), Bij is the pixel of the background image at position (i,j), F’ij is the pixel after Gaussian blending is applied, and wij is the weight value of the Gaussian blend. When th≤|Fij-Bij|, wij=1; otherwise wij=e^(-(|Fij-Bij|-th)^2/b), where th is the threshold value used to judge background similarity and b is the constant value for the Gaussian function.
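A per-pixel sketch of the blending rule above follows. Note that the exact Gaussian fall-off used here, wij = e^(-(|Fij-Bij|-th)^2/b), is one plausible reading of the weight described in the text, and the threshold and constant values are illustrative only:

```python
import math

def gaussian_blend_pixel(f, b_pix, th=30.0, b_const=200.0):
    """Blend one object pixel f with one background pixel b_pix.

    Implements F'_ij = w_ij * F_ij + (1 - w_ij) * B_ij, with w_ij = 1
    when |F_ij - B_ij| >= th (the pixel clearly differs from the
    background, so it belongs to the object) and a Gaussian fall-off
    toward the background as the difference shrinks below th.
    th and b_const stand in for the threshold and the Gaussian
    constant; their values here are assumptions for illustration.
    """
    diff = abs(f - b_pix)
    if diff >= th:
        w = 1.0
    else:
        w = math.exp(-((diff - th) ** 2) / b_const)
    return w * f + (1.0 - w) * b_pix
```

A pixel far from the background (e.g. f=200 against b=100) keeps its object value, while a pixel close to the background is pulled toward the background, which softens the bounding-box boundary.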

在另一些實施例中,視頻合成單元144可以利用Alpha混合方法以半透明方式混合所述物件路徑中相互重疊的物件圖像。當合成視頻13的幀包含多個物件時,物件圖像之間有可能具有重疊區域。讓一個物件重疊覆蓋另一物件,將丟失被覆蓋物件的資訊。Alpha混合方法用於以半透明顯示重疊物件,以避免資訊丟失問題。Alpha混合方法以公式表示為F’ij=(1/n)*Σk=0..n-1 Fkij,其中F’ij是在應用Alpha混合後位於位置(i,j)的像素(pixel),n表示在位置(i,j)處重疊物件的數量;而Fkij是第k個物件在重疊區域中位置(i,j)處的像素。 In other embodiments, the video synthesizing unit 144 may use an alpha blending method to blend object images that overlap each other in the object paths in a translucent manner. When a frame of the composite video 13 contains multiple objects, their object images may have overlapping regions. Letting one object completely cover another would lose the covered object's information. The alpha blending method displays overlapping objects translucently to avoid this information-loss problem. The alpha blending method is expressed as F’ij=(1/n)*Σk=0..n-1 Fkij, where F’ij is the pixel at position (i,j) after alpha blending is applied, n is the number of overlapping objects at position (i,j), and Fkij is the pixel of the k-th object at position (i,j) in the overlapping region.
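The translucent overlap rule can be sketched as follows. Equal 1/n weights for the n overlapping object pixels are assumed here, which is one natural reading of the translucent display described above:

```python
def alpha_blend(overlapping_pixels):
    """Return F'_ij as the average of the n overlapping object pixels.

    overlapping_pixels: the pixel values F^k_ij of the k = 0..n-1
    objects that overlap at position (i, j).  Equal weights give each
    object a 1/n share of the output pixel, so no object's information
    is fully hidden by another.
    """
    n = len(overlapping_pixels)
    return sum(overlapping_pixels) / n

# Three hypothetical grayscale object pixels overlapping at one position.
blended = alpha_blend([90, 120, 150])
```

Each of the three objects contributes one third of the blended pixel, so all three remain partially visible in the overlap region.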

在其他實施例中,視頻合成單元144可以利用z順序(z-ordering)來產生合成視頻13。Z順序的概念是近的物體(near object)將遮蓋遠的物體(distant object)。這種方法需要在二維圖像中定義物件的z軸距離(z-distance)。視頻合成單元144可以計算所述物件路徑中多個物件的z軸距離。根據這些物件的z軸距離的降冪順序,視頻合成單元144可以將這些物件依序粘貼在背景圖像上。也就是說,具有最大z軸距離的物件最先被粘貼在背景圖像上,而具有最小z軸距離的物件被最後粘貼在背景圖像上。 In other embodiments, the video synthesizing unit 144 may use z-ordering to generate the composite video 13. The concept of z-ordering is that a near object covers a distant object. This method requires defining the z-axis distance (z-distance) of each object in the two-dimensional image. The video synthesizing unit 144 can calculate the z-axis distances of the objects in the object paths. In descending order of z-axis distance, the video synthesizing unit 144 then pastes the objects onto the background image in sequence. That is, the object with the largest z-axis distance is pasted onto the background image first, and the object with the smallest z-axis distance is pasted last.

對於從一般相機獲得的圖像而言,物件的z軸距離反比於物件的Y座標的最大值。舉例來說,圖11是依照本發明實施例說明一般相機獲得的圖像的示意圖。於圖11中,縱軸表示圖像幀1100的Y軸,而橫軸表示圖像幀1100的X軸,其中此X-Y座標系的原點位於圖像幀1100的左上角。物件越靠近攝影模組110(即z軸距離越小),則物件的Y座標的最大值(即物件圖像的下緣的Y座標)越大。以圖11為例,物件1101的Y座標的最大值大於物件1102的Y座標的最大值,而物件1102的Y座標的最大值大於物件1103的Y座標的最大值。因此可知,物件1101的z軸距離小於物件1102的z軸距離,而物件1102的z軸距離小於物件1103的z軸距離。在計算物件的z軸距離後,這些物件1101、1102與1103根據它們的z軸距離而被排序。根據這些物件1101、1102與1103的z軸距離的降冪順序,視頻合成單元144可以先將物件1103粘貼在背景圖像上,然後將物件1102粘貼在背景圖像上,接下來才將物件1101粘貼在背景圖像上。 For an image obtained from a general camera, the z-axis distance of an object is inversely proportional to the maximum Y coordinate of the object. For example, FIG. 11 is a schematic diagram illustrating an image obtained by a general camera in accordance with an embodiment of the present invention. In FIG. 11, the vertical axis represents the Y axis of the image frame 1100 and the horizontal axis represents its X axis, with the origin of this X-Y coordinate system at the upper-left corner of the image frame 1100. The closer an object is to the photography module 110 (i.e., the smaller its z-axis distance), the larger the maximum Y coordinate of the object (i.e., the Y coordinate of the lower edge of the object image). Taking FIG. 11 as an example, the maximum Y coordinate of the object 1101 is greater than that of the object 1102, which in turn is greater than that of the object 1103. It follows that the z-axis distance of the object 1101 is smaller than that of the object 1102, and the z-axis distance of the object 1102 is smaller than that of the object 1103. After the z-axis distances are calculated, the objects 1101, 1102, and 1103 are sorted by their z-axis distances. In descending order of z-axis distance, the video synthesizing unit 144 can first paste the object 1103 onto the background image, then the object 1102, and finally the object 1101.

對於從魚眼相機獲得的圖像而言,物件的z軸距離正比於從物件到圖像中心(image centroid)的最小距離。舉例來說,圖12是依照本發明實施例說明魚眼相機獲得的圖像的示意圖。於圖12中,魚眼圖像幀1200為圓形,其半徑為r。物件越靠近攝影模組110(即z軸距離越小),則物件到圖像中心1210的最小距離越小。以圖12為例,物件1201到圖像中心1210的最小距離小於物件1202到圖像中心1210的最小距離,而物件1202到圖像中心1210的最小距離小於物件1203到圖像中心1210的最小距離。因此可知,物件1201的z軸距離小於物件1202的z軸距離,而物件1202的z軸距離小於物件1203的z軸距離。在計算物件的z軸距離後,這些物件1201、1202與1203根據它們的z軸距離而被排序。根據這些物件1201、1202與1203的z軸距離的降冪順序,視頻合成單元144可以先將物件1203粘貼在背景圖像上,然後將物件1202粘貼在背景圖像上,接下來才將物件1201粘貼在背景圖像上。 For an image obtained from a fisheye camera, the z-axis distance of an object is proportional to the minimum distance from the object to the image centroid. For example, FIG. 12 is a schematic diagram illustrating an image obtained by a fisheye camera in accordance with an embodiment of the present invention. In FIG. 12, the fisheye image frame 1200 is circular with radius r. The closer an object is to the photography module 110 (i.e., the smaller its z-axis distance), the smaller the minimum distance from the object to the image center 1210. Taking FIG. 12 as an example, the minimum distance from the object 1201 to the image center 1210 is smaller than that from the object 1202, which in turn is smaller than that from the object 1203. It follows that the z-axis distance of the object 1201 is smaller than that of the object 1202, and the z-axis distance of the object 1202 is smaller than that of the object 1203. After the z-axis distances are calculated, the objects 1201, 1202, and 1203 are sorted by their z-axis distances. In descending order of z-axis distance, the video synthesizing unit 144 can first paste the object 1203 onto the background image, then the object 1202, and finally the object 1201.
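Combining the two camera models above, a hypothetical sketch of the z-ordering paste follows. The general-camera proxy is the maximum Y coordinate (larger Y means nearer), the fisheye proxy is the minimum distance to the image center (smaller distance means nearer), and objects are sorted in descending z-distance so near objects are pasted last, on top. The representation of an object as a list of (x, y) points is an assumption for illustration:

```python
import math

def z_distance(obj, camera="general", center=(0.0, 0.0)):
    """Relative z-axis distance of an object in a 2-D frame.

    For a general camera the z-distance is inversely proportional to
    the object's maximum Y coordinate; for a fisheye camera it is
    proportional to the minimum distance from the object to the image
    center.  Only the ordering matters here, so proportionality
    constants are dropped.  obj is a list of (x, y) points.
    """
    if camera == "general":
        # Larger max-Y means nearer, so negate to get a smaller z-distance.
        return -max(y for _, y in obj)
    cx, cy = center
    return min(math.hypot(x - cx, y - cy) for x, y in obj)

def paste_order(objects, camera="general", center=(0.0, 0.0)):
    """Objects sorted by descending z-distance: farthest pasted first."""
    return sorted(objects,
                  key=lambda o: z_distance(o, camera, center),
                  reverse=True)

# Three one-point "objects" seen by a general camera at y = 10, 50, 90.
far, mid, near = [(0, 10)], [(0, 50)], [(0, 90)]
order = paste_order([near, far, mid])  # expect far, then mid, then near
```

Pasting in this order reproduces the behavior described for FIG. 11 and FIG. 12: the object with the largest z-distance goes down first and the nearest object ends up on top.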

圖13是依照本發明另一實施例說明一種視頻播放方法的流程示意圖。步驟S1310提供原始視頻,其中該原始視頻是由攝影模組拍攝場景而獲得。於步驟S1320中,由物件路徑提取模組從原始視頻中提取至少一物件路徑與一背景圖像。圖13所示步驟S1310與步驟S1320可以參照圖2所示步驟S210與步驟S220的相關說明而類推。步驟S1330計算所述物件路徑中多個物件的z軸距離。根據該些物件的z軸距離的降冪順序,步驟S1340將該些物件依序粘貼在該背景圖像上,以將所述物件路徑合成至合成視頻中。其中,該合成視頻的時間長度小於該原始視頻的時間長度。步驟S1340所述將所述物件路徑合成至合成視頻,可以參照圖2所示步驟S240的相關說明而類推,故不再贅述。 FIG. 13 is a flowchart illustrating a video playback method according to another embodiment of the present invention. Step S1310 provides an original video, which is obtained by a photography module capturing a scene. In step S1320, an object path extraction module extracts at least one object path and a background image from the original video. Steps S1310 and S1320 of FIG. 13 can be understood by analogy with the descriptions of steps S210 and S220 of FIG. 2. Step S1330 calculates the z-axis distances of multiple objects in the object path. In step S1340, the objects are pasted onto the background image in descending order of their z-axis distances, so as to synthesize the object path into a composite video, where the time length of the composite video is less than that of the original video. The synthesis of the object path into the composite video in step S1340 can be understood by analogy with the description of step S240 of FIG. 2 and is therefore not repeated here.

在一些實施例中,該些物件的z軸距離反比於該些物件的y座標的最大值。在另一些實施例中,該些物件的z軸距離正比於從該些物件到圖像中心的最小距離。 In some embodiments, the z-axis distances of the objects are inversely proportional to the maximum of the y-coordinates of the objects. In other embodiments, the z-axis distances of the objects are proportional to the minimum distance from the objects to the center of the image.

圖14是依照本發明另一實施例說明圖1所示視頻合成模組140的電路方塊示意圖。圖14所示實施例可以參照圖1至圖13的相關說明而類推之,故不再贅述。於圖14所示實施例中,視頻合成模組140還包括物件篩選器145。物件篩選器145耦接至物件路徑蒐集單元142,以接收物件路徑蒐集單元142的蒐集結果。圖14所示使用者介面150還可以設置物件屬性的參數。當使用者想要利用物件的屬性來搜索物件時,需應用物件篩選器145。其中,依據不同的應用需求,物件屬性可能包括大小、顏色、紋理(texture)、材質、臉、運動方向、其他物理屬性、或行為。物件篩選器145可以依據物件屬性,篩選物件路徑蒐集單元142所提供的物件路徑。物件篩選器145檢查物件路徑蒐集單元142所提供的每個物件路徑,並選擇符合物件屬性的物件路徑。所選的物件路徑可能是完整的物件路徑或物件路徑的一部分。物件篩選器145所選出的物件路徑將提供給物件路徑重排單元143。物件路徑重排單元143可以依照物件篩選器145所選的物件路徑於原始視頻11中的出現順序,將所述物件路徑重新安排在合成視頻13中。圖14所示物件路徑重排單元143以及視頻合成單元144可以參照圖3的相關說明而類推之,故不再贅述。 FIG. 14 is a circuit block diagram of the video synthesis module 140 of FIG. 1 according to another embodiment of the present invention. The embodiment of FIG. 14 can be understood by analogy with the descriptions of FIG. 1 through FIG. 13 and is not repeated here. In the embodiment of FIG. 14, the video synthesis module 140 further includes an object filter 145, which is coupled to the object path collecting unit 142 to receive its collection results. The user interface 150 of FIG. 14 can also set parameters of object attributes. The object filter 145 is applied when the user wants to search for objects by their attributes. Depending on the application, object attributes may include size, color, texture, material, face, motion direction, other physical attributes, or behavior. The object filter 145 can filter the object paths provided by the object path collecting unit 142 according to the object attributes: it checks each object path provided by the object path collecting unit 142 and selects the object paths that match the object attributes. A selected object path may be a complete object path or a part of an object path. The object paths selected by the object filter 145 are provided to the object path rearranging unit 143, which can rearrange those object paths in the composite video 13 according to their order of appearance in the original video 11. The object path rearranging unit 143 and the video synthesizing unit 144 of FIG. 14 can be understood by analogy with the description of FIG. 3 and are not repeated here.

綜上所述,本發明諸實施例所述視頻播放方法與視頻播放裝置可以依據固定設置或由使用者動態決定的播放時間長度,來決定合成視頻的時間長度。所述視頻播放方法與視頻播放裝置可以從原始視頻中提取至少一物件路徑,以及選擇性地調整所述物件路徑,以將所述物件路徑合成至合成視頻中。因此,所述視頻播放方法與視頻播放裝置可以在預設的播放時間長度中顯示感興趣的所有物件。所述視頻播放方法與視頻播放裝置可以縮短視頻播放時間,即合成視頻的時間長度小於原始視頻的時間長度。 In summary, the video playback method and video playback device of the embodiments of the present invention can determine the time length of the composite video according to a fixed setting or a playback time length dynamically decided by the user. The video playback method and device can extract at least one object path from the original video and selectively adjust the object path so as to synthesize it into the composite video. Therefore, the video playback method and device can display all objects of interest within a preset playback time length, shortening the video playback time: the time length of the composite video is less than that of the original video.

雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed in the above embodiments, it is not intended to limit the present invention, and any one of ordinary skill in the art can make some changes and refinements without departing from the spirit and scope of the present invention. The scope of the invention is defined by the scope of the appended claims.

S210~S250‧‧‧步驟 S210~S250‧‧‧Steps

Claims (13)

一種視頻播放方法,包括:提供一原始視頻,其中該原始視頻是由一攝影模組拍攝一場景而獲得;提供一播放時間長度,以決定一合成視頻的時間長度,其中該合成視頻的時間長度小於該原始視頻的時間長度;從該原始視頻中提取至少一物件路徑;以及依據該播放時間長度選擇性地調整所述至少一物件路徑,以將所述至少一物件路徑合成至該合成視頻中。 A video playing method includes: providing an original video, wherein the original video is obtained by capturing a scene by a photography module; providing a playback time length to determine a time length of a composite video, wherein the time length of the composite video is less than the time length of the original video; extracting at least one object path from the original video; and selectively adjusting the at least one object path according to the playback time length to synthesize the at least one object path into the composite video. 如申請專利範圍第1項所述的視頻播放方法,其中所述至少一物件路徑包括一第一物件路徑與一第二物件路徑,而於該合成視頻中該第一物件路徑的播放速度不同於該第二物件路徑的播放速度。 The video playing method of claim 1, wherein the at least one object path comprises a first object path and a second object path, and in the composite video the playback speed of the first object path is different from the playback speed of the second object path. 如申請專利範圍第1項所述的視頻播放方法,其中所述至少一物件路徑包括一第一物件路徑與一第二物件路徑,該第一物件路徑的一第一物件與該第二物件路徑的一第二物件出現於該原始視頻中的時間不重疊,而該第一物件與該第二物件出現於該合成視頻中的時間相重疊。 The video playing method of claim 1, wherein the at least one object path comprises a first object path and a second object path, the time at which a first object of the first object path and a second object of the second object path appear in the original video does not overlap, and the time at which the first object and the second object appear in the composite video overlaps. 如申請專利範圍第1項所述的視頻播放方法,其中所述至少一物件路徑於該合成視頻中的空間位置相同於所述至少一物件路徑於該原始視頻中的空間位置。 The video playing method of claim 1, wherein the spatial position of the at least one object path in the composite video is the same as the spatial position of the at least one object path in the original video.
如申請專利範圍第1項所述的視頻播放方法,其中所述從該原始視頻中提取所述至少一物件路徑之步驟包括:進行物件檢測和背景提取程序,以從該原始視頻中提取至少一物件與至少一背景圖像;依據在該原始視頻中一目前幀的所述至少一物件與一先前幀的所述至少一物件的關係,創建所述至少一物件路徑;以及將所述至少一背景圖像與所述至少一物件路徑存儲在一存儲裝置。 The video playing method of claim 1, wherein the step of extracting the at least one object path from the original video comprises: performing an object detection and background extraction process to extract at least one object and at least one background image from the original video; creating the at least one object path according to the relationship between the at least one object of a current frame and the at least one object of a previous frame in the original video; and storing the at least one background image and the at least one object path in a storage device. 如申請專利範圍第5項所述的視頻播放方法,其中所述創建所述至少一物件路徑之步驟包括:若該目前幀的該物件沒有在該先前幀的一父物件,或在該目前幀中的該物件與其他物件共有該父物件,或該目前幀的該物件擁有多個父物件,則創建一新物件路徑,其中該目前幀的該物件是該新物件路徑的第一個物件;若該目前幀的該物件具有唯一的該父物件,且該目前幀的該物件為該父物件的唯一子物件,則將該目前幀的該物件添加到該父物件所屬的一現有物件路徑;以及當所述至少一物件路徑的最後一個物件沒有子物件時,或當所述至少一物件路徑的最後一個物件擁有不止一個子物件時,或當所述至少一物件路徑的最後一個物件和其他物件路徑共有子物件時,所述至少一物件路徑為結束。 The video playing method of claim 5, wherein the step of creating the at least one object path comprises: if the object of the current frame has no parent object in the previous frame, or the object of the current frame shares a parent object with other objects, or the object of the current frame has multiple parent objects, creating a new object path, wherein the object of the current frame is the first object of the new object path; if the object of the current frame has a unique parent object and is the only child object of that parent object, adding the object of the current frame to the existing object path to which the parent object belongs; and when the last object of the at least one object path has no child object, or when the last object of the at least one object path has more than one child object, or when the last object of the at least one object path and another
object path shares a child object, the at least one object path ends. 如申請專利範圍第5項所述的視頻播放方法,其中所述至少一物件路徑的資料包括:所述至少一物件路徑的時間長度、所述至少一物件路徑的一第一個物件的時間戳記、所述至少一物件路徑的每個成員物件對應於該第一個物件的時間偏移、每個成員物件的位置、每個成員物件的大小、或父物件路徑。 The video playing method of claim 5, wherein the data of the at least one object path comprises: the time length of the at least one object path, the timestamp of a first object of the at least one object path, the time offset of each member object of the at least one object path relative to the first object, the position of each member object, the size of each member object, or a parent object path. 如申請專利範圍第5項所述的視頻播放方法,更包括:提供在該原始視頻中的一開始時間T b 與一結束時間T e ;以及若T b ≤P t ≤T e ,或T b ≤P t +P l ≤T e ,或P t ≤T b 且T e ≤P t +P l ,其中P t 與P l 分別是在該存儲裝置中的一候選物件路徑的發生時間和長度,則此候選物件路徑被選中作為所述至少一物件路徑。 The video playing method of claim 5, further comprising: providing a start time T b and an end time T e in the original video; and if T b ≤ P t ≤ T e , or T b ≤ P t + P l ≤ T e , or P t ≤ T b and T e ≤ P t + P l , where P t and P l are respectively the occurrence time and the length of a candidate object path in the storage device, selecting the candidate object path as the at least one object path. 如申請專利範圍第1項所述的視頻播放方法,更包括:依據所述至少一物件路徑在一個場景中不同像素處的擠迫情形,計算一建議時間長度;以及將該建議時間長度提示給一使用者,以輔助該使用者決定該播放時間長度。 The video playing method of claim 1, further comprising: calculating a suggested time length according to how crowded the at least one object path is at different pixels in a scene; and prompting the suggested time length to a user to assist the user in determining the playback time length.
一種視頻播放方法,包括:提供一原始視頻,其中該原始視頻是由一攝影模組拍攝一場景而獲得;提供一播放時間長度,以決定一合成視頻的時間長度,其中該合成視頻的時間長度小於該原始視頻的時間長度;從該原始視頻中提取多個物件路徑;以及依照該些物件路徑於該原始視頻中的出現順序,將該些物件路徑重新安排在該合成視頻中。 A video playing method includes: providing an original video, wherein the original video is obtained by capturing a scene by a photography module; providing a playback time length to determine a time length of a composite video, wherein the time length of the composite video is less than the time length of the original video; extracting a plurality of object paths from the original video; and rearranging the object paths in the composite video in accordance with the order of appearance of the object paths in the original video. 一種視頻播放方法,包括:提供一原始視頻,其中該原始視頻是由一攝影模組拍攝一場景而獲得;提供一播放時間長度,以決定一合成視頻的時間長度,其中該合成視頻的時間長度小於該原始視頻的時間長度;從該原始視頻中提取至少一物件路徑;選擇性地調整所述至少一物件路徑,以將所述至少一物件路徑合成至該合成視頻中;依據所述至少一物件路徑在一個場景中不同像素處的擠迫情形,計算一建議時間長度;以及將該建議時間長度提示給一使用者,以輔助該使用者決定該播放時間長度。 A video playing method includes: providing an original video, wherein the original video is obtained by capturing a scene by a photography module; providing a playback time length to determine a time length of a composite video, wherein the time length of the composite video is less than the time length of the original video; extracting at least one object path from the original video; selectively adjusting the at least one object path to synthesize the at least one object path into the composite video; calculating a suggested time length according to how crowded the at least one object path is at different pixels in a scene; and prompting the suggested time length to a user to assist the user in determining the playback time length.
一種視頻播放裝置,包括:一物件路徑提取模組,經配置以從一原始視頻中提取至少一物件路徑;一視頻合成模組,耦接至該物件路徑提取模組以接收所述至少一物件路徑,經配置以依據一播放時間長度選擇性地調整所述至少一物件路徑以將所述至少一物件路徑合成至一合成視頻中,其中該播放時間長度決定該合成視頻的時間長度,而該合成視頻的時間長度小於該原始視頻的時間長度;一攝影模組,耦接至該物件路徑提取模組,經配置以拍攝一場景而獲得該原始視頻;以及一顯示模組,耦接至該視頻合成模組,經配置以播放該合成視頻,其中所述至少一物件路徑包括一第一物件路徑與一第二物件路徑,而於該合成視頻中該第一物件路徑的播放速度不同於該第二物件路徑的播放速度,該第一物件路徑的一第一物件與該第二物件路徑的一第二物件出現於該原始視頻中的時間不重疊,而該第一物件與該第二物件出現於該合成視頻中的時間相重疊。 A video playback device includes: an object path extraction module configured to extract at least one object path from an original video; a video synthesis module, coupled to the object path extraction module to receive the at least one object path, configured to selectively adjust the at least one object path according to a playback time length to synthesize the at least one object path into a composite video, wherein the playback time length determines the time length of the composite video and the time length of the composite video is less than the time length of the original video; a camera module, coupled to the object path extraction module, configured to capture a scene to obtain the original video; and a display module, coupled to the video synthesis module, configured to play the composite video, wherein the at least one object path includes a first object path and a second object path, in the composite video the playback speed of the first object path is different from the playback speed of the second object path, the time at which a first object of the first object path and a second object of the second object path appear in the original video does not overlap, and the time at which the first object and the second object appear in the composite video overlaps.
一種視頻播放方法,包括:提供一原始視頻,其中該原始視頻是由一攝影模組拍攝一場景而獲得;由一物件路徑提取模組從該原始視頻中提取至少一物件路徑與一背景圖像;計算所述至少一物件路徑中多個物件的z軸距離;以及根據該些物件的z軸距離的降冪順序,將該些物件粘貼在該背景圖像上,以將所述至少一物件路徑合成至一合成視頻中,其中該合成視頻的時間長度小於該原始視頻的時間長度。 A video playing method includes: providing an original video, wherein the original video is obtained by capturing a scene by a photography module; extracting, by an object path extraction module, at least one object path and a background image from the original video; calculating the z-axis distances of a plurality of objects in the at least one object path; and pasting the objects on the background image in descending order of the z-axis distances of the objects, so as to synthesize the at least one object path into a composite video, wherein the time length of the composite video is less than the time length of the original video.
TW103136646A 2014-10-23 2014-10-23 Video playback method and apparatus TWI536838B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW103136646A TWI536838B (en) 2014-10-23 2014-10-23 Video playback method and apparatus
US14/689,038 US9959903B2 (en) 2014-10-23 2015-04-16 Video playback method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103136646A TWI536838B (en) 2014-10-23 2014-10-23 Video playback method and apparatus

Publications (2)

Publication Number Publication Date
TW201616862A TW201616862A (en) 2016-05-01
TWI536838B true TWI536838B (en) 2016-06-01

Family

ID=56508713

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103136646A TWI536838B (en) 2014-10-23 2014-10-23 Video playback method and apparatus

Country Status (1)

Country Link
TW (1) TWI536838B (en)

Also Published As

Publication number Publication date
TW201616862A (en) 2016-05-01
