TWI632811B - Film and television interactive system and method - Google Patents

Film and television interactive system and method

Info

Publication number
TWI632811B
Authority
TW
Taiwan
Prior art keywords
video
film
television
interactive
viewer
Prior art date
Application number
TW106125678A
Other languages
Chinese (zh)
Other versions
TW201911877A (en)
Inventor
李銘淮
朱瑞琪
梁甄昀
黃茁淳
翁志維
謝佳育
Original Assignee
中華電信股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中華電信股份有限公司
Priority to TW106125678A
Application granted
Publication of TWI632811B
Publication of TW201911877A


Abstract

The invention discloses a film and television interactive system and method comprising a film and television interaction subsystem, a display device, a biometric capture device, and a film and television interaction server. The interaction subsystem is an executable interactive program that connects to the interaction server to play video content and output it to the display device. When a specific video segment or time point is played, the interaction subsystem controls the biometric capture device to automatically collect, without disturbing the viewer's viewing of the content, feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing. After the feedback messages are recognized and processed, consolidated information fused with the video timeline is generated and transmitted back to the interaction server, allowing the content provider or streamer to obtain viewer behavior during playback and analyze the playback effect.

Description

Film and television interactive system and method

The invention relates to a film and television interactive system and method, and more particularly to a system and method that, when a specific video segment is played or a specific time point is reached, automatically collects feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing, without disturbing the viewer's enjoyment of the content. After the viewer's feedback messages are processed and fused with the video timeline into consolidated information, viewer behavior during playback can be obtained and the playback effect analyzed.

Previous interactive television systems used facial expression recognition to assist in judging and correcting test results, or analyzed facial expressions, dwell time, and pupil offset angle to generate feedback results for understanding advertising effects; they could not use facial expression recognition to obtain viewer behavior and analyze playback effects for general video content. Moreover, systems that analyze a user's facial expression, dwell time, and pupil offset angle while the user views one particular block among a plurality of advertisement blocks can only gauge advertising effect, and still cannot analyze differences among a viewer's multiple biometric feedback messages along the video timeline.

In view of the shortcomings of the above conventional approaches, the inventors sought to improve and innovate upon them and, after years of dedicated research, successfully developed the present film and television interactive system and method.

To achieve the above object, the invention provides a film and television interactive system and method. Online video and live streaming have a wide range of applications, such as live fitness broadcasts promoting exercise equipment or gyms, as well as live teaching, cooking, painting, and game streams. However, content providers and streamers can only gauge which videos attract attention from audience viewing counts; they cannot determine in finer detail which video segments are favored, let alone which video content receives a high degree of attention.

Therefore, the object of the invention is to disclose a film and television interactive system and method that solves the shortcoming of conventional online video and live streaming, which offer only broad statistics and predictions based on audience viewing counts and cannot reveal whether a particular segment of a video, or the content at a particular time point, receives a high degree of attention.

Based on the above object, the invention provides a film and television interactive system and method comprising a film and television interaction subsystem, a display device, a biometric capture device, and a film and television interaction server. The interaction subsystem is an executable interactive program; when executed, it connects to the interaction server to play video content and output it to the display device. When a specific video segment or time point is played, the interaction subsystem controls the biometric capture device to automatically collect, without disturbing the viewer's viewing of the content, feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing. After the feedback messages are recognized and processed, consolidated information fused with the video timeline is generated and transmitted back to the interaction server, allowing the content provider or streamer to obtain viewer behavior during playback and analyze the playback effect.

A film and television interactive system comprises: a display device serving as the output for video content playback; a biometric capture device, which may be a sensor built into a handheld device; a film and television interaction server that provides pre-recorded video content, provides live-streamed video content, and receives the viewers' consolidated feedback information so that viewer behavior during playback can be obtained and the playback effect analyzed; and a film and television interaction subsystem connected to the display device and the biometric capture device, and further connected to the interaction server.

The display device may be the display interface of a handheld device or personal computer, or an external screen or monitor. The biometric capture device may be a camera or microphone, or an externally connected heart-rate sensor, respiration-count sensor, or wearable-device sensor.

The film and television interaction subsystem comprises: a playback module that plays the video content from the interaction server and outputs it to the display device; a time-trigger module that monitors the playback timeline and, when a configured analysis time point is reached, triggers the capture module to obtain at least one biometric feature of the viewer; a capture module that automatically collects, via the biometric capture device, at least one biometric feature and reaction of the viewer at the moment of viewing; a recognition module that performs echo cancellation on the feedback messages, compares them against facial-expression data and sound data to obtain corresponding recognition results, and converts sound-decibel and heart-rate readings into normalized values; a fusion module that fuses the recognized feedback results with the video time points into consolidated data, so that statistics of the viewers' reactions at the moment of viewing can be obtained for each configured timeline event; and a transmission module that returns the consolidated interaction results to the interaction server, so that the content provider or streamer can subsequently obtain viewer behavior during playback and analyze the playback effect.
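
The patent does not specify an implementation, but the time-trigger behavior described above can be pictured with a minimal Python sketch; the names (`TimeTrigger`, `on_trigger`, `analysis_points`) and structure are assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch of the time-trigger idea: watch the playback position and,
# when it reaches a configured analysis point, fire a capture callback.
# Names and structure are illustrative assumptions, not the patented code.

class TimeTrigger:
    def __init__(self, analysis_points, on_trigger, tolerance=0.5):
        # analysis_points: playback times (seconds) at which to collect feedback
        self.pending = sorted(analysis_points)
        self.on_trigger = on_trigger      # e.g. the capture module's entry point
        self.tolerance = tolerance        # how close counts as "reached"

    def check(self, playback_position):
        """Call periodically with the current playback position."""
        fired = []
        while self.pending and playback_position >= self.pending[0] - self.tolerance:
            point = self.pending.pop(0)
            self.on_trigger(point)        # hand off to the capture module
            fired.append(point)
        return fired


if __name__ == "__main__":
    trigger = TimeTrigger([12.0, 45.0], on_trigger=lambda t: print(f"capture at {t}s"))
    for position in (5.0, 12.3, 30.0, 46.0):   # simulated playback positions
        trigger.check(position)
```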

The biometric features are feedback messages such as facial expressions, ambient sound, speech recognition, or heart-rate counts, together with the reactions of the person watching the video content.

A film and television interaction method comprises: step one, storing the video content on a film and television interaction server; step two, associating the video timeline with trigger-based collection events; step three, playing the video content to the user; step four, collecting feedback messages in a manner that does not disturb the viewer's viewing of the content; and step five, analyzing the feedback messages, fusing them with the video time into a consolidated information result, and transmitting the consolidated information.

In step one, storing on the interaction server means that the content provider uploads pre-recorded video content to the interaction server, or a live streamer provides video content by real-time live streaming.

In step two, triggering collection events means associating the video content with at least one video timeline, where the timeline may correspond to at least one collection event; this serves as the basis for later triggering the biometric capture device to collect the viewer's biometric features and behavior when the playback time reaches the configured timeline. The video timeline may be a specific video segment, a specific playback time point, or an interval; the interval may be a repeating interval or a plurality of intervals. A collection event may trigger at least one biometric capture device.
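
As a hedged illustration of how the timeline forms named above (a specific segment, a specific time point, a repeating interval) might be expanded into concrete trigger times, consider the sketch below; the data shapes are assumptions, since the patent leaves the representation open.

```python
# Illustrative only: turn the three timeline forms described in step two
# (specific segment, specific time point, repeating interval) into a flat
# list of trigger times for the capture device.

def expand_timeline(events, duration):
    """events: list of dicts; duration: total video length in seconds."""
    triggers = []
    for ev in events:
        if ev["type"] == "time_point":               # e.g. the end of a joke
            triggers.append(ev["at"])
        elif ev["type"] == "segment":                # sample the end of a clip
            triggers.append(ev["end"])
        elif ev["type"] == "interval":               # repeating interval
            t = ev["every"]
            while t <= duration:
                triggers.append(t)
                t += ev["every"]
    return sorted(set(triggers))


if __name__ == "__main__":
    events = [
        {"type": "time_point", "at": 95.0},
        {"type": "segment", "start": 120.0, "end": 180.0},
        {"type": "interval", "every": 300.0},
    ]
    print(expand_timeline(events, duration=900.0))   # [95.0, 180.0, 300.0, 600.0, 900.0]
```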

In step three, the viewer uses the film and television interaction subsystem, which connects to the interaction server, plays the video content, and outputs it to the display device. The interaction subsystem may be an app (Application) on a handheld device, in an iOS, Android, or Windows Phone version, or a personal computer application running on Windows, Mac, or Linux.

In step four, collecting feedback messages means that when the playback time reaches the configured timeline, the interaction subsystem triggers and controls the biometric capture device to automatically collect, without disturbing the viewer's viewing of the content, feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing. The biometric capture device may be a sensor built into the handheld device or a sensor operating over an external connection.
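
One way to keep the collection from interrupting playback, sketched here purely as an assumption about the implementation, is to run the sensor read in a background thread so the playback loop never blocks; `read_camera_frame` is a hypothetical stand-in for whatever sensor API the device provides.

```python
# Sketch of non-intrusive collection: the capture runs in a background thread
# so the viewer's playback is never paused. `read_camera_frame` is hypothetical;
# a real subsystem would call the device's camera / microphone / sensor API.
import threading
import time

def read_camera_frame():
    return {"timestamp": time.time(), "frame": "<image bytes>"}  # placeholder

def capture_in_background(video_time, results):
    def worker():
        sample = read_camera_frame()
        sample["video_time"] = video_time
        results.append(sample)          # handed to the recognition module later
    threading.Thread(target=worker, daemon=True).start()

if __name__ == "__main__":
    collected = []
    capture_in_background(video_time=95.0, results=collected)
    time.sleep(0.1)                     # playback would continue here, undisturbed
    print(collected)
```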

In step five, the interaction subsystem analyzes the feedback messages once collected, fuses them with the video time into a consolidated information result, and finally sends the result back to the interaction server; the feedback and consolidated information allow the content provider or live streamer to obtain viewer behavior during playback and analyze the playback effect. The feedback messages capture the relevant biometric features, including the viewer's facial expression, ambient sound, speech recognition, or heart-rate readings, together with the reactions of the person watching the content.
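
Step five can be pictured with the short sketch below: recognized per-viewer results are grouped by the video time at which they were captured and summarized before being returned to the server. The record shapes are an assumption for illustration only.

```python
# Illustrative step-five fusion: group recognized feedback by the video time
# at which it was captured and summarize it into the consolidated result that
# would be sent back to the interaction server. Record shapes are assumptions.
from collections import Counter, defaultdict

def consolidate(recognized):
    """recognized: list of {"video_time": float, "expression": str}."""
    by_time = defaultdict(list)
    for item in recognized:
        by_time[item["video_time"]].append(item["expression"])
    return {
        t: dict(Counter(expressions))            # e.g. {"happy": 2, "calm": 1}
        for t, expressions in sorted(by_time.items())
    }

if __name__ == "__main__":
    recognized = [
        {"video_time": 95.0, "expression": "happy"},
        {"video_time": 95.0, "expression": "happy"},
        {"video_time": 95.0, "expression": "calm"},
        {"video_time": 300.0, "expression": "neutral"},
    ]
    print(consolidate(recognized))   # this dict would be transmitted to the server
```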

Compared with other conventional techniques, the film and television interactive system and method provided by the invention offer the following advantages:

1. When a specific video segment or time point is played, the invention can automatically collect feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing, without disturbing the viewer's viewing of the content.

2. The invention analyzes and processes the feedback messages and fuses them with the video timeline to produce consolidated information results, helping the content provider or streamer obtain feedback such as viewer behavior and reactions during playback, and thereby understand and analyze the playback effect.

3. The invention solves the shortcoming of conventional online video and live streaming, which offer only broad statistics and predictions based on audience viewing counts and cannot reveal whether the content of a specific video segment or time point receives a high degree of attention.

110‧‧‧Display device

120‧‧‧Biometric capture device

130‧‧‧Film and television interaction subsystem

131‧‧‧Playback module

132‧‧‧Time-trigger module

133‧‧‧Capture module

134‧‧‧Recognition module

135‧‧‧Fusion module

136‧‧‧Transmission module

140‧‧‧Film and television interaction server

S210~S250‧‧‧Process steps

Refer to the detailed description of the invention and the accompanying drawings for a further understanding of its technical content and the effects of its objects. The drawings are: FIG. 1, an architecture diagram of the film and television interactive system and method of the invention; and FIG. 2, a flowchart of the film and television interactive system and method of the invention.

To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.

The invention is further described below with reference to the drawings. Referring to FIG. 1, an architecture diagram of a film and television interactive system and method, the system comprises: a display device 110 serving as the output for video content playback; a biometric capture device 120, which may be a sensor built into a handheld device; a film and television interaction server 140 that provides pre-recorded video content, provides live-streamed video content, and receives the viewers' consolidated feedback information so that viewer behavior during playback can be obtained and the playback effect analyzed; and a film and television interaction subsystem 130 connected to the display device and the biometric capture device, and further connected to the interaction server.

The display device 110 may be the display interface of a handheld device or personal computer, or an external screen or monitor. The biometric capture device 120 may be a camera or microphone, or an externally connected heart-rate sensor, respiration-count sensor, or wearable-device sensor.

The film and television interaction subsystem 130 comprises: a playback module 131 that plays the video content from the interaction server 140 and outputs it to the display device 110; a time-trigger module 132 that monitors the playback timeline and, when a configured analysis time point is reached, triggers the capture module 133 to obtain at least one biometric feature of the viewer; a capture module 133 that automatically collects, via the biometric capture device 120, at least one biometric feature and reaction of the viewer at the moment of viewing; a recognition module 134 that performs echo cancellation on the feedback messages, compares them against facial-expression data and sound data to obtain corresponding recognition results, and converts sound-decibel and heart-rate readings into normalized values; a fusion module 135 that fuses the recognized feedback results with the video time points into consolidated data, so that statistics of the viewers' reactions at the moment of viewing can be obtained for each configured timeline event; and a transmission module 136 that returns the consolidated interaction results to the interaction server 140, so that the content provider or streamer can subsequently obtain viewer behavior during playback and analyze the playback effect.
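
The conversion of sound-decibel and heart-rate readings into normalized values, mentioned for the recognition module 134, could follow something like the min-max scaling sketched below; the chosen ranges (30-100 dB, 50-150 bpm) are assumptions, since the patent does not fix them.

```python
# Sketch of the normalization mentioned for the recognition module: map raw
# decibel and heart-rate readings onto a 0..1 scale. The reference ranges are
# assumptions for illustration; the patent does not specify them.

def normalize(value, low, high):
    """Clamp and min-max scale a reading into [0, 1]."""
    value = max(low, min(high, value))
    return (value - low) / (high - low)

def normalize_feedback(decibels, heart_rate_bpm):
    return {
        "sound": normalize(decibels, low=30.0, high=100.0),             # assumed dB range
        "heart_rate": normalize(heart_rate_bpm, low=50.0, high=150.0),  # assumed bpm range
    }

if __name__ == "__main__":
    print(normalize_feedback(decibels=72.0, heart_rate_bpm=88.0))
    # e.g. {'sound': 0.6, 'heart_rate': 0.38}
```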

The biometric features are feedback messages such as facial expressions, ambient sound, speech recognition, or heart-rate counts, together with the reactions of the person watching the video content.

Referring to FIG. 2, a flowchart of a film and television interaction method, the method comprises: step one, S210, storing the video content on the film and television interaction server; step two, S220, associating the video timeline with trigger-based collection events; step three, S230, playing the video content to the user; step four, S240, collecting feedback messages in a manner that does not disturb the viewer's viewing of the content; and step five, S250, analyzing the feedback messages, fusing them with the video time into a consolidated information result, and transmitting the consolidated information.

As can be seen from the above steps, in step one, storing on the interaction server means that the content provider uploads pre-recorded video content to the interaction server, or a live streamer provides video content by real-time live streaming.

In step two, triggering collection events means associating the video content with at least one video timeline, where the timeline may correspond to at least one collection event; this serves as the basis for later triggering the biometric capture device to collect the viewer's biometric features and behavior when the playback time reaches the configured timeline. The video timeline may be a specific video segment, a specific playback time point, or an interval; the interval may be a repeating interval or a plurality of intervals. A collection event may trigger at least one biometric capture device.

In step three, the viewer uses the film and television interaction subsystem, which connects to the interaction server, plays the video content, and outputs it to the display device. The interaction subsystem may be an app (Application) on a handheld device, in an iOS, Android, or Windows Phone version, or a personal computer application running on Windows, Mac, or Linux.

In step four, collecting feedback messages means that when the playback time reaches the configured timeline, the interaction subsystem triggers and controls the biometric capture device to automatically collect, without disturbing the viewer's viewing of the content, feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing. The biometric capture device may be a sensor built into the handheld device or a sensor operating over an external connection.

In step five, the interaction subsystem analyzes the feedback messages once collected, fuses them with the video time into a consolidated information result, and finally sends the result back to the interaction server; the feedback and consolidated information allow the content provider or live streamer to obtain viewer behavior during playback and analyze the playback effect. The feedback messages capture the relevant biometric features, including the viewer's facial expression, ambient sound, speech recognition, or heart-rate readings, together with the reactions of the person watching the content.

As can be seen from the above, the video content is stored on the film and television interaction server: the content provider uploads pre-recorded video content to the interaction server, or a live streamer provides video content by real-time live streaming. Collection events are triggered in association with the video timeline: the video content is associated with at least one video timeline, and the timeline corresponds to at least one collection event, serving as the basis for later triggering the biometric capture device to collect the viewer's biometric features and behavior when the playback time reaches the configured timeline. The video timeline may be a specific video segment, a specific playback time point, or an interval; the interval may be a repeating interval or a plurality of intervals; a collection event may trigger at least one biometric capture device. To watch the video content, the viewer uses the film and television interaction subsystem, which connects to the interaction server, plays the video content, and outputs it to the display device. The interaction subsystem may be an app (Application) on a handheld device, for example an iOS, Android, or Windows Phone version, or a personal computer application running on an operating system such as Windows, Mac, or Linux. The display device may be a handheld device such as a smartphone or tablet, or a screen or monitor device such as a television, video wall, LCD screen, plasma TV, projector, or LED electronic billboard. Feedback messages are collected in a manner that does not disturb the viewer's viewing of the content: during playback, when the playback time reaches the configured timeline, the interaction subsystem triggers and controls the biometric capture device, which automatically collects feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing, without disturbing the viewer. The biometric capture device may be a sensor built into the handheld device, such as a camera or microphone, or a sensor operating over an external connection, such as a heart-rate counter, a respiration counter, or a wearable device. The feedback messages may include biometric features such as the viewer's facial expression, ambient sound, speech recognition, or heart-rate readings, as well as the reactions of the person watching the content. The feedback messages are then analyzed, fused with the video time into a consolidated information result, and the consolidated information is transmitted: the interaction subsystem analyzes the feedback messages once collected, fuses them with the video time into a consolidated information result, and finally sends the result back to the interaction server; the feedback and consolidated information allow the content provider or live streamer to obtain viewer behavior during playback and analyze the playback effect.

The following embodiments may be read with reference to FIG. 1. A film and television interactive system comprises a display device 110, a biometric capture device 120, a film and television interaction subsystem 130, and a film and television interaction server 140. The display device 110 is the output for video content playback; it may be the original display interface of a handheld device or personal computer, or an external screen or monitor. The main feature of the biometric capture device 120 is that it automatically collects feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing, without disturbing the viewer's viewing of the content. The biometric capture device 120 may be a sensor built into the handheld device, such as a camera or microphone, or a sensor operating over an external connection, such as a heart-rate counter, a respiration counter, or a wearable device. The feedback messages may include biometric features such as the viewer's facial expression, ambient sound, speech recognition, or heart-rate readings, as well as the reactions of the person watching the content.

The film and television interaction subsystem 130 can run on a handheld device or a personal computer and can connect to the interaction server 140 to play video content and output it to the display device 110. When a specific video segment or time point is played, the interaction subsystem 130 controls the biometric capture device 120 to automatically collect, without disturbing the viewer's viewing of the content, feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing. After the feedback messages are recognized and processed, consolidated information fused with the video time is generated and transmitted back to the interaction server 140, allowing the content provider or streamer to obtain viewer behavior during playback and analyze the playback effect.

The main features of the film and television interaction server 140 are to provide pre-recorded video content, to provide live-streamed video content, and to receive the viewers' consolidated feedback information so that viewer behavior during playback can be obtained and the playback effect analyzed.

The film and television interaction subsystem 130 further comprises a playback module 131, a time-trigger module 132, a capture module 133, a recognition module 134, a fusion module 135, and a transmission module 136. The playback module 131 plays the video from the interaction server and outputs it to the display device. The time-trigger module 132 monitors the playback timeline; when a configured analysis time point is reached, it triggers the capture module 133 to obtain at least one biometric feature of the viewer, for example a facial expression, sound, or heart-rate count, which becomes a feedback message. The capture module 133 obtains the feedback messages of at least one biometric feature of the viewer captured by the biometric capture device. The recognition module 134 performs echo cancellation on the feedback messages, compares them against facial-expression data and sound data to obtain corresponding recognition results, and converts sound-decibel and heart-rate readings into normalized values. The fusion module 135 fuses the recognized feedback results with the video time points into consolidated data, so that statistics of the viewers' reactions at the moment of viewing can be obtained for each configured timeline event. The transmission module 136 returns the consolidated interaction results to the interaction server, so that the content provider or streamer can subsequently obtain viewer behavior during playback and analyze the playback effect.
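
To show how the six modules might hand work to one another, here is a compressed, purely illustrative wiring; every function name is hypothetical, and a real subsystem would replace each stub with the corresponding module's logic.

```python
# Purely illustrative wiring of modules 131-136: play, trigger, capture,
# recognize, fuse, transmit. Every function here is a hypothetical stub.
import json

def capture(video_time):                 # 133: read a sensor sample
    return {"video_time": video_time, "expression": "happy", "decibels": 72.0}

def recognize(sample):                   # 134: recognition + normalization
    sample["sound"] = (sample.pop("decibels") - 30.0) / 70.0
    return sample

def fuse(results):                       # 135: consolidate by video time
    return {r["video_time"]: {"expression": r["expression"], "sound": r["sound"]}
            for r in results}

def transmit(consolidated):              # 136: return results to the server
    payload = json.dumps(consolidated)   # a real module would POST this
    print("to interaction server:", payload)

if __name__ == "__main__":
    analysis_points = [95.0, 300.0]      # 132: configured trigger points
    results = [recognize(capture(t)) for t in analysis_points]
    transmit(fuse(results))
```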

For example, in instructional video content a lecturer tells a joke in a particular video segment; the timeline point at which the joke ends triggers a collection event that captures the viewers' facial expressions and reactions as feedback messages. After analyzing the facial expressions, the interaction subsystem computes the proportions of viewers whose expressions appear happy, calm, or displeased, fuses the statistics with the timeline into consolidated information, and transmits it back to the interaction server.
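
The joke scenario can be made concrete with a small, assumed example of the statistics the subsystem might compute: given the expressions recognized at the trigger fired when the joke ends, it reports the happy / calm / displeased proportions fused with that timestamp.

```python
# Assumed illustration of the lecturer-joke scenario: expressions recognized
# at the trigger fired when the joke ends are turned into proportions and
# fused with that timestamp before being sent back to the server.
from collections import Counter

def expression_proportions(expressions, video_time):
    counts = Counter(expressions)
    total = sum(counts.values())
    return {
        "video_time": video_time,
        "proportions": {label: round(n / total, 2) for label, n in counts.items()},
    }

if __name__ == "__main__":
    observed = ["happy", "happy", "happy", "calm", "displeased"]  # assumed data
    print(expression_proportions(observed, video_time=312.0))
    # {'video_time': 312.0, 'proportions': {'happy': 0.6, 'calm': 0.2, 'displeased': 0.2}}
```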

Taking use on a handheld device as an example, the film and television interaction subsystem plays video content from the interaction server and outputs it to the display device. The viewer watches the video through the display device. If a time point to be analyzed is reached during playback, the interaction subsystem automatically triggers the biometric capture device to collect the viewer's biometric features and reactions, for example the viewer's facial expression or the ambient sound. The interaction subsystem can also be extended to obtain the viewer's biometric features and reactions through a wearable device, for example collecting the viewer's heart-rate reading with a wearable device. The interaction subsystem processes the feedback messages, converts them, and fuses them with the video time into consolidated information. The interaction subsystem transmits the consolidated information to the interaction server. The content provider or streamer then obtains viewer behavior and analyzes the playback effect.

Taking use on a personal computer as an example, the film and television interaction subsystem plays video from the interaction server and outputs the video content to the display device. The viewer watches the video through the display device. If a time point to be analyzed is reached during playback, the interaction subsystem automatically triggers the biometric capture device to collect the viewer's biometric features and reactions, for example the viewer's facial expression or the ambient sound.

The interaction subsystem processes the feedback messages, converts them, and fuses them with the video time into consolidated information.

The interaction subsystem transmits the consolidated information to the interaction server.

The content provider or streamer obtains viewer behavior and analyzes the playback effect.

Once the viewers' feedback messages are collected, they are fused with the video time into consolidated information results. Video applications such as online learning or live streaming can play video content from the interaction server, configure associated recurring intervals or specific video segments to trigger the biometric device, and automatically collect the viewers' facial expressions as feedback messages without disturbing their viewing of the content; after fusion with the video time into consolidated information, the content provider or streamer can analyze and study the effectiveness of the video playback. The facial expressions include states such as angry, disgusted, afraid, happy, sad, surprised, and neutral.

The above detailed description is a specific explanation of one feasible embodiment of the invention; the embodiment is not intended to limit the patent scope of the invention, and any equivalent implementation or modification that does not depart from the technical spirit of the invention shall be included within the patent scope of this application.

In summary, this application is not only genuinely innovative in its technical concept but also provides the multiple effects described above that conventional methods cannot achieve, fully satisfying the statutory requirements of novelty and inventive step for an invention patent. The application is therefore filed in accordance with the law, and the approval of this invention patent application is respectfully requested.

Claims (16)

1. A film and television interactive system, comprising: a display device serving as the output for video content playback; a biometric capture device, which may be a sensor built into a handheld device; a film and television interaction server for providing pre-recorded video content, providing live-streamed video content, and receiving the viewers' consolidated feedback information so that viewer behavior during playback can be obtained and the playback effect analyzed; and a film and television interaction subsystem connected to the display device and the biometric capture device, and further connected to the interaction server, the interaction subsystem comprising: a time-trigger module that monitors the playback timeline and, when a configured analysis time point is reached, triggers a capture module to obtain at least one biometric feature of the viewer; and the capture module, which automatically collects, via the biometric capture device, at least one biometric feature and reaction of the viewer at the moment of viewing.

2. The film and television interactive system of claim 1, wherein the display device may be the display interface of a handheld device or personal computer, or an external screen or monitor.

3. The film and television interactive system of claim 1, wherein the biometric capture device may be a camera or microphone built into a handheld device as a sensor, or an externally connected heart-rate sensor, respiration-count sensor, or wearable-device sensor.

4. The film and television interactive system of claim 1, wherein the interaction subsystem further comprises: a playback module that plays the video content from the interaction server and outputs it to the display device; a recognition module that performs echo cancellation on the feedback messages, compares them against facial-expression data and sound data to obtain corresponding recognition results, and converts sound-decibel and heart-rate readings into normalized values; a fusion module that fuses the recognized feedback results with the video time points into consolidated data, so that statistics of the viewers' reactions at the moment of viewing can be obtained for each configured timeline event; and a transmission module that returns the consolidated interaction results to the interaction server, so that the content provider or streamer can subsequently obtain viewer behavior during playback and analyze the playback effect.

5. The film and television interactive system of claim 4, wherein the biometric features are feedback messages such as facial expressions, ambient sound, speech recognition, or heart-rate counts, together with the reactions of the person watching the video content.

6. A film and television interaction method, comprising: step one, storing the video content on a film and television interaction server; step two, associating the video timeline with trigger-based collection events, wherein triggering the collection events means associating the video content with at least one video timeline, the timeline corresponding to at least one collection event and serving as the basis for later triggering a biometric capture device to collect the viewer's biometric features and behavior when the playback time reaches the configured timeline; step three, playing the video content to the user; step four, collecting feedback messages in a manner that does not disturb the viewer's viewing of the content; and step five, analyzing the feedback messages, fusing them with the video time into a consolidated information result, and transmitting the consolidated information.

7. The film and television interaction method of claim 6, wherein storing on the interaction server in step one means that the content provider uploads pre-recorded video content to the interaction server, or a live streamer provides video content by real-time live streaming.

8. The film and television interaction method of claim 6, wherein in step three the viewer uses a film and television interaction subsystem that connects to the interaction server, plays the video content, and outputs it to a display device.

9. The film and television interaction method of claim 6, wherein collecting the feedback messages in step four means that when the playback time reaches the configured timeline, the interaction subsystem triggers and controls the biometric capture device to automatically collect, without disturbing the viewer's viewing of the content, feedback messages comprising at least one biometric feature and reaction of the viewer at the moment of viewing.

10. The film and television interaction method of claim 6, wherein in step five the interaction subsystem analyzes the feedback messages once collected, fuses them with the video time into a consolidated information result, and finally sends the result back to the interaction server, the feedback and consolidated information allowing the content provider or live streamer to obtain viewer behavior during playback and analyze the playback effect.

11. The film and television interaction method of claim 6, wherein the feedback messages capture the relevant biometric features, including biometric features such as the viewer's facial expression, ambient sound, speech recognition, or heart-rate readings, as well as the reactions of the person watching the content.

12. The film and television interaction method of claim 6, wherein the video timeline may be a specific video segment, a specific playback time point, or an interval.

13. The film and television interaction method of claim 6, wherein the interval may be a repeating interval or a plurality of intervals.

14. The film and television interaction method of claim 8, wherein the collection event may trigger at least one biometric capture device.

15. The film and television interaction method of claim 8, wherein the interaction subsystem is an app (Application) on a handheld device, in an iOS, Android, or Windows Phone version, or a personal computer application running on Windows, Mac, or Linux.

16. The film and television interaction method of claim 8, wherein the biometric capture device may be a camera or microphone built into a handheld device as a sensor, or an externally connected heart-rate sensor, respiration-count sensor, or wearable-device sensor.
TW106125678A 2017-07-31 2017-07-31 Film and television interactive system and method TWI632811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW106125678A TWI632811B (en) 2017-07-31 2017-07-31 Film and television interactive system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW106125678A TWI632811B (en) 2017-07-31 2017-07-31 Film and television interactive system and method

Publications (2)

Publication Number Publication Date
TWI632811B true TWI632811B (en) 2018-08-11
TW201911877A TW201911877A (en) 2019-03-16

Family

ID=63959728

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106125678A TWI632811B (en) 2017-07-31 2017-07-31 Film and television interactive system and method

Country Status (1)

Country Link
TW (1) TWI632811B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11949967B1 (en) 2022-09-28 2024-04-02 International Business Machines Corporation Automatic connotation for audio and visual content using IOT sensors

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201346809A (en) * 2012-05-07 2013-11-16 Ind Tech Res Inst System and method for allocating advertisements
US20140198017A1 (en) * 2013-01-12 2014-07-17 Mathew J. Lamb Wearable Behavior-Based Vision System
CN102473264B (en) * 2009-06-30 2016-04-20 高智83基金会有限责任公司 The method and apparatus of image display and control is carried out according to beholder's factor and reaction


Also Published As

Publication number Publication date
TW201911877A (en) 2019-03-16


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees