TWM600411U - Behavior data processing system - Google Patents

Behavior data processing system

Info

Publication number
TWM600411U
Authority
TW
Taiwan
Prior art keywords
interactive
module
sensor
virtual
voice
Prior art date
Application number
TW109206990U
Other languages
Chinese (zh)
Inventor
鐘煒凱
Original Assignee
應愛科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 應愛科技有限公司 filed Critical 應愛科技有限公司
Priority to TW109206990U priority Critical patent/TWM600411U/en
Publication of TWM600411U publication Critical patent/TWM600411U/en

Abstract

The present utility model provides a behavior data processing system comprising a sensor, an interaction sensor, and a virtual reality head-mounted device. The sensor is worn on the user and detects the user's positional trajectory to generate a motion behavior signal. The interaction sensor is disposed on a human body model and determines an interaction behavior signal from the interaction between the user and the model. Using the motion behavior signal detected by the sensor together with the interaction behavior signal detected by the interaction sensor, the virtual reality head-mounted device can accurately simulate the interaction between the user and a virtual character, thereby improving the realism of the simulated experience.

Description

Behavior Data Processing System

The present utility model relates to virtual reality processing technology, and more particularly to a behavior data processing system.

The pace of modern life keeps accelerating, and many people spend most of their time at work. Outside of work they typically use social applications on electronic devices to stay in touch with family and friends. Such applications offer text and voice interaction, but they cannot provide a realistic simulated experience, which makes social interaction less engaging.

Virtual technologies such as virtual reality, augmented reality, and mixed reality have developed rapidly in recent years and allow users to socialize with virtual characters in simulation. In a virtual reality experience, however, the equipment judges the user's behavior and motion from a single source, so the user's actual behavior is easily misjudged. This lowers the accuracy of the simulated social interaction and, in turn, the realism of the simulated interaction.

To solve the above problems, the present utility model provides a behavior data processing system that enhances the realism of the interaction between the user and a virtual character and thereby improves the simulated experience.

To achieve the above objective, the present utility model provides a behavior data processing system comprising a sensor, an interaction sensor, and a virtual reality head-mounted device. The sensor is worn on the user and detects the user's positional trajectory to generate a motion behavior signal. The interaction sensor is disposed on a human body model and determines an interaction behavior signal from the interaction between the user and the model. The virtual reality head-mounted device is coupled to the sensor and the interaction sensor and is provided with a virtual interaction database, a virtual reality module, an image display module, and a signal processing module. The virtual interaction database stores a plurality of virtual interaction data records, which include a virtual character corresponding to the human body model. The virtual reality module retrieves the virtual interaction data from the database and displays it through the image display module, so the user can view the virtual character in a virtual social space, while the signal processing module simulates interaction with the virtual character based on at least the motion behavior signal and the interaction behavior signal.

In this way, the sensor accurately captures the user's body movements and the interaction sensor senses the state of the user touching the human body model, so that the virtual reality head-mounted device, using the motion behavior signal detected by the sensor and the interaction behavior signal detected by the interaction sensor, can accurately simulate the interaction between the user and the virtual character, thereby improving the realism of the simulated social interaction and achieving an immersive experience.
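Purely as an illustration of the data flow just summarized, the following sketch wires the three components together in Python; every class, field, and value here is an assumption of this sketch and is not part of the utility model.

```python
from dataclasses import dataclass

@dataclass
class MotionBehaviorSignal:
    """Produced by the body-worn sensor from the user's positional trajectory."""
    posture_change: str
    displacement_m: float
    speed_mps: float

@dataclass
class InteractionBehaviorSignal:
    """Produced by the interaction sensor when the user touches the mannequin."""
    contact_position: str
    pressure_delta: float

class VirtualRealityHeadset:
    """Receives both signals and drives the virtual character accordingly."""

    def simulate(self, motion: MotionBehaviorSignal,
                 interaction: InteractionBehaviorSignal) -> dict:
        return {
            "viewpoint_shift_m": motion.displacement_m,          # view-angle side
            "touched_body_part": interaction.contact_position,   # interaction side
        }

headset = VirtualRealityHeadset()
print(headset.simulate(
    MotionBehaviorSignal("leaning_forward", 0.4, 0.2),
    InteractionBehaviorSignal("right_shoulder", 0.8),
))
```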

Referring to FIG. 1 to FIG. 5, the present utility model provides a behavior data processing system that includes a sensor 10, an interaction sensor 20, and a virtual reality head-mounted device 30.

The sensor 10 is worn on the user 100 and detects the user's positional trajectory to generate a motion behavior signal. In a preferred embodiment the sensor 10 is selected from the group consisting of a speedometer, a gyroscope, a barometer, a magnetometer, an electronic compass, and a heart rate monitor, and is worn on the waist, chest, and back of the user 100 to detect the posture, direction, angle, displacement, heart rate, and speed of the user's body. As shown in FIG. 1 and FIG. 2, the sensor 10 includes a motion detection module 11 and a first communication module 12. The motion detection module 11 uses the group consisting of the user's posture change, direction change, angle change, displacement change, and speed change as the basis for determining the positional trajectory of the user 100 and generates the motion behavior signal, and the first communication module 12 transmits the motion behavior signal.
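As a minimal sketch only, assuming hypothetical reading names and units that the utility model does not specify, the motion detection module 11 could derive the motion behavior signal from successive wearable readings like this:

```python
from dataclasses import dataclass

@dataclass
class WearableReading:
    # Hypothetical raw values from the waist/chest/back-mounted sensor 10.
    posture: str           # e.g. "standing", "leaning_forward"
    heading_deg: float     # electronic compass / magnetometer
    tilt_deg: float        # gyroscope
    displacement_m: float  # accumulated position change
    speed_mps: float       # speedometer
    heart_rate_bpm: int    # heart rate monitor

def detect_motion_behavior(prev: WearableReading, curr: WearableReading) -> dict:
    """Motion detection module 11: derive the changes that make up the motion behavior signal."""
    return {
        "posture_change": (prev.posture, curr.posture),
        "direction_change_deg": curr.heading_deg - prev.heading_deg,
        "angle_change_deg": curr.tilt_deg - prev.tilt_deg,
        "displacement_change_m": curr.displacement_m - prev.displacement_m,
        "speed_change_mps": curr.speed_mps - prev.speed_mps,
        "heart_rate_bpm": curr.heart_rate_bpm,
    }
```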

The interaction sensor 20 is disposed on a human body model 200. The interaction sensor 20 can sense a plurality of actions in which the user 100 touches the human body model 200 and determines an interaction behavior signal from those actions. In a preferred embodiment the human body model 200 is a humanoid doll, and the interaction sensor 20 may be one of a pressure sensor, a temperature sensor, and a humidity sensor. As shown in FIG. 1 and FIG. 3, the interaction sensor 20 includes an interaction detection module 21 and a second communication module 22. The interaction detection module 21 determines the interaction behavior signal from the contact position, pressure change, temperature change, and humidity change produced when the user 100 touches the human body model 200, and the second communication module 22 transmits the interaction behavior signal.
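A sketch of how the interaction detection module 21 might decide that a touch occurred; the threshold values are illustrative assumptions, not figures from the utility model:

```python
from dataclasses import dataclass
from typing import Optional

# Thresholds are illustrative assumptions, not values taken from the utility model.
PRESSURE_THRESHOLD = 0.2     # arbitrary pressure units
TEMPERATURE_THRESHOLD = 0.5  # degrees Celsius
HUMIDITY_THRESHOLD = 2.0     # percent relative humidity

@dataclass
class ContactReading:
    position: str            # e.g. "right_shoulder"
    pressure_delta: float
    temperature_delta: float
    humidity_delta: float

def detect_interaction_behavior(reading: ContactReading) -> Optional[dict]:
    """Interaction detection module 21: decide whether a touch occurred and describe it."""
    touched = (reading.pressure_delta >= PRESSURE_THRESHOLD
               or reading.temperature_delta >= TEMPERATURE_THRESHOLD
               or reading.humidity_delta >= HUMIDITY_THRESHOLD)
    if not touched:
        return None
    return {
        "contact_position": reading.position,
        "pressure_delta": reading.pressure_delta,
        "temperature_delta": reading.temperature_delta,
        "humidity_delta": reading.humidity_delta,
    }
```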

The virtual reality head-mounted device 30 is worn on the head of the user 100 and is coupled to the sensor 10 and the interaction sensor 20. The virtual reality head-mounted device 30 is provided with a virtual interaction database 31, a virtual reality module 32, an image display module 33, and a signal processing module 34. The virtual interaction database 31 stores a plurality of virtual interaction data records, which include a virtual character corresponding to the human body model 200. In a preferred embodiment, as shown in FIG. 5, the virtual interaction database 31 has an image data unit 311 and a voice data unit 312; the image data unit 311 stores the virtual interaction data, and the voice data unit 312 stores a plurality of interactive voice data records.
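One way the virtual interaction database 31, with its image data unit 311 and voice data unit 312, could be represented; the keying scheme is an assumption of this sketch:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class VirtualInteractionDatabase:
    """Database 31 with an image data unit (311) and a voice data unit (312)."""
    image_data_unit: Dict[str, bytes] = field(default_factory=dict)  # virtual interaction data
    voice_data_unit: Dict[str, bytes] = field(default_factory=dict)  # interactive voice data

    def get_virtual_character(self, model_id: str) -> bytes:
        # Virtual interaction data keyed by the mannequin it corresponds to.
        return self.image_data_unit[model_id]

    def get_interactive_voice(self, key: str) -> bytes:
        # Voice clip matching an interaction or a recognized utterance.
        return self.voice_data_unit[key]
```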

The virtual reality module 32 retrieves the virtual interaction data from the virtual interaction database 31 and displays it through the image display module 33, so the user 100 can view the virtual character in a virtual social space, and the signal processing module 34 simulates interaction with the virtual character based on at least the motion behavior signal and the interaction behavior signal. As shown in FIG. 5, the signal processing module 34 of this embodiment has a viewing-angle adjustment unit 341 and an interaction simulation unit 342. The viewing-angle adjustment unit 341 adjusts the angle and distance from which the user 100 views the virtual character according to the motion behavior signal, and the interaction simulation unit 342 determines, from the interaction behavior signal, the body part of the virtual character that the user 100 is simulated to touch; that is, when the user 100 touches the human body model 200, the interaction simulation unit 342 presents the corresponding touched body part of the virtual character.
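A hedged sketch of the two units of the signal processing module 34; the specific geometry and the limb mapping below are illustrative assumptions only:

```python
class ViewAngleAdjustmentUnit:
    """Unit 341: derive the viewing angle and distance from the motion behavior signal."""

    def adjust(self, direction_change_deg: float, displacement_change_m: float,
               current_angle_deg: float, current_distance_m: float) -> tuple:
        new_angle = (current_angle_deg + direction_change_deg) % 360.0
        new_distance = max(0.3, current_distance_m - displacement_change_m)  # keep a minimum distance
        return new_angle, new_distance

class InteractionSimulationUnit:
    """Unit 342: map the mannequin contact position to the avatar body part to present."""

    # Assumed one-to-one mapping; the utility model does not prescribe one.
    LIMB_MAP = {
        "head": "head",
        "left_hand": "left_hand",
        "right_shoulder": "right_shoulder",
    }

    def touched_limb(self, contact_position: str) -> str:
        return self.LIMB_MAP.get(contact_position, "torso")
```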

In a preferred embodiment, as shown in FIG. 2 to FIG. 5, the virtual reality head-mounted device 30 has a third communication module 35 and a voice playback module 36. The third communication module 35 is coupled to the first communication module 12 of the sensor 10 and the second communication module 22 of the interaction sensor 20, and the connection between the third communication module 35 and the first and second communication modules 12, 22 includes, but is not limited to, wireless links such as radio frequency identification (RFID) and Bluetooth. The third communication module 35 receives the motion behavior signal transmitted by the first communication module 12 and the interaction behavior signal transmitted by the second communication module 22.
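For illustration, the three communication modules can be sketched as two senders and a dispatcher; an in-process queue stands in for the RFID/Bluetooth link, which a real device would replace with an actual radio stack:

```python
import queue

# An in-process queue stands in for the RFID/Bluetooth link; this is a sketch, not
# a wireless implementation.
wireless_link: "queue.Queue[tuple]" = queue.Queue()

def first_communication_module_send(motion_signal: dict) -> None:
    """Module 12: transmit the motion behavior signal."""
    wireless_link.put(("motion", motion_signal))

def second_communication_module_send(interaction_signal: dict) -> None:
    """Module 22: transmit the interaction behavior signal."""
    wireless_link.put(("interaction", interaction_signal))

def third_communication_module_receive(handle_motion, handle_interaction) -> None:
    """Module 35: receive both signals and hand them to the signal processing module 34."""
    while not wireless_link.empty():
        kind, payload = wireless_link.get()
        if kind == "motion":
            handle_motion(payload)
        elif kind == "interaction":
            handle_interaction(payload)

# Example: send two signals, then dispatch them to print handlers.
first_communication_module_send({"displacement_change_m": 0.1})
second_communication_module_send({"contact_position": "right_shoulder"})
third_communication_module_receive(print, print)
```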

The voice playback module 36 can play the interactive voice data output by the voice data unit 312. When the virtual reality head-mounted device 30 receives the interaction behavior signal triggered by the interaction sensor 20, the interaction simulation unit of the signal processing module 34 retrieves the corresponding interactive voice data from the voice data unit 312 according to the interaction behavior signal and plays it through the voice playback module 36. In this way, touching the human body model 200 produces sound for the user 100, approaching the realism of genuine interaction.
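A minimal sketch of this interaction-triggered playback path, assuming the voice data unit is keyed by contact position (a choice made only for this example):

```python
from typing import Callable, Dict, Optional

def play_interaction_voice(interaction_signal: dict,
                           voice_data_unit: Dict[str, bytes],
                           play: Callable[[bytes], None]) -> Optional[bytes]:
    """On an interaction behavior signal, fetch the matching clip and hand it to playback.

    `play` stands in for the voice playback module 36; keying the voice data unit
    by contact position is an assumption made only for this sketch.
    """
    clip = voice_data_unit.get(interaction_signal["contact_position"])
    if clip is not None:
        play(clip)
    return clip

# Example with assumed data:
# play_interaction_voice({"contact_position": "right_shoulder"},
#                        {"right_shoulder": b"<audio bytes>"}, print)
```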

It is worth mentioning that the interaction sensor 20 further includes a model state module 23. The model state module 23 determines the overall contour and current position of the human body model 200 to generate model state information, which is transmitted to the third communication module 35 through the second communication module 22. The virtual reality module 32 can present the virtual character according to the model state information, so the user 100 obtains the current position and posture of the virtual character in the virtual social space, which improves the accuracy of the simulated observation.
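The model state flow might be represented as follows; the field names and units are assumptions of this sketch:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ModelStateInfo:
    """Model state information produced by the model state module 23 for the mannequin 200."""
    outline: Tuple[float, float, float]   # assumed bounding height, width, depth in metres
    position: Tuple[float, float, float]  # current location in the room
    posture: str                          # e.g. "sitting", "lying"

def present_virtual_character(state: ModelStateInfo) -> dict:
    """Virtual reality module 32: place and pose the avatar where the mannequin actually is."""
    return {
        "avatar_position": state.position,
        "avatar_posture": state.posture,
        "avatar_bounds": state.outline,
    }
```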

In this way, the sensor 10 accurately captures the body movements of the user 100 and the interaction sensor 20 senses the state of the user 100 touching the human body model 200, so that the virtual reality head-mounted device 30, using the motion behavior signal detected by the sensor 10 and the interaction behavior signal detected by the interaction sensor 20, can accurately simulate the interaction between the user 100 and the virtual character. The user 100 feels as if truly interacting with or touching the character, which makes the experience more realistic, improves the realism of the simulated social interaction, and achieves an immersive experience.

In a preferred embodiment, as shown in FIG. 5, the virtual reality head-mounted device 30 is further provided with a sound processing module 37 and a voice interaction module 38.

The sound processing module 37 can receive speech to be recognized uttered by the user 100 and process it into a voice signal. The voice interaction module 38 receives the voice signal from the sound processing module 37, retrieves the matching interactive voice data from the voice data unit 312 of the virtual interaction database 31 according to the voice signal, and plays it through the voice playback module 36. In this way, the present utility model further provides audio feedback, so the user 100 feels as if genuinely conversing with the virtual character, which makes the simulated social interaction more engaging.
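A sketch of the speech path from module 37 through module 38 to playback; actual speech recognition is replaced by a trivial placeholder, since the utility model does not specify one:

```python
from typing import Callable, Dict, Optional

def sound_processing_module(raw_audio: bytes) -> str:
    """Module 37: turn speech to be recognized into a voice signal.

    Real speech recognition is outside this sketch; the placeholder merely labels
    the utterance so the lookup below can be demonstrated.
    """
    return "greeting" if raw_audio else "unknown"

def voice_interaction_module(voice_signal: str,
                             voice_data_unit: Dict[str, bytes],
                             play: Callable[[bytes], None]) -> Optional[bytes]:
    """Module 38: fetch the interactive voice data matching the voice signal and play it."""
    reply = voice_data_unit.get(voice_signal)
    if reply is not None:
        play(reply)  # voice playback module 36
    return reply

# Example with assumed data:
voice_data_unit = {"greeting": b"<audio bytes>"}
voice_interaction_module(sound_processing_module(b"hi"), voice_data_unit, print)
```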

In summary, the present utility model has the following effects:

1. Through the auxiliary detection provided by the sensor 10 and the interaction sensor 20, the virtual reality head-mounted device 30 can accurately simulate the interaction between the user 100 and the virtual character, thereby improving the realism of the simulated social interaction and achieving an immersive experience.

2. The present utility model provides visual and audio feedback, so the user 100 feels as if genuinely interacting with the virtual character, which makes the simulated social interaction more engaging.

The embodiments described above are intended only to illustrate the present utility model and not to limit its scope. All modifications or variations that do not depart from the spirit of the present utility model fall within the scope it intends to protect.

100: user
200: human body model
10: sensor
11: motion detection module
12: first communication module
20: interaction sensor
21: interaction detection module
22: second communication module
23: model state module
30: virtual reality head-mounted device
31: virtual interaction database
311: image data unit
312: voice data unit
32: virtual reality module
33: image display module
34: signal processing module
341: viewing-angle adjustment unit
342: interaction simulation unit
35: third communication module
36: voice playback module
37: sound processing module
38: voice interaction module

FIG. 1 is a system architecture diagram of an embodiment of the present utility model.
FIG. 2 is a schematic diagram of the sensor of the embodiment.
FIG. 3 is a schematic diagram of the interaction sensor of the embodiment.
FIG. 4 is a schematic diagram of the virtual reality head-mounted device of the embodiment.
FIG. 5 is a functional diagram of the virtual reality head-mounted device of the embodiment.

100: user
200: human body model
10: sensor
20: interaction sensor
30: virtual reality head-mounted device

Claims (10)

1. A behavior data processing system, comprising:
a sensor disposed on a user, the sensor detecting the user's positional trajectory to generate a motion behavior signal;
an interaction sensor disposed on a human body model, the interaction sensor sensing a plurality of actions in which the user touches the human body model and determining an interaction behavior signal from the actions; and
a virtual reality head-mounted device coupled to the sensor and the interaction sensor, the virtual reality head-mounted device being provided with a virtual interaction database, a virtual reality module, an image display module, and a signal processing module, wherein the virtual interaction database stores a plurality of virtual interaction data records, the virtual interaction data includes a virtual character corresponding to the human body model, the virtual reality module retrieves the virtual interaction data from the virtual interaction database and displays it through the image display module so that the user can view the virtual character in a virtual social space, and the signal processing module simulates interaction with the virtual character based on at least the motion behavior signal and the interaction behavior signal.

2. The behavior data processing system of claim 1, wherein the sensor includes a motion detection module and a first communication module, the motion detection module uses the group consisting of the user's posture change, direction change, angle change, displacement change, and speed change as the basis for determining the user's positional trajectory and generates the motion behavior signal, and the first communication module transmits the motion behavior signal.

3. The behavior data processing system of claim 2, wherein the sensor is selected from the group consisting of a speedometer, a gyroscope, a barometer, a magnetometer, an electronic compass, and a heart rate monitor.

4. The behavior data processing system of claim 2, wherein the interaction sensor includes an interaction detection module and a second communication module, the interaction detection module determines the interaction behavior signal from the contact position, pressure change, temperature change, and humidity change produced when the user touches the human body model, and the second communication module transmits the interaction behavior signal.

5. The behavior data processing system of claim 4, wherein the interaction sensor is one of a pressure sensor, a temperature sensor, and a humidity sensor.

6. The behavior data processing system of claim 4, wherein the virtual reality head-mounted device is provided with a third communication module coupled to the first communication module and the second communication module, and the third communication module receives the motion behavior signal transmitted by the first communication module and the interaction behavior signal transmitted by the second communication module.

7. The behavior data processing system of claim 6, wherein the interaction sensor further includes a model state module, the model state module determines the overall contour and current position of the human body model to generate model state information, which is transmitted to the third communication module through the second communication module, and the virtual reality module presents the virtual character according to the model state information.

8. The behavior data processing system of claim 6, wherein the signal processing module is provided with a viewing-angle adjustment unit and an interaction simulation unit, the viewing-angle adjustment unit adjusts the angle and distance from which the user views the virtual character according to the motion behavior signal, and the interaction simulation unit determines, from the interaction behavior signal, the body part of the virtual character that the user is simulated to touch.

9. The behavior data processing system of claim 8, wherein the virtual interaction database is provided with an image data unit and a voice data unit, the image data unit stores the virtual interaction data, the voice data unit stores a plurality of interactive voice data records, the interaction simulation unit retrieves the corresponding interactive voice data from the voice data unit according to the interaction behavior signal, and the virtual reality head-mounted device is further provided with a voice playback module that plays the interactive voice data.

10. The behavior data processing system of claim 9, wherein the virtual reality head-mounted device is provided with a sound processing module and a voice interaction module, the sound processing module receives speech to be recognized uttered by the user and processes the speech into a voice signal, and the voice interaction module retrieves the matching interactive voice data from the voice data unit according to the voice signal and plays it through the voice playback module.
TW109206990U 2020-06-04 2020-06-04 Behavior data processing system TWM600411U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109206990U TWM600411U (en) 2020-06-04 2020-06-04 Behavior data processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109206990U TWM600411U (en) 2020-06-04 2020-06-04 Behavior data processing system

Publications (1)

Publication Number Publication Date
TWM600411U true TWM600411U (en) 2020-08-21

Family

ID=73004779

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109206990U TWM600411U (en) 2020-06-04 2020-06-04 Behavior data processing system

Country Status (1)

Country Link
TW (1) TWM600411U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI816089B (en) * 2021-02-24 2023-09-21 臺北醫學大學 Interpersonal interaction state analysis suggestion system and method


Similar Documents

Publication Publication Date Title
US20050113167A1 (en) Physical feedback channel for entertainement or gaming environments
US4540176A (en) Microprocessor interface device
US10376785B2 (en) Audio, video, simulation, and user interface paradigms
US8019121B2 (en) Method and system for processing intensity from input devices for interfacing with a computer program
US7627139B2 (en) Computer image and audio processing of intensity and input devices for interfacing with a computer program
US20070149282A1 (en) Interactive gaming method and apparatus with emotion perception ability
EP3364272A1 (en) Automatic localized haptics generation system
JP5116679B2 (en) Intensive computer image and sound processing and input device for interfacing with computer programs
US10166477B2 (en) Image processing device, image processing method, and image processing program
CN106652656A (en) Learning and playing method and device by means of virtual musical instrument and glasses or helmet using the same
US8517837B2 (en) Control unit for a video games console provided with a tactile screen
KR100897537B1 (en) Toy robot
TWI427573B (en) Limb interactively learning method and apparatus
TWM600411U (en) Behavior data processing system
US11947399B2 (en) Determining tap locations on a handheld electronic device based on inertial measurements
Kawazoe et al. Tactile echoes: Multisensory augmented reality for the hand
CN103419204A (en) Finger guessing game robot system
US20220043505A1 (en) Behavior data processing system
CN111610857A (en) Gloves with interactive installation is felt to VR body
WO2016004846A1 (en) Interactive gaming device
TWI807372B (en) Virtualized user-interface device
CN214504972U (en) Intelligent musical instrument
Kojima et al. SHITARA: Sending Haptic Induced Touchable Alarm by Ring-shaped Air vortex
CN109420235A (en) A kind of virtual reality depressurized system
TWI715079B (en) Network learning system and method thereof