TWI835289B - Virtual and real interaction method, computing system used for virtual world, and virtual reality system - Google Patents


Info

Publication number
TWI835289B
Authority
TW
Taiwan
Prior art keywords
object model
sensing
space
virtual
sensing data
Prior art date
Application number
TW111134062A
Other languages
Chinese (zh)
Other versions
TW202311912A (en)
Inventor
張智凱
陳千茱
林肯平
季成亜
張耀霖
伍瀅杰
Original Assignee
仁寶電腦工業股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 仁寶電腦工業股份有限公司
Publication of TW202311912A
Application granted granted Critical
Publication of TWI835289B


Abstract

A virtual and real interaction method, a computing system used for a virtual world, and a virtual reality system are provided. In the method, a first object model is generated according to first sensing data, a second object model is generated according to second sensing data, the behaviors of the first object model and the second object model in a virtual scene are determined according to the first sensing data and the second sensing data, a first image stream is generated according to the behavior of the first object model in the virtual scene, and a second image stream is generated according to the behavior of the second object model in the virtual scene. The first image stream is provided to be displayed by a remote display apparatus. The second image stream is provided to be displayed by a local display apparatus. Accordingly, the interaction experience can be improved.

Description

Virtual-real interaction method, computing system for a virtual world, and virtual reality system

The present invention relates to a simulated-experience technology, and in particular to a virtual-real interaction method, a computing system for a virtual world, and a virtual reality system.

Today, technologies for simulating sensations, perception, and/or environments, such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR), are gaining popularity. These technologies can be applied in many fields (for example, gaming, military training, healthcare, and remote work). Past virtual worlds usually relied on pre-built reality content or pre-recorded venues, so users could not obtain real-time two-way interaction during the experience.

In view of this, embodiments of the present invention provide a virtual-real interaction method, a computing system for a virtual world, and a virtual reality system that can fuse image streams corresponding to objects at different locations and allow two objects to interact in the same virtual scene.

The virtual-real interaction method of an embodiment of the present invention includes (but is not limited to) the following steps: generating a first object model according to first sensing data; generating a second object model according to second sensing data; determining the behaviors of the first object model and the second object model in a virtual scene according to the first sensing data and the second sensing data; generating a first image stream according to the behavior of the first object model in the virtual scene; and generating a second image stream according to the behavior of the second object model in the virtual scene. The first sensing data is obtained by sensing a first physical object. The second sensing data is obtained by sensing a second physical object. The first image stream is to be displayed by a remote display device. The second image stream is to be displayed by a local display device.

The computing system for a virtual world of an embodiment of the present invention includes (but is not limited to) one or more memories and one or more processors. The memory stores one or more pieces of program code. The processor is coupled to the memory and is configured to load the program code to execute: generating a first object model according to first sensing data; generating a second object model according to second sensing data; determining the behaviors of the first object model and the second object model in a virtual scene according to the first sensing data and the second sensing data; generating a first image stream according to the behavior of the first object model in the virtual scene; and generating a second image stream according to the behavior of the second object model in the virtual scene. The first sensing data is obtained by sensing a first physical object. The second sensing data is obtained by sensing a second physical object. The first image stream is to be displayed by a remote display device. The second image stream is to be displayed by a local display device.

The virtual reality system of an embodiment of the present invention includes (but is not limited to) two first space sensing devices, one or more computing devices, and a local display device. The first space sensing devices sense a first physical object to obtain first sensing data. The computing device is configured to: generate a first object model according to the first sensing data; generate a second object model according to second sensing data; determine the behaviors of the first object model and the second object model in a virtual scene according to the first sensing data and the second sensing data; generate a first image stream according to the behavior of the first object model in the virtual scene; and generate a second image stream according to the behavior of the second object model in the virtual scene. The second sensing data is obtained by sensing a second physical object through two second space sensing devices. The first image stream is to be displayed by a remote display device. The local display device displays the second image stream.

Based on the above, the virtual-real interaction method, the computing system for a virtual world, and the virtual reality system of the embodiments of the present invention sense different objects separately to generate corresponding object models, determine the behaviors of the two object models in the same virtual scene, and generate image streams for display on different display devices. In this way, the motion of objects can be sensed in real time, and the two objects can interact reasonably and smoothly in the virtual scene, thereby improving the virtual-world experience.
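As a structural overview of this flow (steps S410 to S450, detailed below), the following is a minimal Python sketch; every type and function in it is a hypothetical stand-in for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ObjectModel:
    name: str
    position: tuple  # position in the shared virtual scene

# All functions below are hypothetical stand-ins for the claimed steps.
def build_object_model(sensing_data: dict) -> ObjectModel:
    # Steps S410/S420: build a model from sensing data (for example,
    # holography for people, recognition plus a pre-stored model for items).
    return ObjectModel(sensing_data["label"], sensing_data["position"])

def decide_behaviors(first: ObjectModel, second: ObjectModel) -> dict:
    # Step S430: decide both models' behaviors in the shared scene.
    return {first.name: "kick", second.name: "run"}

def render_stream(viewer: str, behaviors: dict) -> list:
    # Steps S440/S450: render frames of the scene for one viewer.
    return [f"frame of {behaviors} for {viewer}"]

first = build_object_model({"label": "local user", "position": (0.0, 0.0)})
second = build_object_model({"label": "remote user", "position": (3.0, 1.0)})
behaviors = decide_behaviors(first, second)
remote_stream = render_stream("remote display", behaviors)  # first stream
local_stream = render_stream("local display", behaviors)    # second stream
```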

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

FIG. 1 is a block diagram of the components of a virtual reality system 1 according to an embodiment of the present invention. Referring to FIG. 1, the virtual reality system 1 includes (but is not limited to) one or more space sensing devices 11, one or more wearable devices 12, one or more mobile devices 13, a local display device 14, one or more space sensing devices 21, one or more wearable devices 22, one or more mobile devices 23, a remote display device 24, and a server 30.

In an embodiment, the one or more space sensing devices 11, wearable devices 12, mobile devices 13, and the local display device 14 are located at a first location/environment/space/field (hereinafter collectively referred to as the first location), and the one or more space sensing devices 21, wearable devices 22, mobile devices 23, and the remote display device 24 are located at a second location/environment/space/field (hereinafter collectively referred to as the second location). In this embodiment, it is assumed that a local user is located at the first location and a remote user is located at the second location. However, the embodiments of the present invention do not limit the distance between the two locations or the objects located there (that is, the objects are not limited to people and may also be sports-related objects such as balls, toys, controllers, and batting tools).

It should be noted that "local" and "remote" are named from the local user's point of view, so they may be defined or named differently for the remote user or other users.

It should also be noted that in FIG. 1 the local user is a student and the remote user is a coach. However, in some application scenarios the coach does not necessarily need to present a holographic image of himself or herself and can teach the student by voice alone, so the dotted lines in FIG. 1 indicate that the remote user (the coach) may optionally use or not use the space sensing device 21 and the wearable device 22.

FIG. 2A is a block diagram of the components of the space sensing device 11 according to an embodiment of the present invention. Referring to FIG. 2A, the space sensing device 11 includes (but is not limited to) an image sensing module 111, a motion tracking module 112, a communication module 113, a distance sensor 114, a memory 115, and a processor 116.

The image sensing module 111 may be a camera, an image scanner, a video camera, a depth camera, a stereo camera, or another device for capturing images. In an embodiment, the image sensing module 111 may include an image sensor (for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor), an optical lens, an image control circuit, and other components. It should be noted that the lens specifications of the image sensing module 111 (for example, aperture, magnification, focal length, field of view, and image sensor size) and the number of modules can be adjusted according to actual needs. For example, the image sensing module 111 may include a 180-degree lens to provide a capture range with a larger field of view. In an embodiment, the image sensing module 111 captures images and/or depth information, which serve as sensing data.

The motion tracking module 112 may be an accelerometer, a gyroscope, a magnetometer, an electronic compass, an inertial measurement unit, or a sensor with three or more axes. In an embodiment, the motion tracking module 112 obtains motion-related information such as speed, acceleration, angular velocity, inclination, and displacement, which serves as sensing data.

The communication module 113 may be a communication transceiver supporting, for example, fourth-generation (4G) or other generations of mobile communication, Wi-Fi, Bluetooth, infrared, radio frequency identification (RFID), Ethernet, or optical fiber networks, a serial communication interface (such as RS-232), or a universal serial bus (USB), Thunderbolt, or other communication transmission interface. In the embodiments of the present invention, the communication module 113 transmits data to or receives data from other electronic devices (for example, the wearable device 12 or the mobile device 13).

The distance sensor 114 may be a radar, a time-of-flight (ToF) camera, a LiDAR scanner, a depth sensor, an infrared rangefinder, an ultrasonic sensor, or another ranging-related sensor. In an embodiment, the distance sensor 114 can detect the azimuth of an object under measurement, that is, the azimuth of the object relative to the distance sensor 114. In another embodiment, the distance sensor 114 can detect the distance of the object under measurement, that is, the distance of the object relative to the distance sensor 114. In some embodiments, the detection results (for example, azimuth and/or distance) of the one or more distance sensors 114 serve as sensing data.

The memory 115 may be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, conventional hard disk drive (HDD), solid-state drive (SSD), or a similar component. In an embodiment, the memory 115 stores program code, software modules, configurations, data (for example, sensing data, object models, and image streams), or files, and embodiments thereof are detailed later.

The processor 116 is coupled to the image sensing module 111, the motion tracking module 112, the communication module 113, the distance sensor 114, and the memory 115. The processor 116 may be a central processing unit (CPU), a graphics processing unit (GPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a neural network accelerator, another similar component, or a combination of the above components. In an embodiment, the processor 116 executes all or part of the operations of the space sensing device 11 and can load and execute the program code, software modules, files, and data stored in the memory 115. In some embodiments, the functions of the processor 116 may be implemented in software or in a chip.

The implementation and components of the space sensing device 21 may be understood with reference to the description of the space sensing device 11 and are not repeated here.

FIG. 2B is a block diagram of the components of the wearable device 12 according to an embodiment of the present invention. Referring to FIG. 2B, the wearable device 12 may be a smart bracelet, a smart watch, a handheld controller, a smart waist band, a smart ankle band, a smart headgear, a head-mounted display, or another sensing device worn on a body part. The wearable device 12 includes (but is not limited to) a motion tracking module 122, a communication module 123, a memory 125, and a processor 126. For the motion tracking module 122, the communication module 123, the memory 125, and the processor 126, refer to the descriptions of the motion tracking module 112, the communication module 113, the memory 115, and the processor 116, respectively; they are not repeated here.

The implementation and components of the wearable device 22 may be understood with reference to the description of the wearable device 12 and are not repeated here.

FIG. 2C is a block diagram of the components of the mobile device 13 according to an embodiment of the present invention. Referring to FIG. 2C, the mobile device 13 may be a mobile phone, a tablet computer, or a laptop computer. The mobile device 13 includes (but is not limited to) a communication module 133, a memory 135, and a processor 136. For the communication module 133, the memory 135, and the processor 136, refer to the descriptions of the communication module 113, the memory 115, and the processor 116, respectively; they are not repeated here.

The implementation and components of the mobile device 23 may be understood with reference to the description of the mobile device 13 and are not repeated here.

FIG. 2D is a block diagram of the components of the local display device 14 according to an embodiment of the present invention. Referring to FIG. 2D, the local display device 14 may be a head-mounted display or smart glasses. The local display device 14 includes (but is not limited to) an image sensing module 141, a motion tracking module 142, a communication module 143, a display 144, a memory 145, and a processor 146.

The display 144 may be a liquid-crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a quantum dot display, or another type of display. In an embodiment, the display 144 displays images.

For the image sensing module 141, the motion tracking module 142, the communication module 143, the memory 145, and the processor 146, refer to the descriptions of the image sensing module 111, the motion tracking module 112, the communication module 113, the memory 115, and the processor 116, respectively; they are not repeated here.

In some embodiments, the local display device 14 may not include the motion tracking module 142 and/or the image sensing module 141 and may be a television or a monitor.

The implementation and components of the remote display device 24 may be understood with reference to the description of the local display device 14 and are not repeated here.

FIG. 2E is a block diagram of the components of the server 30 according to an embodiment of the present invention. Referring to FIG. 2E, the server 30 includes (but is not limited to) a communication module 33, a memory 35, and a processor 36. For the communication module 33, the memory 35, and the processor 36, refer to the descriptions of the communication module 113, the memory 115, and the processor 116, respectively; they are not repeated here.

FIG. 3A is a schematic diagram of a set of two space sensing devices according to an embodiment of the present invention, FIG. 3B is a schematic diagram of one of the space sensing devices of FIG. 3A, and FIG. 3C is a schematic diagram of the other space sensing device of FIG. 3A with a mobile device 13 placed on it. As shown in FIG. 3A, two space sensing devices 11 can be combined for easy portability. As shown in FIG. 3C, the space sensing device 11 has a platform on which the mobile device 13 or other objects can be placed. In an embodiment, the space sensing device 11 further includes a wireless charging module (not shown) to supply power to the mobile device 13 or other electronic devices. The space sensing device 11 may be provided with a notch/window/opening 119 for the image sensing module 111 and/or the distance sensor 114 to transmit and receive signals.

Referring to FIG. 2A to FIG. 2E, in an embodiment, a computing system 2 includes one or more of the memories 115, 125, 135, 145, and 35 and one or more of the processors 116, 126, 136, 146, and 36. The one or more processors 116, 126, 136, 146, and 36 load the program code stored in the memories 115, 125, 135, 145, and 35 to execute/implement the virtual-real interaction method of the embodiments of the present invention described below. In some embodiments, multiple devices may be integrated into a single device.

In the following, the method described in the embodiments of the present invention is explained with reference to the devices, components, and modules of the virtual reality system 1 and/or the computing system 2. Each step of the method can be adjusted according to the implementation situation and is not limited thereto. For convenience of explanation, the processor 36 of the server 30 (see FIG. 2E) is taken as the execution subject of the method proposed in the embodiments of the present invention. However, all or part of the operations executed by any of the processors 116, 126, 136, 146, and 36 may be executed or implemented by another of these processors, and the embodiments of the present invention do not limit the execution subject of the proposed method. In addition, data transfer between devices can be realized through the communication modules 113, 123, 133, 143, or 33, respectively.

FIG. 4 is a flowchart of a virtual-real interaction method according to an embodiment of the present invention. Referring to FIG. 4, the processor 36 generates a first object model according to first sensing data (step S410). Specifically, referring to FIG. 1 and FIG. 2A to FIG. 2D, the first sensing data is the sensing data of the image sensing modules 111 and 141, the motion tracking modules 112, 122, and 142, and/or the distance sensor 114 of the space sensing device 11, the wearable device 12, and/or the local display device 14 located at the first location (that is, the local user's location), for example, information such as images, depth, distance, speed, rotation, position, and orientation.

The first sensing data is obtained by sensing a first physical object located at the first location, for example, the distance between the space sensing device 11 and the first physical object, the moving speed and displacement of the first physical object, or the depth information of the first physical object.

In an embodiment, the first physical object is a first person. The processor 36 may use holographic technology to generate a three-dimensional first object model. Holography uses the principles of interference and refraction to record the amplitude and/or phase information of the light waves reflected or transmitted by the illuminated object, so that the recorded image gives a stereoscopic visual impression. The image sensing module 111 can emit light signals through a laser and receive echo signals through a sensing element. For example, the processor 116 of the space sensing device 11 can generate holography-related sensing data (for example, the amplitude and/or phase of the aforementioned light waves) based on the echo signals. The processor 36 can then generate the first object model of the first physical object based on the holography-related sensing data.

In another embodiment, the processor 36 may generate the first object model using three-dimensional imaging technologies such as time-of-flight, point/line scanning, structured light projection, optical deflection, or stereo vision.

In an embodiment, the first physical object is a first item rather than a person. The processor 36 can recognize the first item. For example, the processor 36 may implement item recognition based on a neural-network algorithm (for example, YOLO (You Only Look Once), region-based convolutional neural networks (R-CNN), or Fast R-CNN) or a feature-matching algorithm (for example, feature comparison using a histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), Haar features, or speeded up robust features (SURF)). The processor 36 can determine whether an object in the images captured by the image sensing modules 111 and 141 is a predefined first item. Depending on the application, the first item may be a sports-related object such as a ball, a hoop, a sports tool, or a batting tool.
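As one concrete instance of the feature-matching branch, the sketch below uses OpenCV's ORB descriptors with brute-force matching to decide whether a captured frame contains a pre-registered item. The file paths and both thresholds are assumptions; a production system could instead use one of the neural-network detectors named above.

```python
import cv2

# Assumed file paths: a reference photo of the registered item and a
# frame captured by the image sensing module.
reference = cv2.imread("registered_item.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("captured_frame.png", cv2.IMREAD_GRAYSCALE)
assert reference is not None and frame is not None

orb = cv2.ORB_create()
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

# Brute-force Hamming matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_ref, des_frm)

# Keep only close matches; both the distance cut-off and the required
# number of good matches are assumed values.
good = [m for m in matches if m.distance < 40]
is_first_item = len(good) >= 50
print("predefined first item detected:", is_first_item)
```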

In addition to recognizing the first item with an algorithm, in another embodiment the type of the first item may be changed according to an input command obtained through an input device (not shown; for example, a keyboard, a mouse, a gesture recognition module, or a voice input module). For example, if the input command is the voice command "kick the ball", the type of the first item relates to a soccer ball. As another example, if the input command is the gesture command "throw the ball", the type of the first item relates to a basketball or a baseball.

After recognizing the first item, the processor 36 can obtain a pre-stored first object model according to the recognition result of the first item. That is, the first object model is a three-dimensional model that is created, obtained, or stored in advance. For example, the processor 36 may download the first object model from the Internet through the communication module 133 and pre-store it for subsequent use. As another example, the first item may be scanned with a three-dimensional scanner to create the first object model and pre-store it for subsequent use. In this way, in addition to saving the software and hardware resources needed to reconstruct the first object model holographically, the movement of other irrelevant items can also be excluded.
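A minimal sketch of the pre-stored model lookup; the label-to-file mapping and file names are invented for illustration.

```python
# Hypothetical mapping from a recognition label to a pre-stored 3D model
# downloaded earlier or produced by a 3D scanner.
PRESTORED_MODELS = {
    "soccer_ball": "models/soccer_ball.glb",
    "soccer_goal": "models/soccer_goal.glb",
    "bat": "models/bat.glb",
}

def load_first_object_model(label: str):
    # Returns the path of the pre-stored model, or None when the
    # recognized item has no registered model.
    return PRESTORED_MODELS.get(label)

print(load_first_object_model("soccer_ball"))  # models/soccer_ball.glb
```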

For example, FIG. 5A is a schematic diagram of first objects at the first location according to an embodiment of the present invention. Referring to FIG. 1 and FIG. 5A, the first objects may include a person 51, a soccer ball 52, and a soccer goal 53. The processor 36 can generate the object model of the person 51 based on a holographic image and load the pre-stored object models of the soccer ball 52 and the soccer goal 53 based on the recognition results.

In an embodiment, the image sensing module 111 or 141 includes a 180-degree lens. The processor 36 can merge the images captured by two or more image sensing modules 111 or 141 with overlapping fields of view into a 360-degree panoramic image. For example, the processor 36 stitches two or more images through homography, image warping, and image blending techniques.
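A minimal stitching sketch using OpenCV's high-level Stitcher, which internally performs the homography estimation, warping, and blending mentioned above; the input file names are assumptions.

```python
import cv2

# Two captures with overlapping fields of view (assumed files).
images = [cv2.imread("cam_front.jpg"), cv2.imread("cam_back.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama_360.jpg", panorama)
else:
    print("stitching failed, status code:", status)
```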

Continuing with FIG. 4, the processor 36 generates a second object model according to second sensing data (step S420). Specifically, referring to FIG. 1, the second sensing data is the sensing data of the image sensing module, motion tracking module, and/or distance sensor (not shown; refer to the same modules in FIG. 2A to FIG. 2E) of the space sensing device 21, the wearable device 22, and/or the remote display device 24 located at the second location (that is, the remote user's location), for example, information such as images, depth, distance, speed, applied force, rotation, position, and orientation.

The second sensing data is obtained by sensing a second physical object located at the second location, for example, the distance between the space sensing device 21 and the second physical object, the moving speed and displacement of the second physical object, or the depth information of the second physical object.

In an embodiment, the second physical object is a second person. The processor 36 may use holographic technology to generate a three-dimensional second object model. The generation of the second object model of the second person may be understood with reference to the foregoing description of the first object model of the first person and is not repeated here. In other embodiments, the processor 36 may also generate the second object model using other three-dimensional imaging technologies such as time-of-flight, point/line scanning, structured light projection, optical deflection, or stereo vision.

In an embodiment, the second physical object may also be a second item, that is, a sports-related object such as a ball, a hoop, a sports tool, or a batting tool. The generation of the second object model of the second item may be understood with reference to the foregoing description of the first object model and is not repeated here. It should be noted that since the processor 36 has already generated the first object model of the first item, in some cases it is not necessary to generate a second object model of the second item.

For example, FIG. 5B is a schematic diagram of a second object at the second location according to an embodiment of the present invention. Referring to FIG. 1 and FIG. 5B, the second object may include only a person 55. The processor 36 can generate the object model of the person 55 based on a holographic image. That is to say, the object models (that is, the holographic images) of the person 51 and the person 55 can share the pre-stored object models of the soccer ball 52 and the soccer goal 53 in the virtual scene.

Referring to FIG. 4, the processor 36 determines the behaviors of the first object model and the second object model in the virtual scene according to the first sensing data and the second sensing data (step S430). Specifically, the virtual scene (also called the virtual world) is a virtual space generated through spatial scanning or simulated by a computing device. The processor 36 can determine the motion information of the first physical object in the real space of the first location according to the sensing data of the image sensing modules 111 and 141, the motion tracking modules 112, 122, and 142, and/or the distance sensor 114 of the space sensing device 11, the wearable device 12, and/or the local display device 14. Likewise, the processor 36 determines the motion information of the second physical object in the real space of the second location according to the sensing data of the image sensing module, motion tracking module, and/or distance sensor of the space sensing device 21, the wearable device 22, and/or the remote display device 24. The motion information is, for example, speed, direction, and/or displacement.
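For illustration, such motion information could be derived from timestamped position samples as in the sketch below; the sample values and the choice of a 2D position representation are assumptions.

```python
import numpy as np

# Timestamped 2D positions (meters) of one physical object, for example
# fused from the distance sensor and image sensing module (assumed samples).
t = np.array([0.0, 0.1, 0.2])                  # seconds
p = np.array([[0.0, 0.0], [0.3, 0.1], [0.7, 0.2]])

displacement = p[-1] - p[0]                    # total displacement vector
velocity = (p[-1] - p[-2]) / (t[-1] - t[-2])   # latest velocity vector
speed = np.linalg.norm(velocity)               # scalar speed, m/s
direction = velocity / speed                   # unit direction vector

print(f"speed={speed:.2f} m/s, direction={direction}, "
      f"displacement={displacement}")
```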

The processor 36 can determine the behaviors of the two objects in the virtual scene according to the motion information of the first physical object and the second physical object in their respective real spaces. The processor 36 can simulate in the virtual scene the behaviors of the first physical object and the second physical object in their respective real spaces. For example, if the first physical object kicks a ball, the first object model kicks the ball. As another example, if the second physical object runs, the second object model moves quickly.

The processor 36 generates a first image stream according to the behavior of the first object model in the virtual scene (step S440), and generates a second image stream according to the behavior of the second object model in the virtual scene (step S450). Specifically, to allow physical objects located at different locations to interact, the processor 36 generates the first object model and the second object model in one virtual scene; for example, the virtual scene includes the first object model of the person 51 generated from a holographic image and the second object model of the person 55, and the pre-stored first object models of the soccer ball 52 and the soccer goal 53 are loaded from the server 30. The processor 36 can respectively project or convert the positions of the first physical object and the second physical object in real space into the virtual scene, and accordingly determine the positions of the first object model and the second object model in the virtual scene. In addition, the processor 36 simulates the behaviors of the first object model and the second object model in the virtual scene according to the behaviors determined in step S430. For example, if the behavior of the first physical object in real space is kicking a ball, the first object model simulates the ball-kicking action in the virtual scene.

In an embodiment, the processor 36 can determine the interaction between the first object model and the second object model according to the first sensing data and the second sensing data. For example, the processor 36 can determine whether the two object models collide, touch, overlap, or otherwise interact according to the positions of the first and second object models in the virtual scene.

The processor 36 can determine the behaviors of the first object model and the second object model according to the interaction. For example, if the second object model is stationary in the virtual scene and the first object model moves and collides with it, the second object model simulates the movement caused by the collision. It should be noted that the interaction of the first and second object models may differ depending on the physical characteristics and behaviors of the physical objects and/or the application scenario. Practitioners can design the interactive content according to actual needs, and the embodiments of the present invention impose no limitation.
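A minimal collision test for such interactions, approximating each object model as a sphere in the scene's coordinates; the radii and positions are invented for illustration.

```python
import numpy as np

def spheres_collide(center_a, radius_a, center_b, radius_b) -> bool:
    # Two sphere-approximated models interact (collide/touch/overlap)
    # when the distance between centers is within the sum of radii.
    gap = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b))
    return gap <= radius_a + radius_b

# A moving first model reaching a stationary second model (assumed values).
print(spheres_collide((0.0, 0.0, 0.0), 0.5, (0.8, 0.0, 0.0), 0.5))  # True
```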

The processor 36 captures images of the virtual scene and the first object model (if it appears in the field of view) from a virtual viewpoint, and accordingly generates the images of one or more frames of the first image stream. For example, a tracking camera system, that is, a camera system in a virtual camera system used to track character movement, or a fixed-viewpoint camera system may be used. In addition, the processor 36 captures images of the virtual scene and the second object model (if it appears in the field of view) from a virtual viewpoint, and accordingly generates the images of one or more frames of the second image stream.

The first image stream generated from the first object model can be displayed by the remote display device 24. For example, the server 30 transmits the first image stream to the remote display device 24, and the display of the remote display device 24 (not shown; refer to the display 144 of the local display device 14) displays the first image stream. For example, the person 55 at the second location can see, on the remote display device 24, that the virtual scene includes the holographic image generated from the person 51 at the first location, as well as the pre-stored images of the virtual soccer ball and soccer goal loaded from the server after recognizing the soccer ball 52 and the soccer goal 53 at the first location.

On the other hand, the second image stream generated from the second object model can be displayed by the local display device 14. For example, the server 30 transmits the second image stream to the local display device 14 through the mobile device 13, and the display 144 of the local display device 14 displays the second image stream. For example, the person 51 at the first location can see, on the local display device 14, that the virtual scene includes the holographic image generated from the person 55 at the second location, as well as the aforementioned pre-stored images of the virtual soccer ball and soccer goal.

In one application scenario, in order for the field ranges of the first location (for example, local) and the second location (for example, remote) to be calculated by the same standard, the space sensing devices 11 and 21 used at the first location and the second location must be placed with consistent positions, orientations, and distances.

FIG. 6 is a flowchart of spatial calibration according to an embodiment of the present invention. Referring to FIG. 6, the processor 36 may determine a first space according to third sensing data of a first sensing device (for example, the space sensing device 11) (step S610). Specifically, the third sensing data is obtained by sensing the space at the first location, for example, the relative distance between the two space sensing devices 11 or between a space sensing device 11 and an obstacle (for example, a wall, table, or chair) and the orientation of the space sensing device 11. The processor 36 can determine the first space according to the distance and orientation between the two space sensing devices 11 or between a space sensing device 11 and an obstacle. For example, the processor 36 determines the sensing ranges of the two space sensing devices 11 and takes the union of the sensing ranges as the first space.
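A sketch of that union, with each sensing range approximated as a circle around its device; the device positions, the 3 m radius, and the use of shapely as a geometry helper are all assumptions.

```python
from shapely.geometry import Point

# Each sensing range is approximated as a circle around the device
# (positions and radius are assumed values).
range_a = Point(0.0, 0.0).buffer(3.0)   # device 11a
range_b = Point(4.0, 0.0).buffer(3.0)   # device 11b

first_space = range_a.union(range_b)    # union of the sensing ranges
print(f"first space area: {first_space.area:.1f} m^2")
minx, miny, maxx, maxy = first_space.bounds
print(f"extent: {maxx - minx:.1f} m x {maxy - miny:.1f} m")
```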

The processor 36 may compare the first space with the space specification of the virtual scene (step S620). Specifically, the processor 36 may define the space specification according to the type of the virtual scene and/or application scenario. For example, soccer practice requires a space of 5 by 10 meters, while rhythmic dance requires a space of 2 by 3 meters. The processor 36 can determine the differences in length and orientation between the first space and the space specification and generate a comparison result accordingly.
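A sketch of the comparison in step S620, reducing the first space to its length and width and checking each against the specification with a tolerance; the 0.5 m threshold is an assumption.

```python
def space_matches_spec(space_dims, spec_dims, threshold=0.5):
    # space_dims / spec_dims: (length, width) in meters. Returns True
    # when each dimension is within the tolerance, meaning only an
    # "aligned" prompt is needed in step S630.
    return all(abs(s - r) <= threshold for s, r in zip(space_dims, spec_dims))

print(space_matches_spec((9.6, 5.2), (10.0, 5.0)))  # True: within tolerance
print(space_matches_spec((6.0, 5.0), (10.0, 5.0)))  # False: prompt adjustment
```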

The processor 36 may generate a first space adjustment prompt according to the comparison result between the first space and the space specification (step S630). If the comparison result shows that the first space and the space specification are the same or differ by less than a corresponding threshold, the position of the first sensing device does not need to be adjusted; that is, the space sensing device 11 stays in place. The user interface of the mobile device 13 or the local display device 14 may present a visual prompt that the space is aligned or play an auditory prompt (that is, the first space adjustment prompt) through a speaker (not shown).

If the comparison result shows that the first space and the space specification differ, or differ by more than the corresponding threshold, the position of the first sensing device needs to be adjusted; that is, the position or orientation of the space sensing device 11 is changed. The first space adjustment prompt is used to adjust the position or orientation of the first sensing device. The user interface of the mobile device 13 or the local display device 14 may present visual prompts of the moving distance and/or turning angle or play an auditory prompt (that is, the first space adjustment prompt) through a speaker (not shown).

FIG. 7A to FIG. 7F are schematic diagrams of placement positions of space sensing devices according to an embodiment of the present invention. Referring first to FIG. 7A, the sensing ranges F of the two space sensing devices 11 form a first space S. The two sensing ranges F must overlap to a certain degree (for example, 50%, 75%, or 80%), or their origins must be within a minimum safe distance, to form the first space S. The processor 36 can determine whether the first space S can be formed from the orientations of the sensing ranges F. Although the positions of the space sensing devices 11 in FIG. 7B, FIG. 7D, and FIG. 7F are roughly the same as those in FIG. 7A, FIG. 7C, and FIG. 7E, respectively, the two space sensing devices 11 in FIG. 7B, FIG. 7D, and FIG. 7F do not face each other, so the overlap of the sensing ranges F cannot form the first space S. Therefore, the first space adjustment prompt can remind the user to change the position and/or orientation of the space sensing devices 11.
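The two formation conditions named above could be checked as in the sketch below; the circular geometry, the 75% overlap ratio, and the 1 m safe distance are assumptions.

```python
from shapely.geometry import Point

def can_form_space(origin_a, origin_b, radius=3.0,
                   min_overlap=0.75, min_safe_distance=1.0) -> bool:
    # Condition 1: the sensing ranges overlap by at least min_overlap.
    a = Point(origin_a).buffer(radius)
    b = Point(origin_b).buffer(radius)
    overlap_ratio = a.intersection(b).area / a.area
    # Condition 2: the origins are within the minimum safe distance.
    close_enough = Point(origin_a).distance(Point(origin_b)) <= min_safe_distance
    return overlap_ratio >= min_overlap or close_enough

print(can_form_space((0.0, 0.0), (0.8, 0.0)))  # True: large overlap
print(can_form_space((0.0, 0.0), (5.5, 0.0)))  # False: ranges barely overlap
```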

In addition to the alignment between the multiple space sensing devices 11 at one location, the spaces at the first location and the second location also need to be calibrated against each other.

FIG. 8 is a flowchart of spatial calibration according to an embodiment of the present invention. Referring to FIG. 8, the processor 36 may determine a second space according to fourth sensing data of a second sensing device (for example, the space sensing device 21) (step S810). Specifically, the fourth sensing data is obtained by sensing the space at the second location, for example, the relative distance between the two space sensing devices 21 or between a space sensing device 21 and an obstacle (for example, a wall, table, or chair) and the orientation of the space sensing device 21. The processor 36 can determine the second space according to the distance and orientation between the two space sensing devices 21 or between a space sensing device 21 and an obstacle. For example, the processor 36 determines the sensing ranges of the two space sensing devices 21 and takes the union of the sensing ranges as the second space.

The processor 36 may compare the first space with the second space (step S820). The processor 36 can determine the differences in length and orientation between the first space and the second space and generate a comparison result accordingly.

The processor 36 may generate a second space adjustment prompt according to the comparison result between the first space and the second space (step S830). If the comparison result shows that the first space and the second space are the same or differ by less than a corresponding threshold, the position of the first or second sensing device does not need to be adjusted; that is, the space sensing device 11 or 21 stays in place. The user interface of the mobile device 13 or 23, the local display device 14, or the remote display device 24 may present a visual prompt that the spaces are aligned or play an auditory prompt (that is, the second space adjustment prompt) through a speaker (not shown).

If the comparison result shows that the first space and the second space differ, or differ by more than the corresponding threshold, the processor 36 determines that the position of the first or second sensing device needs to be adjusted; that is, the position or orientation of the space sensing device 11 or 21 is changed. The second space adjustment prompt is used to adjust the position or orientation of the first or second sensing device. The user interface of the mobile device 13 or 23, the local display device 14, or the remote display device 24 may present visual prompts of the moving distance and/or turning angle or play an auditory prompt (that is, the second space adjustment prompt) through a speaker (not shown).

In an embodiment, the processor 36 takes the smaller of the first space and the second space as the reference and generates a space adjustment prompt for the larger of the two. That is, the position and orientation of the smaller space remain unchanged, but the position and/or orientation of the larger space are adjusted.

For example, FIG. 9 is a schematic diagram of a device movement suggestion according to an embodiment of the present invention. Referring to FIG. 9, since the space S1 formed by the two space sensing devices 21a and 21b at the remote end (for example, the second location) is larger than the space S2 formed by the local (for example, first location) space sensing devices 11a and 11b, the user interface of the mobile device 23 can present a position and/or orientation adjustment prompt; for example, the space sensing device 21a at the top of the figure is advised to move closer to the space sensing device 21b at the bottom of the figure.

Conversely, in other embodiments, the processor 36 takes the larger of the first space and the second space as the reference and generates a space adjustment prompt for the smaller of the two. That is, the position and orientation of the larger space remain unchanged, but the position and/or orientation of the smaller space are adjusted.

FIG. 10A and FIG. 10B are schematic diagrams of a user interface for position calibration according to an embodiment of the present invention. Referring to FIG. 10A and FIG. 10B, since the two space sensing devices 11 or 21 can detect their current orientations and relative distance, the processor 36 can determine whether the current distance meets the definition of the field mode (that is, the space specification; for example, lengths X and Y), and can present prompts through the speaker or on the display screen of the mobile device 13 or 23, so that the user can adjust/calibrate the positions and/or orientations of the two space sensing devices 11 or 21 until they meet the space specification, at which point the user interface switches to the next step. The space sensing device on the right of FIG. 10A should be turned 90 degrees counterclockwise to achieve the minimum safe distance A between the sensors/modules shown in FIG. 10B.

If the placement of the space sensing devices 11 and 21 complies with the above alignment and/or calibration, the spaces formed by the space sensing devices 11 and 21 used at the first location and the second location are roughly the same, which helps the subsequent calculation of the positions of the first physical object at the first location and the second physical object at the second location.

In an embodiment, the first sensing data includes first position information of the first physical object in the space where it is located. The position information may include the relative distance or absolute position/coordinates between the first physical object and the space sensing devices 11. For example, the two fixed space sensing devices 11 generate the first position information of the space where the first physical object is located based on the ranging data of the distance sensor 114 and/or the image sensing module 111, and the processor 136 computes and assigns the first position information to the two-axis vector data of a coordinate system, for example, assigning the position to the closest coordinate in the coordinate system. When the image sensing module 111 captures a moving person/item, the movement distance data and fixed-point distance data measured by the image sensing module 111 (possibly together with the ranging data of the distance sensor 114) can be sent back to the mobile device 13 through the communication module 113. The processor 136 of the mobile device 13 can compute the vector positions of the movement distance data and fixed-point distance data on the two-axis coordinate system.
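Assigning a measured position to the closest coordinate of a two-axis grid could look like the sketch below; the 0.5 m grid cell size is an assumption.

```python
import numpy as np

def snap_to_grid(position, cell=0.5):
    # Assign a measured (x, y) position, in meters, to the closest
    # coordinate of a grid with the given cell size.
    p = np.asarray(position, dtype=float)
    return np.round(p / cell) * cell

print(snap_to_grid((1.23, -0.68)))  # [ 1.  -0.5]
```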

For example, FIG. 11 is a schematic diagram of a coordinate system CS according to an embodiment of the present invention. Referring to FIG. 11, assume that the origin of the coordinate system CS of one space sensing device 11a is located at the center of the left side, and the origin of the coordinate system CS of another space sensing device 11b is located at the center of the right side. For the space sensing device 11a on the left, the coordinates of the triangular pattern are (4, -2) and the coordinates of the square pattern are (2, 1). For the space sensing device 11b on the right, the coordinates of the triangular pattern are (3, -2) and the coordinates of the square pattern are (5, 1). The processor 36 may integrate the coordinate systems of the two space sensing devices 11 into a single coordinate system CS, for example, taking the coordinate system CS of the left space sensing device 11 as the reference.
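The integration of the two devices' coordinate systems amounts to a fixed frame transform. The sketch below assumes, consistently with the FIG. 11 example values, that the two origins face each other 7 units apart along the X axis; the constant and function name are illustrative only.

```python
# Sketch: express a point measured by the right-hand device in the left-hand
# device's frame. With the FIG. 11 values, X flips across an assumed width of
# 7 units while Y is shared by both frames.
FIELD_WIDTH = 7.0  # assumed distance between the two origins

def right_to_left(x_r: float, y_r: float) -> tuple[float, float]:
    return FIELD_WIDTH - x_r, y_r

print(right_to_left(3, -2))  # triangle: (4.0, -2) in the unified frame
print(right_to_left(5, 1))   # square:   (2.0, 1) in the unified frame
```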

The processor 36 may convert the first position information into second position information in a plane coordinate system of the virtual scene. The processor 36 may perform the conversion according to the proportional relationship between the coordinate system of the first position information and the plane coordinate system of the virtual scene. For example, the coordinates (2, 1) in FIG. 11 are converted into (4, -2).
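The proportional conversion is a per-axis scaling. In the sketch below, the scale factors are assumed values chosen to reproduce the single (2, 1) → (4, -2) example given above; a real system would derive them from the two coordinate ranges.

```python
# Sketch: map a position from the sensing frame to the virtual scene's
# plane coordinate system by per-axis proportional factors.
def to_scene(x: float, y: float, sx: float = 2.0, sy: float = -2.0) -> tuple[float, float]:
    """sx/sy are assumed factors that reproduce the (2, 1) -> (4, -2) example."""
    return x * sx, y * sy

print(to_scene(2, 1))  # -> (4.0, -2.0), matching the example in the text
```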

The behavior of the first object model is related to the second position information. In other words, the position of the first object model is obtained through the coordinate system conversion; when the position of the first object model changes, its behavior changes accordingly. In this way, the sense of distance experienced by the remote and local users, who see each other's images on their respective display devices, becomes more accurate and realistic.

To help readers understand the spirit of the embodiments of the present invention, application scenarios are described below.

FIG. 12 is a schematic diagram of an operation flow according to an embodiment of the present invention, and FIG. 13 is a schematic diagram of a user interface and an operation flow according to an embodiment of the present invention. Referring to FIG. 12 and FIG. 13, FIG. 12 presents the operation flow of the user interface of the mobile device 13, and FIG. 13 presents an example of the user interface. In step S121, if the "Start Training" option is selected on the start page, the flow proceeds to the sport selection of step S125; if the "View Records" option is selected, the cloud connection of step S122 is performed.

In step S122, the mobile device 13 connects to the server 30. In step S123, different sports can be browsed by scrolling, and a specific sport can be selected accordingly. After a sport is selected, its different records can be browsed by scrolling (preview images may be displayed to assist the selection). After a record is selected, the mobile device 13 can play the video of that record (step S124).

In step S125, a preset sports program can be selected, or a new program can be created. After a sports program is selected, a detailed item of that program can be selected (step S126).

FIG. 14 is a schematic diagram of a user interface and an operation flow according to an embodiment of the present invention. Referring to FIG. 12 and FIG. 14, after a detailed item is selected, in step S127, operation prompts for the space sensing devices 11 or 21 can be given, and connection pairing among the other devices can be performed. For example, the space sensing devices 11 or 21 perform spatial scanning in their respective spaces (i.e., as in the embodiments described with reference to FIG. 6 to FIG. 11), and meanwhile connect to their respective local/remote display device 14 or 24 and wearable device 12 or 22.

In step S128, a selection of tutorial items for multiple devices is provided, for example, a setup tutorial for the wearable device 12 or a usage tutorial for the mobile device 13. After a tutorial item is selected, the tutorial details of that item can be provided (step S129).

FIG. 15 is a schematic diagram of a user interface and an operation flow according to an embodiment of the present invention. Referring to FIG. 12 and FIG. 15, in step S1210, if the local user is the only person at the venue, the user only needs to tap "User Join", and the image sensing modules 111 of the two space sensing devices 11 can then scan the user in real time and build a "3D reconstructed portrait".

In an embodiment, the processor 36 generates the second object model in the virtual scene according to the detection result of a control operation. The control operation may be a user operation received through an input device of the mobile device (not shown; for example, a keyboard, a touch panel, or a mouse), such as pressing a physical button, clicking a virtual button, or flipping a toggle switch. In response to the control operation not being detected, the processor 36 disables/does not generate the second object model in the virtual scene; at this time, there is no second object model in the first image stream. In response to the control operation being detected, the processor 36 allows the second object model to be generated in the virtual scene; at this time, the second object model is present in the first image stream.
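The gating of the second object model on a detected control operation can be expressed as a simple enable condition, as in the following sketch (the scene structure and helper callable shown are assumptions of this sketch):

```python
# Sketch: the second object model is only generated (and thus only appears in
# the first image stream) once a control operation has been detected.
class VirtualScene:
    def __init__(self):
        self.second_object_model = None

    def update(self, control_operation_detected: bool, build_model):
        """build_model: callable producing the model (assumed helper)."""
        if not control_operation_detected:
            self.second_object_model = None   # disabled: absent from the first stream
        elif self.second_object_model is None:
            self.second_object_model = build_model()  # allowed: present in the stream
```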

For example, if there is more than one person at the venue, when a person wearing the wristband (i.e., the wearable device 12) presses the wristband, the position of the wristband at the venue is determined, and the space sensing devices 11 are notified to scan the person wearing the wristband in real time and build the "3D reconstructed portrait".

In step S1211, the mobile device 13 establishes a connection with the wearable device 12. The mobile device 13 can determine the position of the wearable device 12 according to the sensing data generated by the wearable device 12, display the corresponding object model in the virtual scene accordingly, and let the display 144 of the local display device 14 display the object model. Finally, the flow may return to other steps (for example, step S121, S124, S126, or S1210) as required.

FIG. 16A to FIG. 16D are schematic diagrams of a two-player football scenario according to an embodiment of the present invention. Referring to FIG. 16A to FIG. 16D, for two-player shooting, space sensing devices 11 or 21 are set up at both the first location and the second location. When the local user who owns a physical goal and a physical football (for example, at the first location) presses the Host button of the wristband to become the attacker and kick the ball, the remote user (for example, at the second location) becomes the goalkeeper to block. At this time, the local user can see the virtual position/posture of the remote user's holographic image (as shown in FIG. 16A). When there is no physical football (as in FIG. 16B, FIG. 16C, and FIG. 16D) and no physical goal (as in FIG. 16B and FIG. 16D), the pre-stored object models of the football and the goal (hereinafter the "virtual football") are loaded from the server 30, and the local and remote users can still act as the attacker and the defender, respectively. Then, when the local user presses the Host button of the wristband to kick, the camera module in the window of the space sensing device 11 captures the local user's movement posture, and the server 30 can calculate the moving path of the virtual football according to the user's posture, so that the path is transmitted to the remote display device 24 of the remote user and displayed on its screen for viewing.
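The disclosure does not specify how the server computes the moving path of the virtual football from the captured posture; the sketch below is one possible simplification that assumes an upstream pose estimator already supplies a kick velocity vector and then propagates the ball ballistically.

```python
# Sketch: propagate a virtual football along a simple ballistic path from an
# estimated kick velocity (vx, vy, vz in m/s); pose-to-velocity estimation is
# assumed to happen upstream and is not part of this sketch.
G = 9.81  # gravitational acceleration (m/s^2)

def ball_path(vx: float, vy: float, vz: float,
              dt: float = 1 / 60, steps: int = 120):
    x = y = z = 0.0
    path = []
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        z += vz * dt
        vz -= G * dt
        if z <= 0.0:
            path.append((x, y, 0.0))
            break  # the ball has returned to the floor
        path.append((x, y, z))
    return path  # sampled positions streamed to the remote display device
```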

For single-player shooting or long-pass training, there is no remote user in FIG. 16A to FIG. 16D. The local user who owns a physical goal and a physical football presses the Host button of the wristband to kick; when there is no physical goal or physical football, the local user can likewise press the Host button of the wristband to kick. The remote user can see the local user's kicking status through the display of the remote display device 24 and press the wristband to give voice guidance to the local user. Alternatively, if the remote user also has the space sensing devices 21, the Host button can be pressed to switch so that the remote user generates a virtual movement posture, and the local user can see the remote user's demonstration in the virtual scene from the display 144 of the local display device 14.

For single-player dribbling training, the local user who owns a physical football presses the Host button of the wristband to dribble. If there is no physical football, a virtual football is displayed by the local display device 14; in addition, the local user can also press the Host button of the wristband to kick the virtual football.

For two-player dribbling training, when the local user who owns a physical football presses the Host button of the wristband to become the attacker and dribble, the image sensing module 111 in the window of the space sensing device 11 captures the local user's movement posture and the physical football, and the remote user can see the interaction between the local user's holographic image and the virtual football in the virtual scene from the remote display device 24.

Assume that the remote user becomes the defender and attempts an interception; if the interception succeeds, the round ends. If the limbs of the local and remote users overlap, the ball disappears and the game is paused, and after the users separate, the ball reappears in place.

On the other hand, if there is no physical football, a virtual football is displayed by the local display device 14. Then, when the local user presses the Host button of the wristband to dribble, the image sensing module 111 in the window of the space sensing device 11 captures the local user's movement posture. The mobile device 13 can calculate the moving path of the virtual football according to the user's posture, so that the remote user can likewise see the interaction between the local user's holographic image and the virtual football in the virtual scene from the remote display device 24.

FIG. 17A to FIG. 17D are schematic diagrams of a single-player football scenario according to an embodiment of the present invention. Referring to FIG. 17A to FIG. 17D, for the competition mode, a physical football is required at the local venue or at both venues, and the remote user presses the Host button of the wristband to start the timing. For a juggling-count competition, the image sensing module 111 in the space sensing device 11 captures images of the football, so that the cloud or the computing center inside the space sensing device counts the number of juggles; if the ball falls to the floor, the counting of this round ends (FIG. 17A and FIG. 17B). For an obstacle-course dribbling time competition, the server checks that the football correctly passes around the virtual cones, and the sensor stops the timer after the route is completed (FIG. 17C and FIG. 17D). If virtual cones are needed, the corresponding object models can be generated in the virtual scene.
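A possible realization of the juggle counting, assuming the image sensing module yields a stream of estimated ball heights, is to count descent-to-ascent reversals and end the round at floor level; the floor threshold below is an assumed value.

```python
# Sketch: count juggles from a stream of estimated ball heights (meters).
# A touch is registered each time the ball switches from falling to rising;
# the round ends when the height drops to floor level.
def count_juggles(heights: list[float], floor: float = 0.05) -> int:
    count, falling = 0, False
    for prev, cur in zip(heights, heights[1:]):
        if cur <= floor:
            break                 # ball reached the floor: end of the round
        if cur < prev:
            falling = True        # ball is descending
        elif cur > prev and falling:
            count += 1            # descent reversed: a touch occurred
            falling = False
    return count

# Example trace: two touches, then the ball drops to the floor.
print(count_juggles([0.9, 0.6, 0.8, 0.5, 0.7, 0.3, 0.04]))  # -> 2
```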

On the other hand, the remote user can see the local user's kicking status through the display of the remote display device 24 and press the wristband to give voice guidance to the local user. Alternatively, when the second location also has the space sensing devices 21, the Host button can be pressed to switch so that the remote user generates a virtual movement posture, which is then displayed through the display 144 of the local display device 14.

FIG. 18 is a schematic diagram of the placement of space sensing devices for fitness training according to an embodiment of the present invention. Referring to FIG. 18, for fitness training, since fitness movements are mostly performed facing forward and are largely left-right symmetric, one space sensing device 11 is placed in front of the user and another space sensing device 11 is placed at the user's side, and the movements corresponding to the sensing data are accordingly constructed into the virtual image (i.e., the image stream).

On the teaching side, the instructor's fitness postures and movements can be recorded and transmitted to the learner in real time for viewing, so that the movements can be followed. Alternatively, the learner's fitness postures and movements can be recorded and transmitted to the instructor for review in real time. For joint training, multiple users can view each other's fitness postures and movements.

FIG. 19 is a schematic diagram of a multi-sensing scenario according to an embodiment of the present invention. Referring to FIG. 19, the wearable device 12 (for example, a wristband) can record and set correct movement information (for example, displacement, distance, direction, force, etc.), and the images obtained by the space sensing device 11 can be provided to the display 144 of the local display device 14 to prompt the user whether each movement is performed properly.

It should be noted that the content of the foregoing application scenarios is only for illustration, and users may change the training content according to actual needs.

In summary, in the virtual-real interaction method, the computing system for a virtual world, and the virtual reality system of the embodiments of the present invention, the image sensing module captures images of the person's movements and of the interactive items (for example, a ball or a goal frame) and generates position data. After the obtained person image data is transmitted to the server, a three-dimensional reconstructed portrait can be rebuilt through holographic imaging technology. For an interactive item, only its image is captured, and according to the selected sports program and/or an AI model stored in the memory that recognizes the features of the item image, a matching pre-stored three-dimensional item model is loaded from the database of the server, thereby saving the resources of reconstructing a three-dimensional item image.
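The recognize-then-load shortcut for interactive items might be organized as follows; the labels, asset paths, and callables are placeholders of this sketch rather than the disclosed implementation.

```python
# Sketch: instead of reconstructing an item in 3D, classify its image and
# fetch a matching pre-stored model from the server's database.
PRESTORED_MODELS = {"soccer_ball": "models/ball.glb",   # assumed asset paths
                    "goal_frame":  "models/goal.glb"}

def load_item_model(image, classify, fetch):
    """classify: image -> label; fetch: path -> model object (assumed callables)."""
    label = classify(image)                 # e.g., the AI model picks "soccer_ball"
    path = PRESTORED_MODELS.get(label)
    if path is None:
        raise ValueError(f"no pre-stored model for {label!r}")
    return fetch(path)                      # skips the costly 3D reconstruction
```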

In this way, users can have different operating methods and experiences in different sports. In addition to letting multiple users have fun together, a coach-student teaching interaction is also possible, so that users can enjoy an outdoor-sport-like experience at home.

Although the present invention has been disclosed above through the embodiments, they are not intended to limit the present invention. Anyone with ordinary knowledge in the technical field may make some changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be defined by the appended claims.

1: virtual reality system
11, 11a, 11b, 21, 21a, 21b: space sensing device
12, 22: wearable device
13, 23: mobile device
14: local display device
24: remote display device
30: server
2: computing system
111, 141: image sensing module
112, 122, 142: motion tracking module
113, 123, 133, 143, 33: communication module
114: distance sensor
115, 125, 135, 145, 35: memory
116, 126, 136, 146, 36: processor
144: display
S410~S450, S610~S630, S810~S830, S121~S1211: step
51, 55: person
52: football
53: goal frame
S, S1, S2: space
S: sensing range
X, Y: length
A: minimum safe distance

FIG. 1 is a block diagram of components of a virtual reality system according to an embodiment of the present invention.
FIG. 2A is a block diagram of components of a space sensing device according to an embodiment of the present invention.
FIG. 2B is a block diagram of components of a wearable device according to an embodiment of the present invention.
FIG. 2C is a block diagram of components of a mobile device according to an embodiment of the present invention.
FIG. 2D is a block diagram of components of a local display device according to an embodiment of the present invention.
FIG. 2E is a block diagram of components of a server according to an embodiment of the present invention.
FIG. 3A is a schematic diagram of space sensing devices arranged as a set of two according to an embodiment of the present invention.
FIG. 3B is a schematic diagram of one of the space sensing devices of FIG. 3A.
FIG. 3C is a schematic diagram of a mobile device placed on the other space sensing device of FIG. 3A.
FIG. 4 is a flowchart of a virtual-real interaction method according to an embodiment of the present invention.
FIG. 5A is a schematic diagram of a first object at a first location according to an embodiment of the present invention.
FIG. 5B is a schematic diagram of a second object at a second location according to an embodiment of the present invention.
FIG. 6 is a flowchart of spatial correction according to an embodiment of the present invention.
FIG. 7A to FIG. 7F are schematic diagrams of placement positions of space sensing devices according to an embodiment of the present invention.
FIG. 8 is a flowchart of spatial correction according to an embodiment of the present invention.
FIG. 9 is a schematic diagram of a device movement suggestion according to an embodiment of the present invention.
FIG. 10A and FIG. 10B are schematic diagrams of a user interface for position correction according to an embodiment of the present invention.
FIG. 11 is a schematic diagram of a coordinate system according to an embodiment of the present invention.
FIG. 12 is a schematic diagram of an operation flow according to an embodiment of the present invention.
FIG. 13 is a schematic diagram of a user interface and an operation flow according to an embodiment of the present invention.
FIG. 14 is a schematic diagram of a user interface and an operation flow according to an embodiment of the present invention.
FIG. 15 is a schematic diagram of a user interface and an operation flow according to an embodiment of the present invention.
FIG. 16A to FIG. 16D are schematic diagrams of a two-player football scenario according to an embodiment of the present invention.
FIG. 17A to FIG. 17D are schematic diagrams of a single-player football scenario according to an embodiment of the present invention.
FIG. 18 is a schematic diagram of the placement of space sensing devices for fitness training according to an embodiment of the present invention.
FIG. 19 is a schematic diagram of a multi-sensing scenario according to an embodiment of the present invention.

1: virtual reality system
11, 21: space sensing device
12, 22: wearable device
13, 23: mobile device
14: local display device
24: remote display device
30: server

Claims (16)

1. A virtual-real interaction method, comprising: generating a first object model according to first sensing data, wherein the first sensing data is obtained by sensing a first physical object; generating a second object model according to second sensing data, wherein the second sensing data is obtained by sensing a second physical object; determining behaviors of the first object model and the second object model in a virtual scene according to the first sensing data and the second sensing data, wherein the step of determining the behaviors of the first object model and the second object model in the virtual scene comprises: determining an interaction situation between the first object model and the second object model according to the first sensing data and the second sensing data; and determining the behaviors of the first object model and the second object model in the virtual scene according to the interaction situation; generating a first image stream according to the behavior of the first object model in the virtual scene, wherein the first image stream is provided for display by a remote display device; and generating a second image stream according to the behavior of the second object model in the virtual scene, wherein the second image stream is provided for display by a local display device.

2. The virtual-real interaction method according to claim 1, wherein the first physical object is a first person, the second physical object is a second person, and the step of generating the first object model and the second object model comprises: generating the three-dimensional first object model and the three-dimensional second object model by using a holographic technology.

3. The virtual-real interaction method according to claim 1, wherein the first physical object is a first item, and the step of generating the first object model comprises: recognizing the first item; and obtaining the pre-stored first object model according to a recognition result of the first item.

4. The virtual-real interaction method according to claim 1, wherein the first sensing data comprises first position information of the first physical object in the space where it is located, and the step of determining the behaviors of the first object model and the second object model in the virtual scene comprises: converting the first position information into second position information in a plane coordinate system of the virtual scene, wherein the behavior of the first object model is related to the second position information.

5. The virtual-real interaction method according to claim 1, further comprising: generating the second object model in the virtual scene according to a detection result of a control operation, wherein in response to the control operation not being detected, generating the second object model in the virtual scene is disabled; and in response to the control operation being detected, generating the second object model in the virtual scene is allowed.

6. The virtual-real interaction method according to claim 1, further comprising: determining a first space according to third sensing data of a first sensing device, wherein the third sensing data comprises a relative distance to another first sensing device and an orientation of the first sensing device, and the first sensing device is configured to sense the first physical object; comparing the first space with a spatial specification of the virtual scene; and generating a first space adjustment prompt according to a comparison result of the first space and the spatial specification, wherein the first space adjustment prompt is used to adjust a position or an orientation of the first sensing device.

7. The virtual-real interaction method according to claim 6, further comprising: determining a second space according to fourth sensing data of a second sensing device, wherein the fourth sensing data comprises a relative distance to another second sensing device and an orientation of the second sensing device, and the second sensing device is configured to sense the second physical object; comparing the first space with the second space; and generating a second space adjustment prompt according to a comparison result of the first space and the second space, wherein the second space adjustment prompt is used to adjust a position or an orientation of the first sensing device or the second sensing device.

8. A computing system for a virtual world, comprising: at least one memory, configured to store at least one program code; and at least one processor, coupled to the at least one memory and configured to load the at least one program code to execute: generating a first object model according to first sensing data, wherein the first sensing data is obtained by sensing a first physical object; generating a second object model according to second sensing data, wherein the second sensing data is obtained by sensing a second physical object; determining behaviors of the first object model and the second object model in a virtual scene according to the first sensing data and the second sensing data, wherein the at least one processor is further configured to execute: determining an interaction situation between the first object model and the second object model according to the first sensing data and the second sensing data; and determining the behaviors of the first object model and the second object model in the virtual scene according to the interaction situation; generating a first image stream according to the behavior of the first object model in the virtual scene, wherein the first image stream is provided for display by a remote display device; and generating a second image stream according to the behavior of the second object model in the virtual scene, wherein the second image stream is provided for display by a local display device.

9. The computing system for a virtual world according to claim 8, wherein the first physical object is a first person, the second physical object is a second person, and the at least one processor is further configured to execute: generating the three-dimensional first object model and the three-dimensional second object model by using a holographic technology.

10. The computing system for a virtual world according to claim 8, wherein the first physical object is a first item, and the at least one processor is further configured to execute: recognizing the first item; and obtaining the pre-stored first object model according to a recognition result of the first item.

11. The computing system for a virtual world according to claim 8, wherein the first sensing data comprises first position information of the first physical object in the space where it is located, and the at least one processor is further configured to execute: converting the first position information into second position information in a plane coordinate system of the virtual scene, wherein the behavior of the first object model is related to the second position information.

12. The computing system for a virtual world according to claim 8, wherein the at least one processor is further configured to execute: generating the second object model in the virtual scene according to a detection result of a control operation, wherein in response to the control operation not being detected, generating the second object model in the virtual scene is disabled; and in response to the control operation being detected, generating the second object model in the virtual scene is allowed.

13. The computing system for a virtual world according to claim 8, wherein the at least one processor is further configured to execute: determining a first space according to third sensing data of a first sensing device, wherein the third sensing data comprises a relative distance to another first sensing device and an orientation of the first sensing device, and the first sensing device is configured to sense the first physical object; comparing the first space with a spatial specification of the virtual scene; and generating a first space adjustment prompt according to a comparison result of the first space and the spatial specification, wherein the first space adjustment prompt is used to adjust a position or an orientation of the first sensing device.

14. The computing system for a virtual world according to claim 13, wherein the at least one processor is further configured to execute: determining a second space according to fourth sensing data of a second sensing device, wherein the fourth sensing data comprises a relative distance to another second sensing device and an orientation of the second sensing device, and the second sensing device is configured to sense the second physical object; comparing the first space with the second space; and generating a second space adjustment prompt according to a comparison result of the first space and the second space, wherein the second space adjustment prompt is used to adjust a position or an orientation of the first sensing device or the second sensing device.

15. A virtual reality system, comprising: two first space sensing devices, configured to sense a first physical object to obtain first sensing data; at least one computing device, configured to: generate a first object model according to the first sensing data; generate a second object model according to second sensing data, wherein the second sensing data is obtained by sensing a second physical object through two second space sensing devices; determine behaviors of the first object model and the second object model in a virtual scene according to the first sensing data and the second sensing data, wherein the at least one computing device is further configured to: determine an interaction situation between the first object model and the second object model according to the first sensing data and the second sensing data; and determine the behaviors of the first object model and the second object model in the virtual scene according to the interaction situation; generate a first image stream according to the behavior of the first object model in the virtual scene, wherein the first image stream is provided for display by a remote display device; and generate a second image stream according to the behavior of the second object model in the virtual scene; and a local display device, configured to display the second image stream.

16. The virtual reality system according to claim 15, further comprising: at least one wearable device, configured to be worn by a first person and to generate first sub-data accordingly, wherein the two first space sensing devices generate second sub-data, and the first sensing data comprises the first sub-data and the second sub-data.
TW111134062A 2021-09-13 2022-09-08 Virtual and real interaction method, computing system used for virtual world, and virtual reality system TWI835289B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163243208P 2021-09-13 2021-09-13
US63/243,208 2021-09-13

Publications (2)

Publication Number Publication Date
TW202311912A TW202311912A (en) 2023-03-16
TWI835289B true TWI835289B (en) 2024-03-11


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014164901A1 (en) 2013-03-11 2014-10-09 Magic Leap, Inc. System and method for augmented and virtual reality
