TWM650161U - Autostereoscopic 3D reality system

Autostereoscopic 3D reality system

Info

Publication number
TWM650161U
TWM650161U TW112209839U TW112209839U TWM650161U TW M650161 U TWM650161 U TW M650161U TW 112209839 U TW112209839 U TW 112209839U TW 112209839 U TW112209839 U TW 112209839U TW M650161 U TWM650161 U TW M650161U
Authority
TW
Taiwan
Prior art keywords
image
dimensional
space
naked
module
Prior art date
Application number
TW112209839U
Other languages
Chinese (zh)
Inventor
簡銘伸
吳俊憲
鄭峻和
蔡明勳
鄭傑鴻
Original Assignee
國立虎尾科技大學
Priority date
Filing date
Publication date
Application filed by 國立虎尾科技大學
Priority to TW112209839U
Publication of TWM650161U

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

(None)

Description

Autostereoscopic three-dimensional reality system

The present utility model relates to a reality system, and more particularly to an autostereoscopic three-dimensional reality system.

Virtual reality (VR) and augmented reality (AR) provide users with experiences of various scenes in the form of three-dimensional simulation. Users generally have such experiences by wearing a head-mounted device; besides the inconvenience of wearing it, the user's field of view is completely covered by the device's display screen, making it difficult to notice unexpected situations in the surrounding environment.

In view of this, the market currently lacks a three-dimensional reality system that can replace wearable devices while still providing users with an immersive experience, and practitioners in the field are seeking a solution.

An objective of the present utility model is to provide an autostereoscopic three-dimensional reality system that can replace conventional reality wearable devices and provide users with an immersive experience.

According to one embodiment of the structural aspect of the present utility model, an autostereoscopic three-dimensional reality system is provided for obtaining a three-dimensional virtual object for each of a plurality of objects. The autostereoscopic three-dimensional reality system includes at least one image capture device, a processing device, and at least one display device. The at least one image capture device captures a detection image of each object. The processing device is signal-connected to the at least one image capture device and includes at least one artificial intelligence module, a spatial logic operation module, and at least one three-dimensional rendering module. The at least one artificial intelligence module recognizes and segments the detection image according to a recognition rule to generate image object information. The spatial logic operation module is connected to the at least one artificial intelligence module and includes an image-space positioning operation unit and a full-space logic operation unit. The image-space positioning operation unit obtains the image object information and performs three-dimensional spatial positioning to generate positioning data of each object in a virtual space. The full-space logic operation unit is connected to the image-space positioning operation unit; it obtains the image object information and the positioning data and performs a spatial logic operation on them according to a plurality of algorithms to generate a positional relationship and a movement relationship among the objects in the virtual space. The at least one three-dimensional rendering module is connected to the spatial logic operation module and obtains the positional relationship and the movement relationship to render the three-dimensional virtual object corresponding to each object. The at least one display device is signal-connected to the processing device to obtain and display the three-dimensional virtual objects.

In a further example of the foregoing embodiment, the recognition rule includes a plurality of defined objects, and the at least one artificial intelligence module integrates the detection image with each defined object to generate the image object information.

In a further example of the foregoing embodiment, the image object information includes a three-dimensional coordinate corresponding to each object at a viewing angle.

In a further example of the foregoing embodiment, the spatial logic operation module further includes an image database. The image database is connected to the image-space positioning operation unit and stores a plurality of historical image objects. The image-space positioning operation unit integrates the image object information with the historical image objects to generate the positioning data, and stores the image object information and the positioning data in the image database to update the image database.

In a further example of the foregoing embodiment, the spatial logic operation module further includes a scene database. The scene database is connected to the full-space logic operation unit and stores a plurality of defined scenes. The full-space logic operation unit applies one of the defined scenes to the virtual space and performs the spatial logic operation to generate the positional relationship of each object in the virtual space.

In a further example of the foregoing embodiment, the autostereoscopic three-dimensional reality system further includes at least one algorithm module, which is connected to the spatial logic operation module and stores the algorithms. The full-space logic operation unit obtains the algorithms and operates on the positional relationship according to them to obtain the movement relationship.

In a further example of the foregoing embodiment, the movement relationship includes a continuity state of the interaction among the objects.

In a further example of the foregoing embodiment, the spatial logic operation module further includes a load balancing unit. The load balancing unit is connected to the spatial logic operation module and the at least one three-dimensional rendering module, and allocates the positional relationship and the movement relationship corresponding to each object according to the workload of the at least one three-dimensional rendering module.

Thereby, the present utility model captures object images through the image capture device and performs spatial logic operations with the processing device to obtain the corresponding three-dimensional virtual objects, which can then interact with the other three-dimensional virtual objects in the virtual space and perform immersive operations.
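As a non-limiting illustration of the data flow summarized above (detection image → image object information → positioning data → positional and movement relationships → rendered three-dimensional virtual object), the intermediate records could be modeled as in the following Python sketch. The class and field names are assumptions introduced here for clarity only and are not terminology defined by the specification.

```python
from dataclasses import dataclass, field

@dataclass
class ImageObjectInfo:
    """Output of the AI module: one recognized and segmented object."""
    object_id: int
    label: str                      # e.g. "face", "hand" (assumed defined-object labels)
    viewpoint_xyz: tuple            # 3D coordinate of the object for one viewing angle

@dataclass
class PositioningData:
    """Output of the image-space positioning unit: an object placed in the virtual space S."""
    object_id: int
    position_xyz: tuple             # location of the object in the virtual space

@dataclass
class SpatialRelations:
    """Output of the full-space logic unit: relations among the objects."""
    positions: dict = field(default_factory=dict)   # object_id -> position in the virtual space
    movements: dict = field(default_factory=dict)   # object_id -> continuity state of interaction
```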

Several embodiments of the present utility model are described below with reference to the drawings. For clarity, many practical details are explained in the following description. It should be understood, however, that these practical details are not intended to limit the present utility model; in some embodiments they are unnecessary. In addition, to simplify the drawings, some conventional structures and elements are shown schematically, and repeated elements may be denoted by the same reference numerals.

In this document, when an element (or unit, module, etc.) is described as "connected" to another element, it may be directly connected to the other element or indirectly connected to it, meaning that other elements are interposed between them. Only when an element is described as "directly connected" to another element are no other elements interposed between the two. Terms such as first, second, and third are used only to distinguish elements and do not limit the elements themselves; a first element may therefore also be referred to as a second element. Moreover, the combinations of elements, units, and circuits described herein are not combinations that are generally known, conventional, or customary in the field; whether an individual element, unit, or circuit is itself known cannot be used to determine whether its combination could easily be accomplished by a person of ordinary skill in the art.

Referring to FIGS. 1 to 4: FIG. 1 is a schematic diagram of an autostereoscopic three-dimensional reality system 100 according to the first embodiment of the present utility model; FIG. 2 is a schematic diagram of the processing device 120 of FIG. 1; FIG. 3 is a block diagram of the steps of a method 200 for providing autostereoscopic three-dimensional reality according to the second embodiment of the present utility model; and FIG. 4 is a schematic diagram of a usage scenario of the autostereoscopic three-dimensional reality system 100 of FIG. 1. The autostereoscopic three-dimensional reality system 100 is configured to implement the method 200 for providing autostereoscopic three-dimensional reality, and is used to obtain a three-dimensional virtual object 2 for each of a plurality of objects 1 and to perform immersive operations in a virtual space S through the three-dimensional virtual objects 2. It should be noted that the method 200 is not limited to being implemented by the autostereoscopic three-dimensional reality system 100 of the present utility model. The autostereoscopic three-dimensional reality system 100 includes at least one image capture device 110, a processing device 120, and at least one display device 130; the processing device 120 is signal-connected to the image capture device 110 and the display device 130. In the first embodiment, the processing device 120 may be connected to the image capture device 110 and the display device 130 through a wired physical connection or through wireless network equipment, but the present utility model is not limited thereto. In other embodiments, the image capture device 110, the processing device 120, and the display device 130 may be integrated into a single machine for the user to operate.

The image capture device 110 captures a detection image of an object 1, the processing device 120 generates the three-dimensional virtual object 2 corresponding to the object 1, and the display device 130 displays the three-dimensional virtual object 2. The object 1 may be the user's whole body, face, or limbs. In the first embodiment, the image capture device 110 is a camera or video camera arranged at multiple heights and orientations; the processing device 120 may be a processor, microprocessor, central processing unit (CPU), computer, mobile device processor, cloud processor, or other electronic computing processor; and the display device 130 may be the display screen of a terminal device such as a computer, mobile phone, or tablet, or a projector with a single-sided or light-transmissive double-sided projection screen, but the present utility model is not limited thereto.

The processing device 120 includes at least one artificial intelligence module 121, at least one algorithm module 122, a spatial logic operation module 123, and at least one three-dimensional rendering module 124. The spatial logic operation module 123 is connected to the artificial intelligence module 121, the algorithm module 122, and the three-dimensional rendering module 124. The artificial intelligence module 121 recognizes and segments the detection image according to a recognition rule to generate image object information. The recognition rule includes a plurality of defined objects, and the artificial intelligence module 121 integrates the detection image with the defined objects to generate the image object information. In detail, a defined object is a feature of the object 1 (such as a human face) or a behavior or action performed by the object 1 (such as a facial movement, body movement, or gesture). As shown in FIG. 4, the image object information includes a three-dimensional coordinate corresponding to each object 1 at a viewing angle; users at different locations who view the same three-dimensional virtual object 2 in the virtual space S therefore see it from viewing angles corresponding to their respective three-dimensional coordinates. The algorithm module 122 stores a plurality of algorithms and allows the spatial logic operation module 123 to read them to perform spatial operations. The spatial logic operation module 123 performs three-dimensional spatial positioning and spatial logic operations on the image object information to generate a positional relationship and a movement relationship among the objects 1 in the virtual space S. The three-dimensional rendering module 124 renders the three-dimensional virtual object 2 corresponding to each object according to the positional relationship and the movement relationship, and the three-dimensional virtual objects 2 are superimposed and displayed through the display device 130.
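As a rough, non-limiting illustration of how the recognition rule might be applied by the artificial intelligence module 121, the sketch below filters a detector's output against a set of defined objects and emits one image-object record per match. The detector interface and the contents of the rule are assumptions made for this example only, not details disclosed by the specification.

```python
# Hypothetical sketch: apply a recognition rule (a set of defined objects) to one detection image.
DEFINED_OBJECTS = {"face", "hand", "gesture_swipe"}   # assumed contents of the recognition rule

def recognize_and_segment(detection_image, detector):
    """detector is any callable returning (label, region, xyz) tuples; its interface is assumed."""
    image_object_info = []
    for label, region, xyz in detector(detection_image):
        if label in DEFINED_OBJECTS:               # keep only objects named by the recognition rule
            image_object_info.append({
                "label": label,
                "region": region,                  # segmented area of the detection image
                "viewpoint_xyz": xyz,              # 3D coordinate for this viewing angle
            })
    return image_object_info
```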

The spatial logic operation module 123 includes an image-space positioning operation unit 1231, an image database 1232, a full-space logic operation unit 1233, a scene database 1234, and a load balancing unit 1235. The image-space positioning operation unit 1231 is connected to the image database 1232 and the full-space logic operation unit 1233; the image database 1232 is connected to the load balancing unit 1235; and the full-space logic operation unit 1233 is connected to the scene database 1234 and the load balancing unit 1235. The image-space positioning operation unit 1231 obtains the image object information and performs three-dimensional spatial positioning to generate positioning data of each object 1 in the virtual space S. The image database 1232 stores a plurality of historical image objects; the image-space positioning operation unit 1231 integrates the image object information with the historical image objects to generate the positioning data, and stores the image object information and the positioning data in the image database 1232 to update it. The full-space logic operation unit 1233 obtains the image object information and the positioning data, obtains the algorithms from the algorithm module 122, and performs a spatial logic operation on the image object information and the positioning data according to the algorithms to generate the positional relationship and the movement relationship among the objects 1. The movement relationship is obtained by the full-space logic operation unit 1233 operating on the positional relationship according to the algorithms, and it includes a continuity state of the interaction among the objects 1. The scene database 1234 stores a plurality of defined scenes; the full-space logic operation unit 1233 applies one of the defined scenes to the virtual space S and performs the spatial logic operation to generate the positional relationship of each object 1 in the virtual space S. The load balancing unit 1235 allocates the positional relationship and the movement relationship corresponding to each object 1 to the three-dimensional rendering module 124 for rendering according to the workload of the three-dimensional rendering module 124, so as to optimize resource usage. When the image-space positioning operation unit 1231 determines that the image object information contains an image object that already exists, the corresponding historical image object in the image database 1232 can be provided to the three-dimensional rendering module 124 through the load balancing unit 1235 to accelerate the three-dimensional rendering work.
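The workload-based allocation performed by the load balancing unit 1235 can be pictured with the following minimal sketch, which assumes each rendering module reports a pending-job count and always assigns the next object to the least-loaded module. The queue metric and function names are assumptions for illustration, not details disclosed by the specification.

```python
import heapq

def assign_render_jobs(relations_per_object, renderer_loads):
    """
    relations_per_object: list of (object_id, positional_relationship, movement_relationship)
    renderer_loads: dict renderer_id -> number of queued jobs (assumed workload metric)
    Returns renderer_id -> list of assigned jobs, always picking the least-loaded module.
    """
    heap = [(load, rid) for rid, load in renderer_loads.items()]
    heapq.heapify(heap)
    assignment = {rid: [] for rid in renderer_loads}
    for job in relations_per_object:
        load, rid = heapq.heappop(heap)        # least-loaded rendering module
        assignment[rid].append(job)
        heapq.heappush(heap, (load + 1, rid))  # account for the job just assigned
    return assignment
```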

Referring to FIGS. 1 to 5, FIG. 5 is a flowchart of the steps of the method 200 for providing autostereoscopic three-dimensional reality of FIG. 3. The method 200 includes an image capture step S10, an image processing step S20, and a display step S30, executed in sequence. The image capture step S10 drives the image capture device 110 to capture the detection image of the object 1. The image processing step S20 includes a recognition step S21 and an operation step S22. The recognition step S21 drives the artificial intelligence module 121 to recognize and segment the detection image according to the recognition rule to generate the image object information. The operation step S22 drives the processing device 120 to generate the three-dimensional virtual object 2 corresponding to the object 1 according to the image object information. The display step S30 drives the display device 130 to obtain and display the three-dimensional virtual object 2.
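Read as a pipeline, method 200 is capture → recognize → compute → display. A hedged top-level sketch follows; the four callables are stand-ins for the devices and modules described above rather than any disclosed implementation.

```python
def provide_autostereoscopic_reality(capture, recognize, compute_virtual_objects, display):
    """One pass of method 200 (S10 -> S20 -> S30); all four arguments are assumed callables."""
    detection_images = capture()                                   # S10: image capture step
    image_object_info = recognize(detection_images)                # S21: recognition step
    virtual_objects = compute_virtual_objects(image_object_info)   # S22: operation step
    display(virtual_objects)                                       # S30: display step
```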

In detail, the operation step S22 includes a positioning operation step S221, a logic operation step S222, a load balancing step S223, and a three-dimensional rendering step S224. The positioning operation step S221 drives the image-space positioning operation unit 1231 to obtain the image object information and perform three-dimensional spatial positioning to generate the positioning data of each object 1 in the virtual space S. The positioning operation step S221 further drives the image-space positioning operation unit 1231 to integrate the image object information with the historical image objects to generate the positioning data, and to store the image object information and the positioning data in the image database 1232 to update it. The logic operation step S222 drives the full-space logic operation unit 1233 to obtain the image object information and the positioning data, apply one of the defined scenes in the scene database 1234 to the virtual space S and perform the spatial logic operation to obtain the positional relationship, and operate on the positional relationship according to the algorithms of the algorithm module 122 to obtain the movement relationship. The load balancing step S223 drives the load balancing unit 1235 to allocate the positional relationship and the movement relationship corresponding to each object 1 according to the workload of the three-dimensional rendering module 124. The three-dimensional rendering step S224 drives the three-dimensional rendering module 124 to obtain the positional relationship and the movement relationship to render the three-dimensional virtual object 2 corresponding to each object 1.
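Operation step S22 can likewise be sketched as four sub-calls mirroring units 1231, 1233, 1235, and 124. Every helper name and interface below is assumed for illustration; the bodies of the algorithms themselves are intentionally left abstract.

```python
def operation_step_s22(image_object_info, positioning_unit, logic_unit, balancer, renderers):
    """Sketch of S221-S224; every argument is an assumed stand-in for a unit shown in FIG. 2."""
    positioning_data = positioning_unit.localize(image_object_info)           # S221: 3D positioning
    positions, movements = logic_unit.relate(image_object_info,
                                             positioning_data)                # S222: spatial logic
    jobs = [(oid, positions[oid], movements.get(oid)) for oid in positions]
    loads = {r.id: r.pending for r in renderers}
    assignment = balancer.assign(jobs, loads)                                 # S223: load balancing
    return [r.draw(assignment[r.id]) for r in renderers]                      # S224: 3D rendering
```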

It can be seen from the above embodiments that the autostereoscopic three-dimensional reality system and the method for providing autostereoscopic three-dimensional reality of the present utility model have the following advantages. First, object images are captured by the image capture device and spatial logic operations are performed by the processing device to obtain the corresponding three-dimensional virtual objects, which can interact with the other three-dimensional virtual objects in the virtual space and perform immersive operations. Second, the load balancing unit allocates the positional relationship and the movement relationship corresponding to each object to the three-dimensional rendering module for rendering according to the workload of the three-dimensional rendering module, which optimizes resource usage and accelerates the rendering work of the three-dimensional rendering module.

Although the present utility model has been disclosed above by way of embodiments, they are not intended to limit it. Anyone with ordinary knowledge in the technical field may make slight modifications and refinements without departing from the spirit and scope of the present utility model; the scope of protection of the present utility model is therefore defined by the appended claims.

1: object
2: three-dimensional virtual object
100: autostereoscopic 3D reality system
110: image capture device
120: processing device
121: artificial intelligence module
122: algorithm module
123: spatial logic operation module
1231: image-space positioning operation unit
1232: image database
1233: full-space logic operation unit
1234: scene database
1235: load balancing unit
124: three-dimensional rendering module
130: display device
200: method for providing autostereoscopic three-dimensional reality
S: virtual space
S10: image capture step
S20: image processing step
S21: recognition step
S22: operation step
S221: positioning operation step
S222: logic operation step
S223: load balancing step
S224: three-dimensional rendering step
S30: display step

FIG. 1 is a schematic diagram of the autostereoscopic three-dimensional reality system according to the first embodiment of the present utility model;
FIG. 2 is a schematic diagram of the processing device of FIG. 1;
FIG. 3 is a block diagram of the steps of the method for providing autostereoscopic three-dimensional reality according to the second embodiment of the present utility model;
FIG. 4 is a schematic diagram of a usage scenario of the autostereoscopic three-dimensional reality system of FIG. 1; and
FIG. 5 is a flowchart of the steps of the method for providing autostereoscopic three-dimensional reality of FIG. 3.

100: autostereoscopic 3D reality system
110: image capture device
120: processing device
121: artificial intelligence module
122: algorithm module
123: spatial logic operation module
124: three-dimensional rendering module
130: display device

Claims (8)

1. An autostereoscopic three-dimensional reality system for obtaining a three-dimensional virtual object for each of a plurality of objects, the autostereoscopic three-dimensional reality system comprising:
at least one image capture device for capturing a detection image of each of the objects;
a processing device signal-connected to the at least one image capture device and comprising:
at least one artificial intelligence module for recognizing and segmenting the detection image according to a recognition rule to generate image object information;
a spatial logic operation module connected to the at least one artificial intelligence module and comprising:
an image-space positioning operation unit, which obtains the image object information and performs three-dimensional spatial positioning to generate positioning data of each of the objects in a virtual space; and
a full-space logic operation unit connected to the image-space positioning operation unit, wherein the full-space logic operation unit obtains the image object information and the positioning data and performs a spatial logic operation on the image object information and the positioning data according to a plurality of algorithms to generate a positional relationship and a movement relationship among the objects in the virtual space; and
at least one three-dimensional rendering module connected to the spatial logic operation module, which obtains the positional relationship and the movement relationship to render the three-dimensional virtual object corresponding to each of the objects; and
at least one display device signal-connected to the processing device, which obtains and displays the three-dimensional virtual object.

2. The autostereoscopic three-dimensional reality system of claim 1, wherein the recognition rule includes a plurality of defined objects, and the at least one artificial intelligence module integrates the detection image with the defined objects to generate the image object information.

3. The autostereoscopic three-dimensional reality system of claim 2, wherein the image object information includes a three-dimensional coordinate corresponding to each of the objects at a viewing angle.

4. The autostereoscopic three-dimensional reality system of claim 1, wherein the spatial logic operation module further comprises:
an image database connected to the image-space positioning operation unit, the image database storing a plurality of historical image objects;
wherein the image-space positioning operation unit integrates the image object information with the historical image objects to generate the positioning data, and stores the image object information and the positioning data in the image database to update the image database.
5. The autostereoscopic three-dimensional reality system of claim 1, wherein the spatial logic operation module further comprises:
a scene database connected to the full-space logic operation unit, the scene database storing a plurality of defined scenes;
wherein the full-space logic operation unit applies one of the defined scenes to the virtual space and performs the spatial logic operation to generate the positional relationship of each of the objects in the virtual space.

6. The autostereoscopic three-dimensional reality system of claim 1, further comprising:
at least one algorithm module connected to the spatial logic operation module for storing the algorithms;
wherein the full-space logic operation unit obtains the algorithms and operates on the positional relationship according to the algorithms to obtain the movement relationship.

7. The autostereoscopic three-dimensional reality system of claim 6, wherein the movement relationship includes a continuity state of the interaction among the objects.

8. The autostereoscopic three-dimensional reality system of claim 1, wherein the spatial logic operation module further comprises:
a load balancing unit connected to the spatial logic operation module and the at least one three-dimensional rendering module, for allocating the positional relationship and the movement relationship corresponding to each of the objects according to the workload of the at least one three-dimensional rendering module.

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112209839U TWM650161U (en) 2023-09-12 2023-09-12 Autostereoscopic 3d reality system


Publications (1)

Publication Number Publication Date
TWM650161U 2024-01-01

Family

ID=90455626


