TWI818665B - Method, processing device, and display system for information display - Google Patents

Method, processing device, and display system for information display

Info

Publication number
TWI818665B
Authority
TW
Taiwan
Prior art keywords
information
processing device
user
sight
display
Prior art date
Application number
TW111130006A
Other languages
Chinese (zh)
Other versions
TW202320016A (en)
Inventor
蘇育萱
蔡宇翔
戴宏明
徐雅柔
林凱舜
Original Assignee
財團法人工業技術研究院
Priority date
Filing date
Publication date
Application filed by 財團法人工業技術研究院
Priority to CN202211252575.3A (publication CN116107534A)
Priority to US17/979,785 (patent US11822851B2)
Publication of TW202320016A
Application granted
Publication of TWI818665B

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Alarm Systems (AREA)
  • Details Of Aerials (AREA)
  • Position Input By Displaying (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Aerials With Secondary Devices (AREA)

Abstract

A method, a processing device, and a display system for information display are proposed. The system includes a plurality of light-transmissive displays and a plurality of processing devices that are connected to and communicate with one another via gateways. A first processing device is selected from the processing devices according to position information of a user, and determines sight line information of the user according to the position information and posture information of the user. A second processing device, different from the first processing device, calculates an object coordinate of a target object. The first processing device selects a third processing device from the processing devices according to the sight line information of the user, and the third processing device determines display position information of a virtual object according to a user coordinate and the object coordinate and controls one of the displays to display the virtual object according to the display position information.

Description

Information display method and information display system and processing device thereof

The invention relates to an information display technology.

With the development of image processing and spatial positioning technologies, applications of transparent displays have attracted growing attention. Such technology allows a display to be paired with physical objects and supplemented with virtual objects, producing interactive experiences tailored to the user's needs and presenting information in a more intuitive way.

Furthermore, a virtual object associated with a physical object can be displayed at a specific position on a transparent display, so that through the display a user can simultaneously view the physical object and the virtual object superimposed on it or beside it. For example, by installing a transparent display on an observation deck, viewers can see the landscape together with landscape information provided by the display. In some large-scale application scenarios, however, a combination of multiple transparent displays may be required to provide such blended virtual-and-real information display services, and the numbers of physical objects and users are also larger. If a single central computing device were responsible for all computing tasks, computation delays could occur because of the excessive workload or other factors, making it impossible to provide real-time blended display services to viewers.

The present disclosure provides an information display method, and an information display system and processing device using the method, which can allocate computing tasks to selected processing devices according to the user's position and line-of-sight information.

In an exemplary embodiment of the present disclosure, the information display system includes a plurality of light-transmissive displays, a plurality of sensing information acquisition devices, and a plurality of processing devices. The sensing information acquisition devices acquire position information and posture information of a user as well as position information of a target object. The processing devices respectively correspond to the displays and are connected to and communicate with one another through a plurality of gateways. A first processing device is selected from the processing devices according to the position information of the user, and determines line-of-sight information of the user according to the position information and posture information provided by the sensing information acquisition devices. A second processing device, different from the first processing device, performs coordinate conversion on the position information of the target object provided by the sensing information acquisition devices to calculate an object coordinate of the target object. The first processing device selects a third processing device from the processing devices according to the line-of-sight information of the user. The third processing device determines display position information of a virtual object according to a user coordinate and the object coordinate, and controls one of the displays to display the virtual object according to the display position information.

In an exemplary embodiment of the present disclosure, the information display method is applicable to an information display system having a plurality of light-transmissive displays, a plurality of sensing information acquisition devices, and a plurality of processing devices, and includes the following steps. The sensing information acquisition devices acquire position information and posture information of a user as well as position information of a target object. A first processing device is selected from the processing devices according to the position information of the user. The first processing device determines line-of-sight information of the user according to the position information and posture information provided by the sensing information acquisition devices. A second processing device, different from the first processing device, performs coordinate conversion on the position information of the target object provided by the sensing information acquisition devices to calculate an object coordinate of the target object. A third processing device is selected from the processing devices according to the line-of-sight information of the user. The third processing device determines display position information of a virtual object according to a user coordinate and the object coordinate, and controls one of the displays to display the virtual object according to the display position information.

An exemplary embodiment of the present disclosure provides a processing device that is connected to a light-transmissive display and a sensing information acquisition device, and is connected to a plurality of other processing devices through a plurality of gateways. The sensing information acquisition device acquires position information and posture information of a user, and the processing device includes a memory and a processor. The memory stores data, and the processor is configured to perform the following steps. It is determined, through the sensing information acquisition device, that the distance between the processing device and the user is smaller than the distance between each of the other processing devices and the user. Line-of-sight information of the user is determined according to the position information and posture information provided by the sensing information acquisition device. One of the plurality of other processing devices is selected according to the line-of-sight information of the user, and the line-of-sight information is transmitted to the selected processing device through the gateways. The selected processing device determines display position information of a virtual object according to the line-of-sight information, a user coordinate, and an object coordinate, and controls the display, or another display connected to the other processing devices, to display the virtual object according to the display position information.

To make the present disclosure easier to understand, exemplary embodiments are described in detail below with reference to the accompanying drawings.

Some exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. When the same reference numerals appear in different drawings, they will be regarded as referring to the same or similar elements. These exemplary embodiments are only part of the disclosure and do not cover all of its possible implementations. Rather, they are merely examples of the method and system within the scope of the claims of the present disclosure.

FIG. 1A is a block diagram of an information display system according to an exemplary embodiment of the present disclosure. FIG. 1A first introduces the components of the system and their configuration; their detailed functions will be disclosed together with the flowcharts of subsequent exemplary embodiments.

Referring to FIG. 1A, the information display system 10 in this exemplary embodiment may include multiple displays 110_1, 110_2, 110_3, ..., 110_N, multiple sensing information acquisition devices 120_1, 120_2, 120_3, ..., 120_N, and multiple processing devices 130_1, 130_2, 130_3, ..., 130_N. The processing devices 130_1~130_N may be connected wirelessly, by wire, or electrically to the displays 110_1~110_N and the sensing information acquisition devices 120_1~120_N, respectively. It should be noted that the example of FIG. 1A connects one processing device to one display and one sensing information acquisition device, e.g., the processing device 130_1 is connected to the display 110_1 and the sensing information acquisition device 120_1, but the disclosure is not limited thereto. In other examples, one processing device may be connected to multiple sensing information acquisition devices or multiple displays.
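The pairing described above (one processing device per display and per sensing device, with other configurations allowed) can be sketched as a simple data model. This is illustrative only; the class, field names, and positions below are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ProcessingNode:
    node_id: int
    position: Tuple[float, float]        # fixed, known position in the venue
    display_ids: List[int] = field(default_factory=list)
    sensor_ids: List[int] = field(default_factory=list)

# One-to-one configuration as in FIG. 1A: node n drives display n and sensor n.
nodes = [ProcessingNode(n, (2.0 * n, 0.0), [n], [n]) for n in range(1, 4)]
```

A deployment attaching several sensors to one node would simply list more IDs in `sensor_ids`.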

The displays 110_1~110_N may be used to display information, and each may be composed of a single display device or a combination of multiple display devices. A display device may be a transmissive light-transmitting display such as a liquid crystal display (LCD), a field sequential color LCD, a light emitting diode (LED) display, or an electrowetting display, or it may be a projection-type light-transmitting display.

The sensing information acquisition devices 120_1~120_N may be used to acquire position information and posture information of the user, and include sensing devices for capturing user information. In some embodiments, the sensing information acquisition devices 120_1~120_N may include at least one image sensor, or at least one image sensor together with at least one depth sensor, to capture image data of the user located in front of the displays 110_1~110_N and thereby recognize and locate the user. The image sensor may be a visible light sensor or a non-visible light sensor such as an infrared sensor. In addition, the sensing information acquisition devices 120_1~120_N may also include optical positioners for optical spatial positioning of the user. In some embodiments, the sensing information acquisition devices 120_1~120_N may further recognize the postures of the user's limbs, torso, and head through various human posture recognition technologies; for example, they may identify the human skeleton and human body feature points from the image data to recognize the user's posture. Any device, or combination of devices, that can locate the user and recognize the user's posture falls within the scope of the sensing information acquisition devices 120_1~120_N.

On the other hand, the sensing information acquisition devices 120_1~120_N may be used to acquire position information of a target object in the physical scene, and include sensing devices for capturing target object information. In some embodiments, the sensing information acquisition devices 120_1~120_N may include at least one image sensor, or at least one image sensor together with at least one depth sensor, to capture image data of the target object located behind the displays 110_1~110_N and thereby recognize and locate the target object. The image sensor may be a visible light sensor or a non-visible light sensor such as an infrared sensor. Any device, or combination of devices, that can locate the target object falls within the scope of the sensing information acquisition devices 120_1~120_N.

In the embodiments of the present disclosure, the above image sensor may be used to capture images and includes a camera lens having a lens and a photosensitive element. The above depth sensor may be used to detect depth information, and can be implemented with active or passive depth sensing technology. Active depth sensing technology calculates depth information by actively emitting a light source, infrared light, ultrasound, a laser, or the like as a signal, combined with time-of-flight ranging. Passive depth sensing technology uses two image sensors to capture two images of the scene in front of them from different viewing angles, and calculates depth information from the disparity between the two images.
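The passive technique above can be sketched with the standard pinhole-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two sensors, and d the disparity. The numbers below are illustrative; the disclosure does not prescribe a specific formula.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from binocular disparity: Z = f * B / d (pinhole stereo model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A point with 50 px disparity, seen by a rig with a 700 px focal length and
# a 0.1 m baseline, lies 1.4 m in front of the cameras.
depth = stereo_depth(700.0, 0.1, 50.0)   # 1.4
```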

In some embodiments, the sensing information acquisition devices 120_1~120_N may transmit information to the processing devices 130_1~130_N in a wired or wireless manner through their respective communication interfaces. The processing devices 130_1~130_N are computer devices with computing capability and may be deployed in the venue to which the information display system 10 belongs; they may be built into the displays 110_1~110_N or connected to them as separate computer devices. The processing devices 130_1~130_N respectively correspond to the displays 110_1~110_N and can control the displays connected to them. For example, the processing device 130_1 can control the display 110_1 to display content.

For example, FIG. 1B is a schematic diagram of an information display system according to an exemplary embodiment of the present disclosure. For convenience and clarity, FIG. 1B uses three displays 110_1~110_3 and three sensing information acquisition devices 120_1~120_3 as an example, but the disclosure is not limited thereto. Referring to FIG. 1B, the user U1 and the target object Obj1 are located on the front side and the back side of the displays 110_1~110_3, respectively. In this example, the user U1 can view, through the display 110_2, the physical scene containing the target object Obj1 together with a virtual object Vf1. The virtual object Vf1 can be regarded as augmented reality content generated based on the target object Obj1.

It should be noted that the processing devices 130_1~130_N are connected to and communicate with one another through a plurality of gateways G1, G2, ..., Gk. Each gateway G1~Gk supports a wireless or wired transmission protocol and can establish a link with nearby gateways or with the processing devices 130_1~130_N. The disclosure does not limit the types of wireless and wired transmission protocols, which may be a WiFi standard, the ZigBee standard, a mobile communication standard, an Ethernet standard, and so on. In some embodiments, the gateways G1~Gk may form a network topology N1; however, the disclosure does not limit the number of gateways G1~Gk or the shape of the network topology. Each processing device 130_1~130_N is connected to at least one of the gateways G1~Gk. Through the links among the gateways G1~Gk, the processing devices 130_1~130_N can transmit information and communicate with one another.

It should also be noted that, with the links between the multiple processing devices 130_1~130_N and the gateways G1~Gk, the computing tasks required to display the virtual object Vf1 based on the position and posture information of the user U1 and the position information of the target object Obj1 can be distributed among some of the processing devices 130_1~130_N. In this way, computing efficiency can be improved through the distributed processing architecture, avoiding delayed display of virtual objects.

FIG. 2 is a flowchart of an information display method according to an exemplary embodiment of the present disclosure. Please refer to FIG. 1A, FIG. 1B, and FIG. 2 together; the method flow of FIG. 2 can be implemented by the information display system 10 of FIG. 1A and FIG. 1B.

In step S210, the sensing information acquisition devices 120_1~120_N acquire the position information and posture information of the user U1 and the position information of the target object Obj1. As mentioned above, the sensing information acquisition devices 120_1~120_N are, for example, image sensors, depth sensors, or combinations thereof that can locate the positions of the user U1 and the target object Obj1.

In step S220, a first processing device is selected from the processing devices 130_1~130_N according to the position information of the user U1. In some embodiments, the first processing device is the one of the processing devices 130_1~130_N closest to the position of the user U1; that is, the distance between the first processing device and the user U1 is smaller than the distance between each of the other processing devices and the user U1. In detail, at least one of the sensing information acquisition devices 120_1~120_N can locate the position of the user U1. Moreover, since the processing devices 130_1~130_N are fixedly installed in the venue of the information display system 10, their position information is known. Therefore, at least one of the processing devices 130_1~130_N can obtain the distance between each processing device and the user U1 from the position information of the user U1 and the known positions of the processing devices 130_1~130_N. In this way, the first processing device, namely the one closest to the position of the user U1, can be selected. As the user U1 moves, the first processing device closest to the user U1 may change accordingly.
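The nearest-device selection in step S220 amounts to a minimum-distance search over the known, fixed device positions. A minimal sketch, in which the device IDs and coordinates are hypothetical:

```python
import math

def pick_first_device(user_pos, device_positions):
    """Return the ID of the processing device closest to the user.

    device_positions maps a device ID to its fixed, known (x, z) position;
    user_pos comes from the sensing information acquisition devices.
    """
    return min(device_positions,
               key=lambda d: math.dist(user_pos, device_positions[d]))

# Hypothetical layout: three devices 3 m apart; the user stands near device 2.
devices = {1: (0.0, 0.0), 2: (3.0, 0.0), 3: (6.0, 0.0)}
first = pick_first_device((2.5, 1.0), devices)   # -> 2
```

Re-running the search whenever a new user position arrives captures the point that the first processing device changes as the user moves.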

In step S230, the first processing device determines line-of-sight information E1 of the user according to the position information and posture information of the user U1 provided by the sensing information acquisition devices 120_1~120_N. Specifically, after the first processing device is selected, it can obtain the position and posture information of the user directly from one of the sensing information acquisition devices 120_1~120_N connected to it, or from the gateways G1~Gk. The first processing device can then derive the line-of-sight information E1, which includes a line-of-sight vector, from the position and posture information of the user U1.
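A line-of-sight vector can be derived from head posture in several ways; the disclosure does not fix a formula. One common convention, shown purely as an illustration, maps head yaw and pitch angles to a unit gaze vector:

```python
import math

def sight_vector(yaw_deg, pitch_deg):
    """Unit line-of-sight vector from head yaw/pitch.

    Convention (an assumption): yaw 0 / pitch 0 looks straight ahead along +z;
    positive yaw turns toward +x, positive pitch looks up toward +y.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

v = sight_vector(0.0, 0.0)   # straight ahead: (0.0, 0.0, 1.0)
```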

In step S240, a second processing device different from the first processing device performs coordinate conversion on the position information of the target object Obj1 provided by the sensing information acquisition devices 120_1~120_N to calculate the object coordinate of the target object Obj1. In other words, the second processing device converts the position information of the target object Obj1 (for example, camera coordinates or image coordinates) provided by at least one of the sensing information acquisition devices 120_1~120_N to obtain the object coordinate in a three-dimensional display coordinate system.
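In the usual formulation, this coordinate conversion is a rigid transform from a sensor's camera frame into the shared display frame, p_disp = R·p_cam + t, with R and t obtained by calibration. A dependency-free sketch; the values of R and t below are illustrative, not calibration results from the disclosure:

```python
def camera_to_display(p_cam, R, t):
    """Rigid transform p_disp = R @ p_cam + t from a sensor's camera
    coordinates into the shared 3-D display coordinate system.
    R (3x3 rotation) and t (3-vector) would come from calibrating each
    sensing device against the display frame."""
    return tuple(sum(R[i][j] * p_cam[j] for j in range(3)) + t[i]
                 for i in range(3))

R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # sensor aligned with the display frame
t = [0.5, 0.0, 2.0]                     # sensor offset from the display origin
obj = camera_to_display([1.0, 0.2, 3.0], R, t)   # -> (1.5, 0.2, 5.0)
```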

In step S250, the first processing device selects a third processing device from the processing devices 130_1~130_N according to the line-of-sight information E1 of the user U1. In detail, after obtaining the line-of-sight information E1 of the user U1, the first processing device identifies one of the displays 110_1~110_N according to the line-of-sight information E1, and then selects the corresponding third processing device from the processing devices 130_1~130_N. In some embodiments, the first processing device may calculate, from the position information of the user U1, the sight angle range corresponding to each of the displays 110_1~110_N; when the line-of-sight information of the user U1 falls within the sight angle range of one of the displays, the first processing device identifies that display. Taking FIG. 1B as an example, the first processing device may calculate the sight angle range corresponding to the display 110_2 from the position information of the user U1. Since the line of sight of the user U1 falls within the sight angle range of the display 110_2, it can be determined that the gaze of the user U1 lands on the display 110_2.
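The sight-angle-range test can be illustrated in a simplified top-down (x, z) geometry: the gaze hits a display if its angle lies between the angles of the display's two edges as seen from the user's position. This 2-D sketch is an assumption for illustration, not the disclosure's exact computation:

```python
import math

def gaze_hits_display(user_xz, gaze_xz, left_edge_xz, right_edge_xz):
    """Top-down check: does the user's gaze angle fall inside the angular
    range subtended by a display whose edges are at the given (x, z) points?"""
    def angle_to(p):
        return math.atan2(p[0] - user_xz[0], p[1] - user_xz[1])
    lo, hi = sorted((angle_to(left_edge_xz), angle_to(right_edge_xz)))
    gaze_angle = math.atan2(gaze_xz[0], gaze_xz[1])
    return lo <= gaze_angle <= hi

# User at the origin looking straight ahead (+z) at a display spanning
# x in [-1, 1] at z = 2: the gaze lands on this display.
hit = gaze_hits_display((0.0, 0.0), (0.0, 1.0), (-1.0, 2.0), (1.0, 2.0))
```

Repeating the test for each display's edge pair identifies which display the user is looking at, and hence which processing device becomes the third processing device.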

That is to say, the first processing device can identify the display that the user is looking at based on the line-of-sight information E1 of the user U1. Since the displays 110_1~110_N are controlled by the corresponding processing devices 130_1~130_N, the first processing device can select the processing device corresponding to the display the user is looking at as the third processing device. It should be noted that the first processing device may be the same as or different from the third processing device. More specifically, in a scenario where the displays 110_1~110_N are arranged side by side and the processing devices 130_1~130_N are each placed adjacent to the corresponding display, when the user looks at the display directly in front, the first processing device closest to the user U1 is the same as the third processing device; when the user looks at a display to the left or right, the first processing device closest to the user U1 differs from the third processing device corresponding to the display the user U1 is looking at.

In some embodiments, the third processing device performs coordinate conversion on the position information of the user U1 provided by the sensing information acquisition devices 120_1~120_N to calculate the user coordinate of the user. In other words, the third processing device converts the position information of the user U1 (for example, camera coordinates or image coordinates) provided by at least one of the sensing information acquisition devices 120_1~120_N to obtain the user coordinate in the three-dimensional display coordinate system.

In step S260, the third processing device determines the display position information of the virtual object Vf1 according to the user coordinate and the object coordinate, and controls one of the displays 110_1~110_N to display the virtual object Vf1 according to that display position information. In some embodiments, the second processing device may transmit the object coordinate of the target object Obj1 to the third processing device via the gateways G1~Gk. Similarly, if the first processing device differs from the third processing device, the first processing device may transmit the line-of-sight information E1 of the user U1 to the third processing device via the gateways G1~Gk. Based on this, the third processing device can determine the display position information of the virtual object Vf1 from the user coordinate, the line-of-sight information E1, and the object coordinate. Specifically, the display position information can be regarded as the point or area on the display plane where the user's line of sight lands when viewing the target object Obj1. Depending on requirements and applications, the third processing device can use the display position information to decide the actual display position of the virtual object Vf1, so that the user U1 sees the virtual object Vf1 displayed near, or superimposed on, the target object Obj1.
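In a simplified axis-aligned setup (an assumption for illustration), the display position information of step S260 is the point where the user-to-target line of sight crosses the display plane:

```python
def projection_on_display(user_xyz, target_xyz, display_z):
    """Point where the user-to-target line of sight crosses the display
    plane z = display_z -- the on-screen landing point of the gaze."""
    ux, uy, uz = user_xyz
    tx, ty, tz = target_xyz
    if tz == uz:
        raise ValueError("line of sight is parallel to the display plane")
    s = (display_z - uz) / (tz - uz)
    return (ux + s * (tx - ux), uy + s * (ty - uy))

# User 1 m in front of the display, target 3 m behind it and 2 m to the right:
x, y = projection_on_display((0.0, 1.6, -1.0), (2.0, 1.6, 3.0), 0.0)
# The virtual object is drawn at that on-screen point, (0.5, 1.6).
```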

由此可知，透過閘道器G1~Gk鏈結多個處理裝置130_1~130_N，本揭露可將顯示虛擬物件Vf1所需的計算量分配給多個處理裝置來負責，從而大幅提昇計算效率，以避免虛擬物件的顯示延遲。It follows that, by linking the multiple processing devices 130_1~130_N through the gateways G1~Gk, the present disclosure can distribute the amount of computation required to display the virtual object Vf1 among multiple processing devices, thereby greatly improving computation efficiency and avoiding display delays of the virtual object.

以下將搭配顯示系統100列舉實施例以說明本揭露根據視線資訊決定第三處理裝置以及單使用者與多使用者的實施方式。為了方便且清楚說明，後續實施例將以3個處理裝置130_1~130_3分別連接3個顯示器110_1~110_3以及3個感知資訊擷取裝置120_1~120_3為範例進行說明，但本揭露不限制於此。處理裝置130_1~130_3可分別設置相鄰於對應的顯示器110_1~110_3。The following embodiments, illustrated with the display system 100, describe how the present disclosure determines the third processing device according to line of sight information, as well as single-user and multi-user implementations. For convenience and clarity, the subsequent embodiments are described using an example in which three processing devices 130_1~130_3 are respectively connected to three displays 110_1~110_3 and three sensing information acquisition devices 120_1~120_3, but the present disclosure is not limited thereto. The processing devices 130_1~130_3 may be respectively disposed adjacent to the corresponding displays 110_1~110_3.

圖3A是根據本揭露一範例實施例所繪示的資訊顯示系統的應用情境的示意圖。圖3B是根據本揭露一範例實施例所繪示的資訊顯示方法的流程圖。請同時參照圖3A與圖3B。於本實施例中,使用者U1位於顯示器110_2的前方,並且注視位於使用者U1正前方的顯示器110_2。FIG. 3A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the present disclosure. FIG. 3B is a flowchart of an information display method according to an exemplary embodiment of the present disclosure. Please refer to Figure 3A and Figure 3B at the same time. In this embodiment, the user U1 is located in front of the display 110_2 and looks at the display 110_2 located directly in front of the user U1.

感知資訊擷取裝置120_2可擷取使用者U1的位置資訊與姿態資訊(步驟S302),並將使用者U1的位置資訊與姿態資訊傳輸至例如處理裝置130_2。反應於接收使用者U1的位置資訊,處理裝置130_2可計算各個處理裝置130_1~130_3與使用者U1的位置資訊之間的距離。並且,處理裝置130_2可根據各個處理裝置130_1~130_3與使用者U1的位置資訊之間的距離選擇第一處理裝置(步驟S304)。於此,第一處理裝置是處理裝置130_1~130_3中與使用者U1的位置資訊距離最近的一者。於本範例中,假設處理裝置130_2為最靠近使用者U1的第一處理裝置。亦即,於一實施例中,處理裝置130_2可透過感知資訊擷取裝置120_2判定處理裝置130_2與使用者U1之間的距離小於各個其他處理裝置130_1、130_3與使用者U1之間的距離。The sensing information acquisition device 120_2 can acquire the position information and attitude information of the user U1 (step S302), and transmit the position information and attitude information of the user U1 to, for example, the processing device 130_2. In response to receiving the location information of the user U1, the processing device 130_2 can calculate the distance between each processing device 130_1˜130_3 and the location information of the user U1. Furthermore, the processing device 130_2 may select the first processing device according to the distance between each processing device 130_1 to 130_3 and the location information of the user U1 (step S304). Here, the first processing device is the one closest to the location information of the user U1 among the processing devices 130_1 to 130_3. In this example, it is assumed that the processing device 130_2 is the first processing device closest to the user U1. That is, in one embodiment, the processing device 130_2 can determine through the sensing information acquisition device 120_2 that the distance between the processing device 130_2 and the user U1 is smaller than the distance between each of the other processing devices 130_1, 130_3 and the user U1.
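The selection in steps S302~S304 amounts to an argmin over user-to-device distances. A minimal sketch, with hypothetical device positions in the shared display coordinate system:

```python
import math

def pick_first_device(user_pos, device_positions):
    """Select the processing device closest to the user's position.
    device_positions maps a device name to its (x, y, z) location; the
    values below are hypothetical placeholders."""
    return min(device_positions,
               key=lambda name: math.dist(user_pos, device_positions[name]))

devices = {
    "130_1": (-2.0, 0.0, 0.0),   # adjacent to display 110_1
    "130_2": (0.0, 0.0, 0.0),    # adjacent to display 110_2
    "130_3": (2.0, 0.0, 0.0),    # adjacent to display 110_3
}
print(pick_first_device((0.3, 1.6, 2.0), devices))   # 130_2
```

A user standing roughly in front of display 110_2 is thus handled by the adjacent processing device 130_2, matching the example in the text.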

接著,處理裝置130_2可根據使用者U1的位置資訊與姿態資訊來辨識使用者U1的視線資訊E1(步驟S306)。更詳細來說,處理裝置130_2被挑選來計算使用者U1的視線資訊E1,並判斷使用者U1的視線資訊E1落在那一個顯示器上。於本範例中,處理裝置130_2可根據使用者U1的視線資訊E1判定使用者U1的視線資訊E1落在顯示器110_2之上,並根據視線資訊E1所投射的顯示器110_2而選擇第三處理裝置(步驟S308)。於本範例中,用以控制顯示器110_2的處理裝置130_2為第三處理裝置。亦即,本範例之第一處理裝置與第三處理裝置是相同的並且皆為處理裝置130_2。於是,處理裝置130_2根據感知資訊擷取裝置120_2所提供之使用者U1的位置資訊進行座標轉換而計算使用者的使用者座標(步驟S310)。Next, the processing device 130_2 can identify the line of sight information E1 of the user U1 according to the position information and posture information of the user U1 (step S306). To be more specific, the processing device 130_2 is selected to calculate the line of sight information E1 of the user U1 and determine which display the line of sight information E1 of the user U1 falls on. In this example, the processing device 130_2 can determine that the line of sight information E1 of the user U1 falls on the display 110_2 based on the line of sight information E1 of the user U1, and select the third processing device according to the display 110_2 projected by the line of sight information E1 (step S308). In this example, the processing device 130_2 used to control the display 110_2 is the third processing device. That is, the first processing device and the third processing device in this example are the same and both are the processing device 130_2. Therefore, the processing device 130_2 performs coordinate conversion based on the location information of the user U1 provided by the sensing information acquisition device 120_2 to calculate the user coordinates of the user (step S310).

另一方面，感知資訊擷取裝置120_1~120_3可擷取目標物Obj1的位置資訊(步驟S312)。由於處理裝置130_2已經被挑選作為第一處理裝置，因此處理裝置130_1或處理裝置130_3可被選擇作為第二處理裝置(步驟S314)。以下將以處理裝置130_3為第二處理裝置為例繼續進行說明。處理裝置130_3可接收目標物Obj1的位置資訊與其他相關資訊，以進一步處理與目標物Obj1相關的目標物辨識(步驟S316)。接著，處理裝置130_3根據感知資訊擷取裝置120_1~120_3所提供之目標物Obj1的位置資訊進行座標轉換而計算目標物Obj1的目標物座標(步驟S318)，以將使用者U1的位置資訊與目標物Obj1的位置資訊都轉換至相同的座標系統下。處理裝置130_3可經由閘道器G1~Gk其中至少一將目標物Obj1的目標物座標傳輸至作為第三處理裝置的處理裝置130_2(步驟S320)。On the other hand, the sensing information capturing devices 120_1 to 120_3 can capture the position information of the target Obj1 (step S312). Since the processing device 130_2 has already been selected as the first processing device, the processing device 130_1 or the processing device 130_3 may be selected as the second processing device (step S314). The following description continues with the processing device 130_3 serving as the second processing device. The processing device 130_3 can receive the position information and other related information of the target Obj1 to further process the target recognition related to the target Obj1 (step S316). Next, the processing device 130_3 performs coordinate conversion on the position information of the target Obj1 provided by the sensing information acquisition devices 120_1 to 120_3 to calculate the target coordinates of the target Obj1 (step S318), so that the position information of the user U1 and the position information of the target Obj1 are converted into the same coordinate system. The processing device 130_3 may transmit the target coordinates of the target Obj1 to the processing device 130_2, which serves as the third processing device, via at least one of the gateways G1 to Gk (step S320).

最後，處理裝置130_2根據使用者座標與目標物座標決定虛擬物件Vf1的顯示位置資訊(步驟S322)，並控制顯示器110_2根據虛擬物件的顯示位置資訊顯示虛擬物件(步驟S324)。藉此，處理裝置130_2可以顯示位置資訊為參考基準來顯示虛擬物件Vf1。Finally, the processing device 130_2 determines the display position information of the virtual object Vf1 according to the user coordinates and the target object coordinates (step S322), and controls the display 110_2 to display the virtual object according to the display position information of the virtual object (step S324). Thereby, the processing device 130_2 can use the display position information as a reference for displaying the virtual object Vf1.

圖4A是根據本揭露一範例實施例所繪示的資訊顯示系統的應用情境的示意圖。圖4B是根據本揭露一範例實施例所繪示的資訊顯示方法的流程圖。請同時參照圖4A與圖4B。於本實施例中,使用者U1位於顯示器110_3的前方,並且注視位於使用者U1左側的顯示器110_1。FIG. 4A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the present disclosure. FIG. 4B is a flowchart of an information display method according to an exemplary embodiment of the present disclosure. Please refer to Figure 4A and Figure 4B at the same time. In this embodiment, the user U1 is located in front of the display 110_3 and looks at the display 110_1 located on the left side of the user U1.

感知資訊擷取裝置120_3可擷取使用者U1的位置資訊與姿態資訊(步驟S402),並將使用者U1的位置資訊與姿態資訊傳輸至例如處理裝置130_3。處理裝置130_3可根據各個處理裝置130_1~130_3與使用者U1的位置資訊之間的距離選擇第一處理裝置(步驟S404)。於本範例中,第一處理裝置是處理裝置130_1~130_3中與使用者U1的位置資訊距離最近的處理裝置130_3。接著,作為第一處理裝置的處理裝置130_3可根據使用者U1的位置資訊與姿態資訊來辨識使用者U1的視線資訊E2(步驟S406)。於本範例中,處理裝置130_3可根據使用者U1的視線資訊E2判定使用者U1的視線資訊E2落在顯示器110_1之上,並根據視線資訊E2所投射的顯示器110_1而選擇第三處理裝置(步驟S408)。於本範例中,用以控制顯示器110_1的處理裝置130_1為第三處理裝置。亦即,本範例之第一處理裝置與第三處理裝置是相異的。處理裝置130_3可經由閘道器G1~Gk其中至少一將使用者U1的視線資訊E2傳輸至作為第三處理裝置的處理裝置130_1(步驟S410)。The sensing information acquisition device 120_3 can acquire the position information and attitude information of the user U1 (step S402), and transmit the position information and attitude information of the user U1 to, for example, the processing device 130_3. The processing device 130_3 may select the first processing device according to the distance between each processing device 130_1 to 130_3 and the location information of the user U1 (step S404). In this example, the first processing device is the processing device 130_3 that is closest to the location information of the user U1 among the processing devices 130_1 to 130_3. Next, the processing device 130_3 as the first processing device can identify the line of sight information E2 of the user U1 based on the position information and posture information of the user U1 (step S406). In this example, the processing device 130_3 can determine that the line of sight information E2 of the user U1 falls on the display 110_1 based on the line of sight information E2 of the user U1, and select the third processing device according to the display 110_1 projected by the line of sight information E2 (step S408). In this example, the processing device 130_1 used to control the display 110_1 is the third processing device. That is, the first processing device and the third processing device in this example are different. 
The processing device 130_3 may transmit the line of sight information E2 of the user U1 to the processing device 130_1 as the third processing device via at least one of the gateways G1 to Gk (step S410).

另一方面，感知資訊擷取裝置120_1~120_3可擷取目標物Obj2的位置資訊(步驟S412)。由於處理裝置130_3已經被挑選作為第一處理裝置且處理裝置130_1已經被挑選作為第三處理裝置，因此處理裝置130_2可被選擇作為第二處理裝置(步驟S414)。處理裝置130_2可接收目標物Obj2的位置資訊與其他相關資訊，以進一步處理與目標物Obj2相關的目標物辨識(步驟S416)。接著，處理裝置130_2根據感知資訊擷取裝置120_1~120_3所提供之目標物Obj2的位置資訊進行座標轉換而計算目標物Obj2的目標物座標(步驟S418)。處理裝置130_2可經由閘道器G1~Gk其中至少一將目標物Obj2的目標物座標傳輸至作為第三處理裝置的處理裝置130_1(步驟S420)。On the other hand, the sensing information capturing devices 120_1 to 120_3 can capture the position information of the target object Obj2 (step S412). Since the processing device 130_3 has been selected as the first processing device and the processing device 130_1 has been selected as the third processing device, the processing device 130_2 may be selected as the second processing device (step S414). The processing device 130_2 can receive the position information and other related information of the target Obj2 to further process the target recognition related to the target Obj2 (step S416). Next, the processing device 130_2 performs coordinate conversion based on the position information of the target Obj2 provided by the sensing information acquisition devices 120_1 to 120_3 to calculate the target coordinates of the target Obj2 (step S418). The processing device 130_2 may transmit the target object coordinates of the target Obj2 to the processing device 130_1 as the third processing device via at least one of the gateways G1 to Gk (step S420).

處理裝置130_1可透過閘道器G1~Gk或直接從感知資訊擷取裝置120_1接收使用者U1的位置資訊。於是,處理裝置130_1根據使用者U1的位置資訊進行座標轉換而計算使用者的使用者座標(步驟S422)。處理裝置130_1根據使用者座標、目標物座標以及視線資訊E2決定虛擬物件Vf2的顯示位置資訊(步驟S424),並控制顯示器110_1根據虛擬物件Vf2的顯示位置資訊顯示虛擬物件Vf2(步驟S426)。於本範例中,處理裝置130_3(即第一處理裝置)可分析使用者U1的視線資訊E2。處理裝置130_2(即第二處理裝置)可處理目標物Obj2的物件辨識與座標轉換。處理裝置130_1(即第三處理裝置)根據使用者座標與目標物座標來決定虛擬物件Vf2的顯示位置資訊。The processing device 130_1 may receive the location information of the user U1 through the gateways G1˜Gk or directly from the sensing information acquisition device 120_1. Therefore, the processing device 130_1 performs coordinate conversion based on the location information of the user U1 to calculate the user coordinates of the user (step S422). The processing device 130_1 determines the display position information of the virtual object Vf2 based on the user coordinates, the target object coordinates, and the line of sight information E2 (step S424), and controls the display 110_1 to display the virtual object Vf2 based on the display position information of the virtual object Vf2 (step S426). In this example, the processing device 130_3 (ie, the first processing device) can analyze the line of sight information E2 of the user U1. The processing device 130_2 (ie, the second processing device) can process object recognition and coordinate conversion of the target Obj2. The processing device 130_1 (ie, the third processing device) determines the display position information of the virtual object Vf2 based on the user coordinates and the target object coordinates.

於一些實施例中,第一處理裝置可根據使用者U1的位置資訊計算對應於某一個顯示器的視線角度範圍。反應於使用者U1的視線資訊落在該視線角度範圍內,第一處理裝置可自顯示器110_1~110_3識別出使用者U1所注視的顯示器。In some embodiments, the first processing device can calculate the line of sight angle range corresponding to a certain display based on the location information of the user U1. In response to the sight line information of the user U1 falling within the sight angle range, the first processing device can identify the display that the user U1 is looking at from the displays 110_1 to 110_3.

詳細而言，圖5A與圖5B為根據本揭露實施例所繪示的估測視線位置的示意圖。請參照圖5A與圖5B，顯示器110_2的寬度為dw。透過於顯示器110_2前方設置已知位置的基準點P1~P4，處理裝置130_2可根據感知資訊擷取裝置120_2所擷取之影像上基準點P1~P4的像素位置來估測出使用者U1距離顯示器110_2之左邊界的橫向偏移距離X。基準點P1~P4可以為任何標示物，本揭露對此不限制。In detail, FIG. 5A and FIG. 5B are schematic diagrams of estimating a line of sight position according to an embodiment of the present disclosure. Please refer to FIG. 5A and FIG. 5B. The width of the display 110_2 is dw. By setting reference points P1~P4 with known positions in front of the display 110_2, the processing device 130_2 can estimate the lateral offset distance X of the user U1 from the left boundary of the display 110_2 based on the pixel positions of the reference points P1~P4 in the image captured by the sensing information acquisition device 120_2. The reference points P1~P4 can be any markers, and the present disclosure is not limited thereto.

更詳細而言，基準點P1~P2的深度資訊為D1，基準點P3~P4的深度資訊為D2。基準點P1之X軸像素座標D1_R減去基準點P2之X軸像素座標D1_L的相減結果比上深度資訊D1的比例，會相等於X軸像素座標Du_R減去X軸像素座標Du_L的相減結果比上深度資訊D的比例。同理，基準點P3之X軸像素座標D2_R減去基準點P4之X軸像素座標D2_L的相減結果比上深度資訊D2的比例，會相等於X軸像素座標Du_R減去X軸像素座標Du_L的相減結果比上深度資訊D的比例。基此，在X軸像素座標Du_R以及X軸像素座標Du_L可經由計算而得知的情況下，透過例如內插計算並基於使用者的深度資訊D以及寬度dw，可得知使用者U1距離顯示器110_2之左邊界的橫向偏移距離X。In more detail, the depth information of the reference points P1~P2 is D1, and the depth information of the reference points P3~P4 is D2. The ratio of (D1_R − D1_L), i.e., the X-axis pixel coordinate of the reference point P1 minus the X-axis pixel coordinate of the reference point P2, to the depth information D1 is equal to the ratio of (Du_R − Du_L) to the depth information D. Similarly, the ratio of (D2_R − D2_L), i.e., the X-axis pixel coordinate of the reference point P3 minus the X-axis pixel coordinate of the reference point P4, to the depth information D2 is equal to the ratio of (Du_R − Du_L) to the depth information D. Accordingly, once the X-axis pixel coordinates Du_R and Du_L have been obtained through calculation, the lateral offset distance X of the user U1 from the left boundary of the display 110_2 can be determined through, for example, interpolation based on the user's depth information D and the width dw.

如此一來,基於橫向偏移距離X、深度資訊D以及正切函數可計算出視線角度範圍θ。如圖5B所示,若使用者U1的視線資訊E3未落在視線角度範圍θ之內,代表使用者U1注視左側的顯示器110_1。反之,若使用者U1的視線資訊落在視線角度範圍θ之內,代表使用者U1注視顯示器110_2。In this way, the line of sight angle range θ can be calculated based on the lateral offset distance X, the depth information D and the tangent function. As shown in FIG. 5B , if the line of sight information E3 of the user U1 does not fall within the line of sight angle range θ, it means that the user U1 is looking at the display 110_1 on the left. On the contrary, if the line of sight information of the user U1 falls within the line of sight angle range θ, it means that the user U1 is looking at the display 110_2.
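The two steps of FIG. 5A and FIG. 5B, interpolating the lateral offset X from pixel coordinates and then turning X and the depth D into a sight angle window via the tangent, can be sketched as follows. The pixel values and display dimensions are hypothetical, and the single-axis (horizontal) treatment is a simplification of the method described above:

```python
import math

def lateral_offset(u_user, du_l, du_r, dw):
    """Interpolate the user's lateral offset X from the display's left edge.
    du_l and du_r are the X-axis pixel coordinates (Du_L, Du_R) of the
    display's left and right boundaries at the user's depth; dw is the
    physical display width."""
    return dw * (u_user - du_l) / (du_r - du_l)

def sight_angle_range(x_offset, dw, depth):
    """Angular window (degrees) within which a horizontal gaze still lands
    on the display: from the left edge at -X to the right edge at dw - X,
    both seen from distance `depth` (the tangent-based step of FIG. 5B)."""
    left = math.degrees(math.atan2(-x_offset, depth))
    right = math.degrees(math.atan2(dw - x_offset, depth))
    return left, right

def looks_at_display(gaze_deg, x_offset, dw, depth):
    """True if the horizontal gaze angle falls inside the display's window."""
    left, right = sight_angle_range(x_offset, dw, depth)
    return left <= gaze_deg <= right

x = lateral_offset(u_user=320, du_l=200, du_r=440, dw=1.2)   # hypothetical pixels
print(round(x, 2))                                           # 0.6
print(looks_at_display(0.0, x, dw=1.2, depth=2.0))           # True
print(looks_at_display(-45.0, x, dw=1.2, depth=2.0))         # False
```

A gaze outside the window (the -45 degree case) is what sends the first processing device looking for the neighbouring display, as in the text above.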

圖6A是根據本揭露一範例實施例所繪示的資訊顯示系統的應用情境的示意圖。圖6B是根據本揭露一範例實施例所繪示的資訊顯示方法的流程圖。請同時參照圖6A與圖6B。於本實施例中,使用者U1位於顯示器110_2的前方,並且從注視位於使用者U1正前方的顯示器110_2轉換為注視位於使用者U1左側的顯示器110_1。FIG. 6A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the present disclosure. FIG. 6B is a flowchart of an information display method according to an exemplary embodiment of the present disclosure. Please refer to Figure 6A and Figure 6B at the same time. In this embodiment, the user U1 is located in front of the display 110_2, and switches from looking at the display 110_2 located directly in front of the user U1 to looking at the display 110_1 located to the left of the user U1.

感知資訊擷取裝置120_2可擷取使用者U1的位置資訊與姿態資訊(步驟S602)，並將使用者U1的位置資訊與姿態資訊傳輸至例如處理裝置130_2。處理裝置130_2可根據各個處理裝置130_1~130_3與使用者U1的位置資訊之間的距離選擇第一處理裝置(步驟S604)。於本範例中，第一處理裝置可以是處理裝置130_1~130_3中與使用者U1的位置資訊距離最近的處理裝置130_2。接著，作為第一處理裝置的處理裝置130_2可根據使用者U1的位置資訊與姿態資訊來辨識使用者U1的視線資訊E1(步驟S606)。於本範例中，處理裝置130_2可根據使用者U1的視線資訊E1判定使用者U1的視線資訊E1落在顯示器110_2之上，並根據視線資訊E1所投射的顯示器110_2而選擇第三處理裝置(步驟S608)。於本範例中，在使用者U1的視線資訊發生變化之前，用以控制顯示器110_2的處理裝置130_2同樣為第三處理裝置。於是，處理裝置130_2計算使用者的使用者座標(步驟S610)。The sensing information acquisition device 120_2 can acquire the position information and attitude information of the user U1 (step S602), and transmit the position information and attitude information of the user U1 to, for example, the processing device 130_2. The processing device 130_2 may select the first processing device according to the distance between each processing device 130_1 to 130_3 and the location information of the user U1 (step S604). In this example, the first processing device may be the processing device 130_2 that is closest to the location information of the user U1 among the processing devices 130_1 to 130_3. Next, the processing device 130_2 as the first processing device can identify the line of sight information E1 of the user U1 based on the position information and posture information of the user U1 (step S606). In this example, the processing device 130_2 can determine that the line of sight information E1 of the user U1 falls on the display 110_2 based on the line of sight information E1 of the user U1, and select the third processing device according to the display 110_2 projected by the line of sight information E1 (step S608). In this example, before the line of sight information of the user U1 changes, the processing device 130_2 used to control the display 110_2 is also the third processing device. Therefore, the processing device 130_2 calculates the user coordinates of the user (step S610).
The processing device 130_2 determines the display position information of the virtual object Vf1 (step S612). The processing device 130_2 controls the display 110_2 to display the virtual object Vf1 (step S614). The detailed operation details of steps S602 to S614 have been described in detail in the previous embodiments and will not be described again here.

須特別注意的是，反應於使用者U1的轉身或轉頭，處理裝置130_2偵測使用者U1的視線資訊的變化(步驟S616)。於本範例中，使用者的視線資訊E1變化為視線資訊E3。在使用者U1的視線資訊發生變化之後，處理裝置130_2判斷使用者U1的視線資訊E3是否依然落在顯示器110_1~110_3其中之一者(即顯示器110_2)的所述視線角度範圍內(步驟S618)。反應於使用者U1的視線資訊E3未落在顯示器110_2的視線角度範圍內(步驟S618判斷為否)，處理裝置130_2根據使用者的視線資訊E3識別顯示器110_1~110_3其中之另一者(即顯示器110_1)，以根據顯示器110_1~110_3其中之另一者(即顯示器110_1)從處理裝置130_1~130_3選擇出另一第三處理裝置(步驟S620)。於本範例中，在使用者U1的視線資訊發生變化之後，用以控制顯示器110_1的處理裝置130_1被識別為另一第三處理裝置。於是，處理裝置130_2將視線資訊E3傳輸給後續負責顯示控制的處理裝置130_1(步驟S622)。It should be noted that, in response to the user U1 turning around or turning the head, the processing device 130_2 detects a change in the line of sight information of the user U1 (step S616). In this example, the user's line of sight information E1 changes to line of sight information E3. After the line of sight information of the user U1 changes, the processing device 130_2 determines whether the line of sight information E3 of the user U1 still falls within the sight angle range of one of the displays 110_1 to 110_3 (i.e., the display 110_2) (step S618). In response to the line of sight information E3 of the user U1 not falling within the sight angle range of the display 110_2 (the determination in step S618 is NO), the processing device 130_2 identifies another one of the displays 110_1 to 110_3 (i.e., the display 110_1) according to the user's line of sight information E3, so as to select another third processing device from the processing devices 130_1 to 130_3 according to that display (step S620). In this example, after the line of sight information of the user U1 changes, the processing device 130_1 used to control the display 110_1 is identified as the other third processing device. Therefore, the processing device 130_2 transmits the line of sight information E3 to the processing device 130_1, which is subsequently responsible for display control (step S622).
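The hand-over in steps S616~S622 can be sketched as re-checking the new gaze against each display's sight angle window and returning the device that controls the matching display. The window values, and the choice to keep the current device when the gaze lands on no display, are assumptions for illustration:

```python
def reselect_third_device(gaze_deg, windows, current):
    """Return the processing device whose display's sight angle window
    contains the new gaze angle. `windows` maps a device name to the
    (left_deg, right_deg) window of the display it controls; the values
    below are hypothetical."""
    for device, (left, right) in windows.items():
        if left <= gaze_deg <= right:
            return device          # may be the current device or a new one
    return current                 # gaze off every display: keep current

windows = {
    "130_1": (-60.0, -20.0),   # display 110_1, to the user's left
    "130_2": (-20.0, 20.0),    # display 110_2, straight ahead
    "130_3": (20.0, 60.0),     # display 110_3, to the right
}
print(reselect_third_device(-35.0, windows, current="130_2"))   # 130_1
```

Here a gaze swinging from 0 to -35 degrees hands display control from 130_2 to 130_1, mirroring the E1-to-E3 transition above.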

另一方面，感知資訊擷取裝置120_1~120_3可擷取目標物Obj1、Obj2的位置資訊(步驟S624)。由於處理裝置130_2已經被挑選作為第一處理裝置，因此處理裝置130_3可被選擇作為第二處理裝置(步驟S626)。處理裝置130_3可接收目標物Obj1、Obj2的位置資訊與其他相關資訊，以進一步處理與目標物Obj1、Obj2相關的目標物辨識(步驟S628)。接著，處理裝置130_3根據感知資訊擷取裝置120_1~120_3所提供之目標物Obj1、Obj2的位置資訊進行座標轉換而計算目標物Obj1、Obj2的目標物座標(步驟S630)。處理裝置130_3可經由閘道器G1~Gk其中至少一將目標物Obj1、Obj2的目標物座標傳輸至作為第三處理裝置的處理裝置130_1以及處理裝置130_2(步驟S632)。On the other hand, the sensing information capturing devices 120_1 to 120_3 can capture the position information of the target objects Obj1 and Obj2 (step S624). Since the processing device 130_2 has been selected as the first processing device, the processing device 130_3 may be selected as the second processing device (step S626). The processing device 130_3 can receive the position information and other related information of the target objects Obj1 and Obj2 to further process the target object recognition related to the target objects Obj1 and Obj2 (step S628). Next, the processing device 130_3 performs coordinate conversion based on the position information of the target objects Obj1 and Obj2 provided by the sensing information acquisition devices 120_1 to 120_3 to calculate the target coordinates of the target objects Obj1 and Obj2 (step S630). The processing device 130_3 may transmit the target object coordinates of the target objects Obj1 and Obj2 to the processing device 130_1 and the processing device 130_2 as the third processing device through at least one of the gateways G1 to Gk (step S632).

相似於前述操作原理,處理裝置130_1根據使用者U1的位置資訊進行座標轉換而計算使用者的使用者座標(步驟S634)。處理裝置130_1根據使用者座標、目標物座標以及視線資訊E3決定虛擬物件Vf2的顯示位置資訊(步驟S636),並控制顯示器110_1根據虛擬物件Vf2的顯示位置資訊顯示虛擬物件Vf2(步驟S638)。於本範例中,用以負責顯示控制的第三處理裝置將反應於視線變化而從處理裝置130_2切換為130_1。Similar to the aforementioned operating principles, the processing device 130_1 performs coordinate conversion based on the location information of the user U1 to calculate the user coordinates of the user (step S634). The processing device 130_1 determines the display position information of the virtual object Vf2 based on the user coordinates, the target object coordinates, and the line of sight information E3 (step S636), and controls the display 110_1 to display the virtual object Vf2 based on the display position information of the virtual object Vf2 (step S638). In this example, the third processing device responsible for display control will switch from the processing device 130_2 to the processing device 130_1 in response to the line of sight change.

圖7是根據本揭露一範例實施例所繪示的資訊顯示系統的應用情境的示意圖。請參照圖7，當使用者的人數超過一人時，處理裝置130_1~130_3其中之二可作為計算視線資訊E2、E4的第一處理裝置。於圖7的範例中，由於感知資訊擷取裝置120_1偵測到使用者U2，因此與使用者U2距離最近的處理裝置130_1將被選擇來作為計算使用者U2之視線資訊E4的第一處理裝置。另一方面，由於感知資訊擷取裝置120_1偵測到使用者U1，與使用者U1距離最近的處理裝置130_3將被選擇來作為計算使用者U1之視線資訊E2的第一處理裝置。此外，根據使用者U1與U2的視線資訊E2、E4投射於顯示器110_1、110_2上的視線位置，處理裝置130_1與130_2可分別用來計算對應於目標物Obj1、Obj2之虛擬物件的顯示位置資訊。FIG. 7 is a schematic diagram of an application scenario of the information display system according to an exemplary embodiment of the present disclosure. Please refer to FIG. 7. When there is more than one user, two of the processing devices 130_1 to 130_3 can serve as the first processing devices for calculating the line of sight information E2 and E4. In the example of FIG. 7, since the sensing information acquisition device 120_1 detects the user U2, the processing device 130_1 closest to the user U2 will be selected as the first processing device to calculate the line of sight information E4 of the user U2. On the other hand, since the sensing information acquisition device 120_1 detects the user U1, the processing device 130_3 closest to the user U1 will be selected as the first processing device to calculate the line of sight information E2 of the user U1. In addition, based on the sight positions where the line of sight information E2 and E4 of the users U1 and U2 are projected on the displays 110_1 and 110_2, the processing devices 130_1 and 130_2 can be used to calculate the display position information of the virtual objects corresponding to the targets Obj1 and Obj2, respectively.
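The multi-user case above amounts to one nearest-device lookup per detected user, so several first processing devices can compute sight information in parallel. A minimal sketch; all positions are hypothetical:

```python
import math

def assign_first_devices(users, devices):
    """Map each detected user to the processing device nearest to them.
    users and devices map names to (x, y, z) positions in the shared
    display coordinate system; the values below are hypothetical."""
    return {
        uname: min(devices, key=lambda d: math.dist(upos, devices[d]))
        for uname, upos in users.items()
    }

users = {"U1": (1.8, 1.6, 2.0), "U2": (-1.9, 1.6, 2.2)}
devices = {
    "130_1": (-2.0, 0.0, 0.0),
    "130_2": (0.0, 0.0, 0.0),
    "130_3": (2.0, 0.0, 0.0),
}
print(assign_first_devices(users, devices))   # {'U1': '130_3', 'U2': '130_1'}
```

This reproduces the assignment in FIG. 7: device 130_3 serves U1 and device 130_1 serves U2.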

圖8A是根據本揭露一範例實施例所繪示的資訊顯示系統的應用情境的示意圖。圖8B是根據本揭露一範例實施例所繪示的資訊顯示方法的流程圖。請同時參照圖8A與圖8B。於本實施例中,使用者U1與多位其他使用者U3、U4位於顯示器110_3的前方,並且使用者U1與多位其他使用者U3、U4都注視位於左側的顯示器110_1。FIG. 8A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the present disclosure. FIG. 8B is a flowchart of an information display method according to an exemplary embodiment of the present disclosure. Please refer to Figure 8A and Figure 8B at the same time. In this embodiment, the user U1 and multiple other users U3 and U4 are located in front of the display 110_3, and the user U1 and multiple other users U3 and U4 are all looking at the display 110_1 located on the left.

感知資訊擷取裝置120_3可擷取使用者U1與多位其他使用者U3、U4的位置資訊與姿態資訊(步驟S802)，並將使用者U1與多位其他使用者U3、U4的位置資訊與姿態資訊傳輸至例如處理裝置130_3。於本實施例中，處理裝置130_3可根據各個處理裝置130_1~130_3與使用者U1與多位其他使用者U3、U4的位置資訊之間的距離選擇第一處理裝置(步驟S804)。具體而言，處理裝置130_3可計算出處理裝置130_1~130_3與使用者U1之間的距離。相似的，處理裝置130_3可計算出處理裝置130_1~130_3分別與多位其他使用者U3、U4之間的距離。處理裝置130_3可從上述所有距離中找到最小距離，並選擇關聯於該最小距離的處理裝置作為第一處理裝置。於本範例中，使用者U1與處理裝置130_3之間相距一最小距離，因此處理裝置130_3被選擇作為第一處理裝置。The sensing information acquisition device 120_3 can capture the position information and posture information of the user U1 and the multiple other users U3 and U4 (step S802), and transmit the position information and posture information of the user U1 and the multiple other users U3 and U4 to, for example, the processing device 130_3. In this embodiment, the processing device 130_3 can select the first processing device according to the distances between each of the processing devices 130_1~130_3 and the position information of the user U1 and the multiple other users U3 and U4 (step S804). Specifically, the processing device 130_3 can calculate the distances between the processing devices 130_1~130_3 and the user U1. Similarly, the processing device 130_3 can calculate the distances between the processing devices 130_1~130_3 and the multiple other users U3 and U4, respectively. The processing device 130_3 can find the minimum distance among all the above distances, and select the processing device associated with the minimum distance as the first processing device. In this example, the user U1 and the processing device 130_3 are separated by the minimum distance, so the processing device 130_3 is selected as the first processing device.

接著，作為第一處理裝置的處理裝置130_3可根據使用者U1與多位其他使用者U3、U4的位置資訊與姿態資訊來辨識使用者U1與多位其他使用者U3、U4的視線資訊E3、E5、E6(步驟S806)。Next, the processing device 130_3, as the first processing device, can identify the line of sight information E3, E5, and E6 of the user U1 and the multiple other users U3 and U4 according to their position information and posture information (step S806).

處理裝置130_3判斷感知資訊擷取裝置120_1~120_3其中一者(即感知資訊擷取裝置120_3)是否同時偵測到使用者U1與多位其他使用者U3、U4(步驟S808)。反應於感知資訊擷取裝置120_3同時偵測到使用者U1與其他使用者U3、U4(步驟S808判斷為是)，處理裝置130_3根據使用者U1的視線資訊E3與其他使用者U3、U4的視線資訊E5、E6計算得出一共同視線方向(步驟S810)，並根據共同視線方向從處理裝置130_1~130_3選擇出第三處理裝置以及顯示器110_1~110_3其中之一者(步驟S812)。於一些實施例中，處理裝置130_3可計算使用者U1的視線資訊E3與其他使用者U3、U4的視線資訊E5、E6於各軸方向上的分量的平均值，以獲取共同視線方向。The processing device 130_3 determines whether one of the sensing information acquisition devices 120_1~120_3 (i.e., the sensing information acquisition device 120_3) detects the user U1 and the multiple other users U3 and U4 at the same time (step S808). In response to the sensing information acquisition device 120_3 detecting the user U1 and the other users U3 and U4 at the same time (the determination in step S808 is YES), the processing device 130_3 calculates a common line of sight direction according to the line of sight information E3 of the user U1 and the line of sight information E5 and E6 of the other users U3 and U4 (step S810), and selects the third processing device from the processing devices 130_1~130_3 as well as one of the displays 110_1~110_3 according to the common line of sight direction (step S812). In some embodiments, the processing device 130_3 may calculate the average of the components of the line of sight information E3 of the user U1 and the line of sight information E5 and E6 of the other users U3 and U4 along each axis direction to obtain the common line of sight direction.

須說明的是，於一些實施例中，在計算一共同視線方向之前，處理裝置130_3還可判斷使用者U1的視線資訊E3與其他使用者U3、U4的視線資訊E5、E6之間的視線方向差異是否符合預設條件。具體而言，處理裝置130_3可判定使用者U1的視線向量分別與其他使用者U3、U4的視線向量之間的角度差距是否小於門檻值。若是，可判定使用者U1的視線資訊E3與其他使用者U3、U4的視線資訊E5、E6之間的視線方向差異符合一預設條件，代表使用者U1與其他使用者U3、U4注視相似的位置。It should be noted that, in some embodiments, before calculating the common line of sight direction, the processing device 130_3 may further determine whether the difference in line of sight direction between the line of sight information E3 of the user U1 and the line of sight information E5 and E6 of the other users U3 and U4 meets a preset condition. Specifically, the processing device 130_3 may determine whether the angle differences between the line of sight vector of the user U1 and the line of sight vectors of the other users U3 and U4 are respectively less than a threshold value. If so, it can be determined that the difference in line of sight direction between the line of sight information E3 of the user U1 and the line of sight information E5 and E6 of the other users U3 and U4 meets the preset condition, indicating that the user U1 and the other users U3 and U4 are gazing at similar locations.
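Steps S808~S812, together with the preset-condition check above, can be sketched as follows: verify that every pair of gaze vectors differs by less than a threshold angle, then take the per-axis average as the common line of sight direction. The 15-degree threshold is an assumed value; the patent does not specify one.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3-D gaze vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def common_gaze(gazes, threshold_deg=15.0):
    """If every pair of gaze vectors differs by less than threshold_deg,
    return their per-axis average as the common line of sight direction;
    otherwise return None (the users are not watching similar locations)."""
    for i in range(len(gazes)):
        for j in range(i + 1, len(gazes)):
            if angle_between(gazes[i], gazes[j]) >= threshold_deg:
                return None
    n = len(gazes)
    return tuple(sum(g[k] for g in gazes) / n for k in range(3))

# Three users all looking slightly left and forward (E3, E5, E6):
gazes = [(-0.5, 0.0, -1.0), (-0.55, 0.0, -1.0), (-0.45, 0.0, -1.0)]
print(tuple(round(c, 3) for c in common_gaze(gazes)))   # (-0.5, 0.0, -1.0)
```

When `common_gaze` returns None, each user's gaze would instead be handled individually, as in the earlier single-user embodiments.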

此外,於本範例中,由於共同視線方向落在顯示器110_1之上,因此處理裝置130_3根據共同視線方向所投射的顯示器110_1而選擇第三處理裝置。於本範例中,用以控制顯示器110_1的處理裝置130_1為第三處理裝置。處理裝置130_3可經由閘道器G1~Gk其中至少一將共同視線方向傳輸至作為第三處理裝置的處理裝置130_1(步驟S814)。In addition, in this example, since the common line of sight direction falls on the display 110_1, the processing device 130_3 selects the third processing device according to the display 110_1 projected by the common line of sight direction. In this example, the processing device 130_1 used to control the display 110_1 is the third processing device. The processing device 130_3 may transmit the common line of sight direction to the processing device 130_1 as the third processing device via at least one of the gateways G1 to Gk (step S814).

另一方面,感知資訊擷取裝置120_1~120_3可擷取目標物Obj2的位置資訊(步驟S816)。由於處理裝置130_3已經被挑選作為第一處理裝置且處理裝置130_1已經被挑選作為第三處理裝置,因此處理裝置130_2可被選擇作為第二處理裝置(步驟S818)。處理裝置130_2可接收目標物Obj2的位置資訊與其他相關資訊,以進一步處理與目標物Obj2相關的目標物辨識(步驟S820)。接著,處理裝置130_2根據感知資訊擷取裝置120_1~120_3所提供之目標物Obj2的位置資訊進行座標轉換而計算目標物Obj2的目標物座標(步驟S822)。處理裝置130_2可經由閘道器G1~Gk其中至少一將目標物Obj2的目標物座標傳輸至作為第三處理裝置的處理裝置130_1(步驟S824)。On the other hand, the sensing information capturing devices 120_1 to 120_3 can capture the position information of the target object Obj2 (step S816). Since the processing device 130_3 has been selected as the first processing device and the processing device 130_1 has been selected as the third processing device, the processing device 130_2 may be selected as the second processing device (step S818). The processing device 130_2 can receive the position information and other related information of the target Obj2 to further process the target recognition related to the target Obj2 (step S820). Next, the processing device 130_2 performs coordinate conversion based on the position information of the target Obj2 provided by the sensing information acquisition devices 120_1 to 120_3 to calculate the target coordinates of the target Obj2 (step S822). The processing device 130_2 may transmit the target object coordinates of the target Obj2 to the processing device 130_1 as the third processing device via at least one of the gateways G1 to Gk (step S824).

The processing device 130_1 may receive the position information of the user U1 through the gateways G1 to Gk or directly from the sensing information capturing device 120_1. The processing device 130_1 then performs coordinate conversion on the position information of the user U1 to calculate the user coordinate of the user (step S826). The processing device 130_1 determines the display position information of the virtual object Vf2 according to the user coordinate, the object coordinate, and the common sight line direction (step S828), and controls the display 110_1 to display the virtual object Vf2 according to the display position information of the virtual object Vf2 (step S830).
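One common way to realize step S828 on a transparent display is to place the virtual object where the sight line from the user to the target crosses the display plane. The sketch below assumes the display lies in the plane y = 0 between user and target; the plane choice and coordinates are illustrative assumptions, not values from the patent.

```python
def display_position(user_xyz, target_xyz, display_y=0.0):
    """Intersect the user->target sight line with the display plane
    (assumed here to be the plane y = display_y)."""
    ux, uy, uz = user_xyz
    tx, ty, tz = target_xyz
    if uy == ty:
        raise ValueError("sight line is parallel to the display plane")
    # Parameter along the segment from the user's eye to the target.
    t = (display_y - uy) / (ty - uy)
    return (ux + t * (tx - ux), display_y, uz + t * (tz - uz))

# User eye at (0, -2, 1.6), target at (0, 2, 1.2): the sight line
# crosses the display halfway, at (0.0, 0.0, 1.4).
print(display_position((0.0, -2.0, 1.6), (0.0, 2.0, 1.2)))
```

For the multi-user case, the same intersection can be computed with the common sight line direction in place of a single user's sight line.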

FIG. 9 is a block diagram of a processing device according to an embodiment of the disclosure. The processing device 900 may be any of the processing devices 130_1 to 130_N of the aforementioned embodiments. Referring to FIG. 9, the processing device 900 may include a memory 901, a processor 902, and a transmission element 903. The memory 901 may be, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, a hard disk, other similar devices, integrated circuits, or a combination thereof. The processor 902 may be, for example, a central processing unit (CPU), an application processor (AP), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), an image signal processor (ISP), a graphics processing unit (GPU), other similar devices, integrated circuits, or a combination thereof. The transmission element 903 is a communication device supporting wired or wireless transmission protocols, such as a combination of a transceiver and an antenna. The processor 902 may execute the instructions, program code, or software modules recorded in the memory 901 to implement the information display method of the disclosure.

The information display method, processing device, and information display system proposed in the exemplary embodiments of the disclosure can distribute computation among multiple processing devices according to the position and sight line of the user, thereby improving computing efficiency and avoiding delayed display in virtual-real fusion display services. As a result, virtual objects can be displayed in real time and smoothly, greatly improving the user's viewing experience.

Although the disclosure has been described above with exemplary embodiments, they are not intended to limit the disclosure. Any person with ordinary skill in the art may make slight changes and modifications without departing from the spirit and scope of the disclosure. Therefore, the protection scope of the disclosure shall be defined by the appended claims and their equivalents.

10: information display system
110_1~110_N: displays
120_1~120_N: sensing information capturing devices
130_1~130_N: processing devices
G1~Gk: gateways
N1: network topology
Vf1, Vf2: virtual objects
RF1: reference display object frame
U1, U2, U3, U4: users
Obj1, Obj2: target objects
E1, E2, E3, E4, E5, E6: sight line information
P1~P4: reference points
901: memory
902: processor
903: transmission element
S210~S260, S302~S324, S402~S426, S602~S638, S802~S830: steps

The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1A is a block diagram of an information display system according to an exemplary embodiment of the disclosure.
FIG. 1B is a schematic diagram of an information display system according to an exemplary embodiment of the disclosure.
FIG. 2 is a flowchart of an information display method according to an exemplary embodiment of the disclosure.
FIG. 3A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the disclosure.
FIG. 3B is a flowchart of an information display method according to an exemplary embodiment of the disclosure.
FIG. 4A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the disclosure.
FIG. 4B is a flowchart of an information display method according to an exemplary embodiment of the disclosure.
FIG. 5A and FIG. 5B are schematic diagrams of estimating the sight line position according to embodiments of the disclosure.
FIG. 6A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the disclosure.
FIG. 6B is a flowchart of an information display method according to an exemplary embodiment of the disclosure.
FIG. 7 is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the disclosure.
FIG. 8A is a schematic diagram of an application scenario of an information display system according to an exemplary embodiment of the disclosure.
FIG. 8B is a flowchart of an information display method according to an exemplary embodiment of the disclosure.
FIG. 9 is a block diagram of a processing device according to an embodiment of the disclosure.

S210~S260: steps

Claims (17)

1. An information display system, comprising: a plurality of light-transmissive displays; a plurality of sensing information capturing devices, configured to capture position information and posture information of a user and to capture position information of a target object; and a plurality of processing devices, respectively corresponding to the displays and connected to and communicating with one another via a plurality of gateways, wherein a first processing device is selected from the processing devices according to the position information of the user, and the first processing device determines sight line information of the user according to the position information and the posture information of the user provided by the sensing information capturing devices, wherein a second processing device different from the first processing device performs coordinate conversion on the position information of the target object provided by the second sensing information capturing device to calculate an object coordinate of the target object, wherein the first processing device selects a third processing device from the processing devices according to the sight line information of the user, and the third processing device determines display position information of a virtual object according to a user coordinate and the object coordinate and controls one of the displays to display the virtual object according to the display position information of the virtual object, and wherein the first processing device is the one of the processing devices closest to the position information of the user.
2. The information display system according to claim 1, wherein the first processing device is the same as or different from the third processing device, and the third processing device performs coordinate conversion on the position information of the user provided by the sensing information capturing devices to calculate the user coordinate of the user.
3. The information display system according to claim 1, wherein the second processing device transmits the object coordinate of the target object to the third processing device via the gateways.
4. The information display system according to claim 1, wherein the first processing device identifies the one of the displays according to the sight line information of the user, so as to select the third processing device from the processing devices according to the one of the displays.
5. The information display system according to claim 4, wherein the first processing device calculates a sight line angle range corresponding to the one of the displays according to the position information of the user, and in response to the sight line information of the user falling within the sight line angle range, the first processing device identifies the one of the displays from the displays.
6. The information display system according to claim 4, wherein in response to a change in the sight line information of the user, the first processing device determines whether the sight line information of the user still falls within the sight line angle range of the one of the displays; and in response to the sight line information of the user not falling within the sight line angle range of the one of the displays, the first processing device identifies another one of the displays according to the sight line information of the user, so as to select another third processing device from the processing devices according to the another one of the displays.
7. The information display system according to claim 1, wherein the first processing device determines whether one of the sensing information capturing devices simultaneously detects the user and a plurality of other users, and in response to the one of the sensing information capturing devices simultaneously detecting the user and the other users, the first processing device calculates a common sight line direction according to the sight line information of the user and sight line information of the other users, and selects the third processing device and the one of the displays from the processing devices according to the common sight line direction.
8. The information display system according to claim 7, wherein a sight line direction difference between the sight line information of the user and the sight line information of the other users meets a preset condition.
9. An information display method, adapted to an information display system including a plurality of light-transmissive displays, a plurality of sensing information capturing devices, and a plurality of processing devices, the method comprising: capturing position information and posture information of a user and position information of a target object by the sensing information capturing devices; selecting a first processing device from the processing devices according to the position information of the user; determining, by the first processing device, sight line information of the user according to the position information and the posture information of the user provided by the sensing information capturing devices; performing, by a second processing device different from the first processing device, coordinate conversion on the position information of the target object provided by the sensing information capturing devices to calculate an object coordinate of the target object; selecting a third processing device from the processing devices according to the sight line information of the user; and determining, by the third processing device, display position information of a virtual object according to a user coordinate and the object coordinate, and controlling one of the displays to display the virtual object according to the display position information of the virtual object, wherein the step of selecting the first processing device from the processing devices according to the position information of the user comprises: selecting, from the processing devices, the first processing device closest to the position information of the user.
10. The information display method according to claim 9, wherein the first processing device is the same as or different from the third processing device, and the method further comprises: performing, by the third processing device, coordinate conversion on the position information of the user provided by the sensing information capturing devices to calculate the user coordinate of the user.
11. The information display method according to claim 9, further comprising: transmitting, by the second processing device, the object coordinate of the target object to the third processing device via the gateways.
12. The information display method according to claim 9, wherein the step of selecting the third processing device from the processing devices according to the sight line information of the user comprises: identifying the one of the displays by the first processing device according to the sight line information of the user; and selecting the third processing device from the processing devices according to the one of the displays.
13. The information display method according to claim 12, wherein the step of identifying the one of the displays by the first processing device according to the sight line information of the user comprises: calculating, by the first processing device, a sight line angle range corresponding to the one of the displays according to the position information of the user; and in response to the sight line information of the user falling within the sight line angle range, identifying the one of the displays from the displays by the first processing device.
14. The information display method according to claim 12, further comprising: in response to a change in the sight line information of the user, determining, by the first processing device, whether the sight line information of the user still falls within the sight line angle range of the one of the displays; and in response to the sight line information of the user not falling within the sight line angle range of the one of the displays, identifying another one of the displays by the first processing device according to the sight line information of the user, so as to select another third processing device from the processing devices according to the another one of the displays.
15. The information display method according to claim 9, further comprising: determining, by the first processing device, whether one of the sensing information capturing devices simultaneously detects the user and a plurality of other users; and in response to the one of the sensing information capturing devices simultaneously detecting the user and the other users, calculating, by the first processing device, a common sight line direction according to the sight line information of the user and sight line information of the other users, wherein the step of selecting the third processing device from the processing devices according to the sight line information of the user comprises: selecting the third processing device and the one of the displays from the processing devices according to the common sight line direction.
16. The information display method according to claim 9, wherein a sight line direction difference between the sight line information of the user and the sight line information of the other users meets a preset condition.
17. A processing device, connected to a light-transmissive display and a sensing information capturing device, and connected to a plurality of other processing devices via a plurality of gateways, wherein the sensing information capturing device is configured to capture position information and posture information of a user, the processing device comprising: a memory, configured to store data; and a processor, connected to the memory and configured to: determine, through the sensing information capturing device, that a distance between the processing device and the user is smaller than a distance between each of the other processing devices and the user, wherein the processing device is closest to the position information of the user compared with the other processing devices; determine sight line information of the user according to the position information and the posture information of the user provided by the sensing information capturing device; and select one of the plurality of processing devices according to the sight line information of the user, and transmit the sight line information of the user to the one of the plurality of processing devices via the gateways, wherein the one of the plurality of processing devices determines display position information of a virtual object according to the sight line information, a user coordinate, and an object coordinate, and controls the display or another display connected to the other processing devices to display the virtual object according to the display position information of the virtual object.
TW111130006A 2021-11-10 2022-08-10 Method, processing device, and display system for information display TWI818665B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211252575.3A CN116107534A (en) 2021-11-10 2022-10-13 Information display method, processing device and information display system
US17/979,785 US11822851B2 (en) 2021-11-10 2022-11-03 Information display system, information display method, and processing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163278071P 2021-11-10 2021-11-10
US63/278,071 2021-11-10

Publications (2)

Publication Number Publication Date
TW202320016A TW202320016A (en) 2023-05-16
TWI818665B true TWI818665B (en) 2023-10-11

Family

ID=87379014

Family Applications (3)

Application Number Title Priority Date Filing Date
TW111130006A TWI818665B (en) 2021-11-10 2022-08-10 Method, processing device, and display system for information display
TW111137134A TWI832459B (en) 2021-11-10 2022-09-30 Method, processing device, and display system for information display
TW111137587A TWI832465B (en) 2021-11-10 2022-10-03 Light-transmitting antenna

Family Applications After (2)

Application Number Title Priority Date Filing Date
TW111137134A TWI832459B (en) 2021-11-10 2022-09-30 Method, processing device, and display system for information display
TW111137587A TWI832465B (en) 2021-11-10 2022-10-03 Light-transmitting antenna

Country Status (1)

Country Link
TW (3) TWI818665B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589199A (en) * 2014-11-06 2016-05-18 精工爱普生株式会社 Display device, method of controlling the same, and program
US20170053437A1 (en) * 2016-06-06 2017-02-23 Jian Ye Method and apparatus for positioning navigation in a human body by means of augmented reality based upon a real-time feedback
TW201814320A (en) * 2016-10-03 2018-04-16 申雲洪 Zoom optical system for calculating coordinates of target object and method for sharing information of object capable of calculating the coordinate of an object, retrieving information of the object according to the coordinate, and sharing the information with other electronic devices
TW202109272A (en) * 2019-08-28 2021-03-01 財團法人工業技術研究院 Interaction display method and interaction display system
TW202125401A (en) * 2019-12-25 2021-07-01 財團法人工業技術研究院 Method, processing device, and display system for information display

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001185938A (en) * 1999-12-27 2001-07-06 Mitsubishi Electric Corp Two-frequency common antenna, multifrequency common antenna, and two-frequency and multifrequency common array antenna
WO2006106759A1 (en) * 2005-04-01 2006-10-12 Nissha Printing Co., Ltd. Transparent antenna for vehicle and vehicle glass with antenna
JP5830987B2 (en) * 2011-07-06 2015-12-09 ソニー株式会社 Display control apparatus, display control method, and computer program
CN203039108U (en) * 2013-01-16 2013-07-03 东莞理工学院 Broadband UHF printing dipole antenna
WO2014183262A1 (en) * 2013-05-14 2014-11-20 Empire Technology Development Llc Detection of user gestures
US10523993B2 (en) * 2014-10-16 2019-12-31 Disney Enterprises, Inc. Displaying custom positioned overlays to a viewer
WO2019070420A1 (en) * 2017-10-05 2019-04-11 Eastman Kodak Company Transparent antenna
JP2020021225A (en) * 2018-07-31 2020-02-06 株式会社ニコン Display control system, display control method, and display control program
TWI734024B (en) * 2018-08-28 2021-07-21 財團法人工業技術研究院 Direction determination system and direction determination method
TW202017368A (en) * 2018-10-29 2020-05-01 品臻聯合系統股份有限公司 A smart glasses, a smart glasses system, and a method for using the smart glasses
CN113437504B (en) * 2021-06-21 2023-08-01 中国科学院重庆绿色智能技术研究院 Transparent antenna preparation method based on film lithography process and transparent antenna

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589199A (en) * 2014-11-06 2016-05-18 精工爱普生株式会社 Display device, method of controlling the same, and program
US20170053437A1 (en) * 2016-06-06 2017-02-23 Jian Ye Method and apparatus for positioning navigation in a human body by means of augmented reality based upon a real-time feedback
TW201814320A (en) * 2016-10-03 2018-04-16 申雲洪 Zoom optical system for calculating coordinates of target object and method for sharing information of object capable of calculating the coordinate of an object, retrieving information of the object according to the coordinate, and sharing the information with other electronic devices
TW202109272A (en) * 2019-08-28 2021-03-01 財團法人工業技術研究院 Interaction display method and interaction display system
TW202125401A (en) * 2019-12-25 2021-07-01 財團法人工業技術研究院 Method, processing device, and display system for information display

Also Published As

Publication number Publication date
TW202319906A (en) 2023-05-16
TWI832459B (en) 2024-02-11
TW202320412A (en) 2023-05-16
TW202320016A (en) 2023-05-16
TWI832465B (en) 2024-02-11

Similar Documents

Publication Publication Date Title
US11368662B2 (en) Multi-baseline camera array system architectures for depth augmentation in VR/AR applications
CN112527102B (en) Head-mounted all-in-one machine system and 6DoF tracking method and device thereof
CA2926861C (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
JP6619871B2 (en) Shared reality content sharing
JP2018511098A (en) Mixed reality system
US20150161762A1 (en) Information processing apparatus, information processing method, and program
WO2019155840A1 (en) Information processing device, information processing method, and program
WO2015093130A1 (en) Information processing device, information processing method, and program
CN110895676A (en) Dynamic object tracking
TWI793390B (en) Method, processing device, and display system for information display
US20180028861A1 (en) Information processing device and information processing method
JPH10198506A (en) System for detecting coordinate
TWI818665B (en) Method, processing device, and display system for information display
EP4168988A1 (en) Low power visual tracking systems
WO2017163648A1 (en) Head-mounted device
US9445015B2 (en) Methods and systems for adjusting sensor viewpoint to a virtual viewpoint
US20230161537A1 (en) Information display system, information display method, and processing device
CN112819970B (en) Control method and device and electronic equipment
EP3971683A1 (en) Human body portion tracking method and human body portion tracking system
WO2017163647A1 (en) Head-mounted device
Bhowmik Embedded 3D-Sensing Devices with Real-Time Depth-Imaging Technologies
CN205139892U (en) Electronic device
CN117716419A (en) Image display system and image display method
JP2020095671A (en) Recognition device and recognition method