TW202318180A - Systems and methods for communicating model uncertainty to users - Google Patents

Systems and methods for communicating model uncertainty to users

Info

Publication number
TW202318180A
Authority
TW
Taiwan
Prior art keywords
user
level
feedback
uncertainty
probability
Prior art date
Application number
TW111129002A
Other languages
Chinese (zh)
Inventor
Aakar Gupta
Marcello Giordano
Tanya Renee Jonker
Ting Zhang
Hrvoje Benko
Original Assignee
Meta Platforms Technologies, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/575,676 (US11789544B2)
Application filed by Meta Platforms Technologies, LLC
Publication of TW202318180A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06F 3/016 Input arrangements with force or tactile feedback as computer generated output to the user

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The disclosed computer-implemented method may include (1) receiving information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user, (2) determining, based on the information, a level of uncertainty associated with the real-time output, (3) modulating at least one attribute of feedback based on the level of uncertainty, and (4) presenting the feedback to the user substantially contemporaneous with the real-time output of the recognition model. Various other methods, systems, and computer-readable media are also disclosed.

Description

Systems and methods for communicating model uncertainty to users

This application relates to systems and methods for communicating model uncertainty to users.

Cross-Reference to Related Applications

This application claims the benefit of U.S. Provisional Application No. 63/234,823, filed August 19, 2021, and U.S. Non-Provisional Application No. 17/575,676, filed January 14, 2022, the disclosures of which are incorporated herein by reference in their entirety.

Users interact with computing systems in a variety of ways to provide input to, or otherwise control, actions performed by those systems. A user may interact directly with a computing system using a simple physical user input device that provides reliable discrete outputs (e.g., key or button press events) to the computing system. Such simple physical user input devices can independently provide the user with actionable feedback on the user's behaviors. For example, a user may provide text input to a computing system using a physical keyboard whose keys offer tactile resistance to movement at the point at which they produce a discrete output. A physical user input device capable of independently providing immediate feedback may enable a user to confidently provide input to a computing system without verifying that each user input was correctly recognized and responded to by the computing system. For example, a physical keyboard may enable many users to quickly and reliably provide character input to a computing system without looking at the keys they are pressing and/or without verifying that individual characters were correctly received and processed by the computing system.

As an alternative to simple physical user input devices, computing systems may use virtual user input devices capable of detecting and reacting to certain user behaviors, such as virtual keyboards and/or recognition models. In some examples, a recognition model, such as a gesture recognizer, may output a probability or likelihood that a user has performed or is performing a behavior. To react to behaviors using probabilities, computing systems typically apply a threshold that discretizes the probabilities in order to determine the onset or occurrence of a behavior. For example, a computing system may use a threshold probability of 0.6 to distinguish the occurrence of a behavior from its non-occurrence. Where the probability of a behavior occurring is very high (e.g., 0.9), the computing system can be relatively certain that the behavior occurred. Likewise, where the probability is very low (e.g., 0.1), the computing system can be relatively certain that the behavior did not occur. However, the computing system may be less certain about probabilities in a range near the threshold (e.g., probabilities between 0.5 and 0.7), which can lead to false-positive and false-negative detections and occasional mismatches between the system's responses and the user's behaviors.
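
To make the thresholding concrete, the sketch below discretizes model probabilities against a 0.6 threshold and flags the band of values a system might treat as uncertain; the function names and the width of the uncertain band are illustrative assumptions, not values taken from this application.

```python
# Minimal sketch of probability thresholding (names and the 0.1 band are
# illustrative assumptions, not values from the application).

THRESHOLD = 0.6        # probability at or above which a behavior is treated as occurring
UNCERTAIN_BAND = 0.1   # half-width of the region around the threshold treated as uncertain

def discretize(probability: float) -> bool:
    """Interpret a model probability as occurrence (True) or non-occurrence (False)."""
    return probability >= THRESHOLD

def is_uncertain(probability: float) -> bool:
    """Flag probabilities near the threshold, where false positives and negatives arise."""
    return abs(probability - THRESHOLD) < UNCERTAIN_BAND

for p in (0.9, 0.1, 0.65, 0.55):
    print(f"p={p:.2f} -> occurred={discretize(p)}, uncertain={is_uncertain(p)}")
```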

One embodiment of the present application relates to a computer-implemented method that includes: receiving information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user; determining, based on the information, a level of uncertainty associated with the real-time output; modulating at least one attribute of feedback based on the level of uncertainty; and presenting the feedback to the user substantially contemporaneously with the real-time output of the recognition model.

Another embodiment of the present application relates to a system that includes at least one physical processor and physical memory containing computer-executable instructions that, when executed by the at least one physical processor, cause the at least one physical processor to: receive information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user; determine, based on the information, a level of uncertainty associated with the real-time output; modulate at least one attribute of feedback based on the level of uncertainty; and present the feedback to the user substantially contemporaneously with the real-time output of the recognition model.

Yet another embodiment of the present application relates to a non-transitory computer-readable medium containing one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: receive information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user; determine, based on the information, a level of uncertainty associated with the real-time output; modulate at least one attribute of feedback based on the level of uncertainty; and present the feedback to the user substantially contemporaneously with the real-time output of the recognition model.

The present disclosure is generally directed to using various forms of actionable real-time feedback to communicate model uncertainty and related information to users. In some embodiments, the systems and methods described herein may use real-time haptic feedback (e.g., modulated vibrations) and/or other types of non-visual feedback to convey levels of uncertainty associated with the outputs of a recognition model (e.g., a recognition model adapted to detect pointing and selection behaviors). Embodiments of the present disclosure may enable users to understand, in the moment, when a computing system is most likely to react correctly to their behaviors, which may allow users to confidently provide input to the computing system with little or no hesitation. Embodiments of the present disclosure may also enable users to understand, in the moment, when a computing system is most likely to react incorrectly to their behaviors, which may lead users to modify certain behaviors so that they are better recognized by the computing system. In some embodiments, the feedback presented to a user may indicate how the user may modify a behavior to be better recognized by the computing system. By conveying uncertainty information to users via uncertainty-modulated feedback, embodiments of the present disclosure may enable users to better understand how a computing system will interpret their behaviors, which may reduce the number of recognition errors made by the computing system and yield improved user interaction experiences and performance.

Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.

A detailed description of an exemplary system for communicating model uncertainty to users is provided below with reference to FIG. 1. The discussion corresponding to FIGS. 2-12 provides a detailed description of corresponding methods and data flows. Finally, with reference to FIGS. 13-21, detailed descriptions are provided below of various extended-reality systems and components that may implement embodiments of the present disclosure.

FIG. 1 is a block diagram of an example system 100 for communicating model uncertainty to users. As illustrated in this figure, example system 100 may include one or more modules 102 for performing one or more tasks. As will be explained in greater detail below, modules 102 may include a receiving module 104 that receives information associated with the real-time outputs of a recognition model. Example system 100 may also include a determination module 106 that determines, based on the information received by receiving module 104, a level of uncertainty associated with a real-time output of the recognition model. Example system 100 may also include a modulation module 108 that modulates various attributes of feedback based on the level of uncertainty. Additionally, example system 100 may include a presentation module 110 that presents the feedback to the user substantially contemporaneously with the real-time output of the recognition model. In some embodiments, example system 100 may also include a user input module 112 that performs, or refrains from performing, reactive user input operations in response to the real-time outputs (e.g., recognition events) of the recognition model.
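
The division of labor among modules 102 may be easier to see in a compact sketch. All class and method names below are hypothetical stand-ins for receiving module 104, determination module 106, modulation module 108, and presentation module 110, and the uncertainty measure is just one plausible choice (the threshold-distance form discussed with equation (1) below).

```python
# Hypothetical sketch of the module pipeline in system 100 (all names illustrative).

class UncertaintyFeedbackPipeline:
    def receive(self, model_probability: float) -> float:
        # Receiving module 104: accept information associated with a real-time output.
        return model_probability

    def determine_uncertainty(self, probability: float, threshold: float = 0.6) -> float:
        # Determination module 106: one plausible measure, peaking at the threshold.
        return 1.0 - abs(probability - threshold) / max(threshold, 1.0 - threshold)

    def modulate(self, uncertainty: float) -> dict:
        # Modulation module 108: map the uncertainty level onto a feedback attribute.
        return {"vibration_amplitude": uncertainty}

    def present(self, feedback: dict) -> None:
        # Presentation module 110: deliver the feedback (stubbed as a print).
        print(f"haptic feedback: {feedback}")

pipeline = UncertaintyFeedbackPipeline()
p = pipeline.receive(0.55)
pipeline.present(pipeline.modulate(pipeline.determine_uncertainty(p)))
```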

As further illustrated in FIG. 1, example system 100 may also include one or more memory devices, such as memory 120. Memory 120 may include or represent any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, memory 120 may store, load, and/or maintain one or more of modules 102. Examples of memory 120 include, without limitation, random access memory (RAM), read-only memory (ROM), flash memory, hard disk drives (HDDs), solid-state drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.

As further illustrated in FIG. 1, example system 100 may also include one or more physical processors, such as physical processor 130. Physical processor 130 may include or represent any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, physical processor 130 may access and/or modify one or more of modules 102 stored in memory 120. Additionally or alternatively, physical processor 130 may execute one or more of modules 102 to facilitate communicating model uncertainty to users. Examples of physical processor 130 include, without limitation, microprocessors, microcontrollers, central processing units (CPUs), field-programmable gate arrays (FPGAs) that implement soft-core processors, application-specific integrated circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.

As further illustrated in FIG. 1, example system 100 may also include one or more recognition models, such as recognition models 140. Recognition models 140 may represent or include any machine learning model, algorithm, heuristic, data, or combination thereof that may be used to recognize, detect, compute, estimate, predict, label, infer, and/or react to user behaviors (e.g., motions, cognitive states, intentions, etc.) and/or effects caused by user behaviors (e.g., device, headset, or controller movements). Examples of recognition models 140 include, without limitation: pinch detection models adapted to recognize and/or react to pinch gestures (e.g., pinch detection model 142); hand-tracking models adapted to recognize and/or react to hand states and/or hand movements (e.g., hand-tracking model 144); models adapted to recognize and/or react to a user's intention to interact (e.g., intent-to-interact model 146); models adapted to recognize and/or react to transitions between a user's cognitive states (e.g., cognitive state model 148); gesture recognition models; eye-tracking models; speech recognition models; body-sensing models; body-tracking models; physiological-state models adapted to recognize and/or react to a user's physiological states; mental-state models adapted to recognize and/or react to a user's mental states; models for predicting and reacting to a user's intentions for encoding input; models adapted to perform inside-out tracking of computing devices; portions of one or more of the same; variations or combinations of one or more of the same; or any other suitable recognition model. In some examples, recognition models 140 may include, without limitation, decision trees (e.g., boosted decision trees), neural networks (e.g., deep convolutional neural networks), deep-learning models, support vector machines, linear classifiers, non-linear classifiers, perceptrons, naive Bayes classifiers, any other machine learning or classification technique or algorithm, or any combination thereof.

In some examples, the systems described herein may use one or more of recognition models 140 to recognize and/or react to a user's pointing gestures, selections, or interactions with graphical elements displayed to the user via system 100 (e.g., elements of a graphical user interface or elements rendered from a virtual-reality or augmented-reality environment). For example, the systems described herein may use pinch detection model 142 to detect the moments at which the user intends to make a selection and may use hand-tracking model 144 to determine the graphical element that the user is most likely attempting to select.

In some examples, system 100 may include one or more sensors 150 for gathering real-time measurements indicative of user behaviors. Recognition models 140 may use information derived from the measurements gathered from sensors 150 to recognize user behaviors in real time. Examples of sensors 150 include, without limitation, touch sensors, image sensors, proximity sensors, biometric sensors, inertial measurement units, biosensors, heart rate sensors, saturated-oxygen sensors, neuromuscular sensors, altimeter sensors, temperature sensors, bioimpedance sensors, pedometer sensors, optical sensors, sweat sensors, variations or combinations of one or more of the same, or any other type or form of sensing hardware or software.

FIG. 2 is a diagram of an exemplary data flow 200 for communicating levels of uncertainty associated with recognition models 140 to users using one or more of the elements of system 100. In this example, one or more of recognition models 140 may take input from one or more of sensors 150 to generate one or more real-time outputs 202 (e.g., recognition events) indicating and/or predicting one or more user behaviors. As shown, receiving module 104 and user input module 112 may monitor and/or react to real-time outputs 202. In addition to, or as an alternative to, monitoring real-time outputs 202, receiving module 104 may also monitor some or all of the inputs to recognition models 140 (e.g., the measurements of sensors 150) that recognition models 140 may use to generate real-time outputs 202. In some examples, real-time outputs 202 may represent a time series (e.g., a time series of probabilities of one or more user behaviors, such as probabilities 302 illustrated in FIG. 3).

In some examples, recognition models 140 may be adapted to recognize, detect, compute, estimate, predict, label, infer, and/or react to one or more user behaviors, and real-time outputs 202 may represent or include any information indicating the occurrence or non-occurrence of one or more user behaviors. In some examples, real-time outputs 202 may include or represent a probability or likelihood that a user is performing a particular behavior (e.g., a pinch or pointing gesture). Additionally or alternatively, recognition models 140 may be adapted to recognize many user behaviors, and real-time outputs 202 may include or represent a label or classification of the behavior that a user is likely performing and/or a confidence level or score associated with that label or classification. In some examples, recognition models 140 may be adapted to estimate, predict, and/or infer various attributes of one or more user behaviors, and real-time outputs 202 may represent or include one or more estimated attributes of a behavior (e.g., a predicted position, orientation, or state) and/or a confidence level or score associated with the estimated attributes.
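
Because a real-time output may carry a probability, a label with a confidence score, and/or estimated attributes, it can be pictured as a small record. The following sketch is a hypothetical representation; none of the field names come from the application.

```python
# Hypothetical record for a recognition model's real-time output (illustrative only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecognitionOutput:
    timestamp_ms: int                     # when the output was produced
    probability: Optional[float] = None   # likelihood a behavior is occurring (e.g., a pinch)
    label: Optional[str] = None           # classification of the behavior likely being performed
    confidence: Optional[float] = None    # confidence level or score for the label or estimate
    attributes: Optional[dict] = None     # estimated attributes, e.g., predicted position

out = RecognitionOutput(timestamp_ms=1000, probability=0.72, label="pinch", confidence=0.8)
print(out)
```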

In some examples, user input module 112 may be adapted to perform user input operations and/or otherwise respond to the associated user behaviors indicated by real-time outputs 202. Examples of user input operations that may be performed by user input module 112 in response to real-time outputs 202 include, without limitation: updating the position of a cursor based on a user behavior indicated by real-time outputs 202 (e.g., a pointing gesture); making a selection based on a user behavior indicated by real-time outputs 202 (e.g., a pinch gesture); updating the state of a virtual character or environment to reflect a user behavior indicated by real-time outputs 202; displaying, via a graphical user-interface element, user input associated with a user behavior indicated by real-time outputs 202; processing user input associated with a user behavior indicated by real-time outputs 202; storing user input associated with a user behavior indicated by real-time outputs 202; triggering an action or function associated with user input indicated by a user behavior associated with real-time outputs 202; passing user input associated with a user behavior indicated by real-time outputs 202 to another function or application for further processing; and/or transmitting user input associated with a user behavior indicated by real-time outputs 202 to another computing system for further processing.

As shown in FIG. 2, receiving module 104 may monitor real-time outputs 202 and/or any information associated with recognition models 140 and/or real-time outputs 202 that may directly and/or indirectly indicate uncertainty about real-time outputs 202 and/or the interpretation of real-time outputs 202. For example, receiving module 104 may receive information associated with sensors 150 that may indicate the reliability of the measurements of sensors 150 and/or information associated with real-time outputs 202 (e.g., a probability or confidence level). Determination module 106 may use the information gathered by receiving module 104 to determine or calculate one or more uncertainty levels 204 associated with real-time outputs 202. Modulation module 108 may then use uncertainty levels 204 to modulate at least one attribute of the appropriate feedback to be presented by presentation module 110.

As mentioned above, real-time outputs 202 of recognition models 140 may include a time series of probabilities indicating the occurrence and/or non-occurrence of certain user behaviors. FIG. 3 is an illustration of an exemplary time series of probabilities 302. Each of probabilities 302 may represent the probability that the user is performing a behavior (e.g., a pinch gesture) at the corresponding moment in time. In this example, the disclosed systems may use threshold 304 (e.g., 0.6) to distinguish behavior occurrence from behavior non-occurrence. For example, the disclosed systems may treat probabilities greater than threshold 304 as behavior occurrences and probabilities less than threshold 304 as behavior non-occurrences. Where the probability of a behavior occurring is very high (e.g., probability 306), the disclosed systems can be relatively certain that the behavior occurred and may react accordingly. Likewise, where the probability is very low (e.g., probability 308), the disclosed systems can be relatively certain that the behavior did not occur and may react accordingly. However, the disclosed systems may be less certain about probabilities in a range near threshold 304 (e.g., probabilities 310 and 312) and may react in ways that conflict with the user's intentions. For example, the disclosed systems may react to probability 310 as if the user were performing the behavior even though the user did not actually perform it. Similarly, the disclosed systems may react to probability 312 as if the user were not performing the behavior even though the user actually did perform it. As will be explained in greater detail below, the disclosed systems may use various forms of feedback to convey the levels of uncertainty associated with probabilities 302 to the user, which may lead the user to adapt subsequent behaviors such that the levels of uncertainty associated with those behaviors are reduced.

FIG. 4 is a flow diagram of an exemplary computer-implemented method 400 for communicating model uncertainty to users. The steps shown in FIG. 4 may be performed by any suitable computer-executable code and/or computing system, including the system illustrated in FIG. 1. In one example, each of the steps shown in FIG. 4 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below. In some examples, the steps of computer-implemented method 400 may be performed repeatedly. For example, the steps of computer-implemented method 400 may be performed once for each of the real-time outputs of a recognition model.

As illustrated in FIG. 4, at step 410 one or more of the systems described herein may receive information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user. For example, receiving module 104 may receive some or all of the information associated with real-time outputs 202, as described in connection with FIG. 2.

The systems described herein may receive and/or monitor various types of information that may indicate the uncertainty associated with a recognition model's real-time output. In one example, a recognition model may output an indication of the uncertainty associated with its outputs (e.g., a probability, certainty level, confidence score, etc.), and receiving module 104 may monitor this information in real time. In some examples, certain characteristics of a recognition model's inputs (e.g., a signal-to-noise ratio) may cause or influence the uncertainty associated with the recognition model's real-time outputs, and receiving module 104 may monitor these characteristics in real time. In at least one example, the disclosed systems may use one or more thresholds to discretize and/or interpret a recognition model's outputs. For example, user input module 112 may use threshold 304 to discretize and/or interpret probabilities 302. Thresholds may influence and/or indicate the levels of uncertainty associated with a recognition model's real-time outputs. For at least this reason, receiving module 104 may identify any thresholds used to discretize and/or interpret a recognition model's outputs.

At step 420, one or more of the systems described herein may determine, based on the information received at step 410, a level of uncertainty associated with the real-time output. For example, determination module 106 may determine uncertainty levels 204 based on the information received by receiving module 104, as shown in FIG. 2.

The systems described herein may determine the level of uncertainty associated with a recognition model's real-time output in a variety of ways. In some examples, a recognition model may output a measure of uncertainty, and the disclosed systems may determine the level of uncertainty associated with a real-time output from the real-time output itself. For example, when a real-time output includes a probability, certainty level, or confidence score, the disclosed systems may determine a level of uncertainty based on that probability, certainty level, or confidence score. In some examples, if a probability indicates the likelihood that a user behavior is occurring, the disclosed systems may generate, for that probability, an uncertainty level that is inversely proportional to the probability (i.e., high probabilities may be associated with low uncertainty levels, and low probabilities may be associated with high uncertainty levels). Similarly, if a certainty level or confidence score associated with a recognition model's real-time output indicates the likelihood that the real-time output is accurate, the disclosed systems may generate, for that real-time output, an uncertainty level that is inversely proportional to the certainty level or confidence score (i.e., high certainty levels or confidence scores may be associated with low uncertainty levels, and low certainty levels or confidence scores may be associated with high uncertainty levels).
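
These inverse relationships can be written down directly. The linear forms below are assumptions for illustration; the application does not commit to exact formulas for these cases.

```python
# Assumed inverse mappings (illustrative; exact formulas are not fixed by the text).

def uncertainty_from_probability(probability: float) -> float:
    """High probability of occurrence -> low uncertainty, and vice versa."""
    return 1.0 - probability

def uncertainty_from_confidence(confidence: float) -> float:
    """High certainty level or confidence score -> low uncertainty, and vice versa."""
    return 1.0 - confidence

print(f"{uncertainty_from_probability(0.9):.1f}")  # 0.1: likely an occurrence
print(f"{uncertainty_from_confidence(0.2):.1f}")   # 0.8: the output is likely unreliable
```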

In some examples, a probability may reflect the likelihood that a user behavior is or is not occurring, and the disclosed systems may use one or more threshold probabilities (e.g., threshold 304 in FIG. 3) to distinguish probabilities that will be interpreted as the user behavior occurring from probabilities that will be interpreted as the user behavior not occurring. In these examples, the disclosed systems may use the threshold to determine the level of uncertainty associated with a probability. For example, the disclosed systems may generate, for a probability, an uncertainty level that is inversely proportional to the distance between the probability and the threshold (i.e., the probabilities furthest from the threshold may be associated with low uncertainty levels, and the probabilities closest to the threshold may be associated with high uncertainty levels). In at least one example, the disclosed systems may determine an uncertainty level for a probability as shown in equation (1):

uncertainty = 1 − |probability − threshold| / max(threshold, 1 − threshold)    (1)

where uncertainty is an uncertainty level in the range between 0 (indicating low uncertainty) and 1 (indicating high uncertainty), where probability is a single probability, and where threshold is the threshold used to interpret the probability as a behavior occurrence or non-occurrence. FIG. 5 illustrates exemplary uncertainty levels 502, which may be derived using equation (1) and threshold 304 from FIG. 3. As can be seen, the uncertainty level associated with each probability may be inversely proportional to the distance between the probability and threshold 304. In this example, the probabilities associated with the highest uncertainty levels are those closest to threshold 304, and the probabilities associated with the lowest uncertainty levels are those furthest from threshold 304.
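
Plugging the threshold of 0.6 from FIG. 3 into equation (1) gives a feel for the mapping. The short computation below is a sketch; the normalized form used for equation (1) here is one consistent reading of the surrounding description.

```python
# Worked example of equation (1) with the 0.6 threshold from FIG. 3 (illustrative).

def uncertainty(probability: float, threshold: float = 0.6) -> float:
    return 1.0 - abs(probability - threshold) / max(threshold, 1.0 - threshold)

for p in (0.9, 0.1, 0.55, 0.6):
    print(f"p={p:.2f} -> uncertainty={uncertainty(p):.2f}")
# p=0.60 yields the maximum uncertainty of 1.00, while probabilities far from
# the threshold (e.g., 0.10) yield low values, consistent with FIG. 5.
```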

In some examples, certain characteristics of a recognition model's inputs (e.g., a signal-to-noise ratio) may cause or influence the uncertainty associated with the recognition model's real-time outputs. In these examples, the disclosed systems may generate or adjust the uncertainty level of a real-time output based at least in part on these characteristics and their contribution to the uncertainty.
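
One way such input characteristics could feed into an uncertainty level is to inflate it as signal quality degrades. The linear blend and the SNR endpoints below are assumptions for illustration, not formulas from the application.

```python
# Hypothetical adjustment of an uncertainty level by sensor signal quality
# (the linear blend and SNR endpoints are illustrative assumptions).

def adjust_for_snr(uncertainty: float, snr_db: float,
                   good_snr_db: float = 30.0, bad_snr_db: float = 0.0) -> float:
    """Raise the uncertainty level toward 1.0 as the signal-to-noise ratio worsens."""
    quality = (snr_db - bad_snr_db) / (good_snr_db - bad_snr_db)
    quality = min(max(quality, 0.0), 1.0)  # clamp to [0, 1]
    return uncertainty + (1.0 - uncertainty) * (1.0 - quality)

print(f"{adjust_for_snr(0.3, snr_db=25.0):.2f}")  # clean signal: stays near 0.3
print(f"{adjust_for_snr(0.3, snr_db=5.0):.2f}")   # noisy signal: pushed toward 1.0
```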

At step 430, one or more of the systems described herein may modulate at least one attribute of feedback based on the level of uncertainty determined at step 420. For example, modulation module 108 may modulate at least one attribute of feedback 206 based on uncertainty levels 204 associated with real-time outputs 202.

The systems described herein may use various forms of sensory feedback to indicate and/or convey the levels of uncertainty associated with a recognition model's real-time outputs and/or to prompt users to improve the levels of uncertainty associated with those outputs. In one example, the disclosed systems may use any suitable type or form of haptic feedback, such as vibrations, forces, pressures, tractions, textures, and/or temperatures, to convey information about uncertainty levels. In some examples, the disclosed systems may use any suitable type or form of non-visual feedback, such as audio feedback, to convey information about uncertainty levels. In at least one example, the disclosed systems may use a form of visual feedback to convey information about uncertainty levels.

The systems described herein may modulate various attributes of feedback to convey information about uncertainty levels. For example, the disclosed systems may modulate feedback amplitude, feedback frequency, feedback duration, feedback pattern, feedback spatialization, feedback location, feedback force, feedback effect, feedback intensity, variations or combinations of one or more of the same, or any other perceivable attribute of feedback.

The systems described herein may modulate an attribute of feedback based on the level of uncertainty associated with a recognition model's real-time output in a variety of ways. In some examples, the disclosed systems may modulate an attribute of feedback by setting, adjusting, regulating, or changing the attribute and/or the perceptibility of the attribute to substantially reflect or otherwise convey the level of uncertainty associated with the recognition model's real-time output and/or to reflect or otherwise convey changes in that level of uncertainty.

In some examples, the disclosed systems may use a secondary, tertiary, quaternary, or quinary feedback channel to convey uncertainty levels. For example, where a visual feedback channel is primarily used to provide feedback to a user, the disclosed systems may use an auditory feedback channel or a haptic feedback channel to convey uncertainty levels. In some examples, the disclosed systems may select feedback that is subtle and/or less distracting than other forms of primary feedback conveyed to the user via a primary feedback channel.

In some examples, the disclosed systems may associate one or more types or forms of feedback with one or more types or forms of model uncertainty. For example, the disclosed systems may associate a single type or form of feedback (e.g., a single type or form of vibration) with uncertainty about both the occurrence and the non-occurrence of a recognized user behavior. In these examples, the disclosed systems may modulate one or more attributes of the feedback to convey the level of uncertainty. Alternatively, the disclosed systems may associate a first type or form of feedback with uncertainty about the occurrence of a recognized behavior and a second type or form of feedback with uncertainty about the non-occurrence of the behavior. In these examples, the disclosed systems may modulate one or more attributes of the first type of feedback to convey levels of uncertainty about behavior occurrences and/or may modulate one or more attributes of the second type of feedback to convey levels of uncertainty about behavior non-occurrences.
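
A minimal sketch of this two-waveform variant follows; the waveform names and the occurrence test are hypothetical.

```python
# Hypothetical selection between two feedback types: one for uncertainty about a
# detected occurrence, one for uncertainty about a detected non-occurrence.

def select_feedback(probability: float, uncertainty: float,
                    threshold: float = 0.6) -> dict:
    if probability >= threshold:
        waveform = "waveform_a"  # uncertainty about a behavior occurrence
    else:
        waveform = "waveform_b"  # uncertainty about a behavior non-occurrence
    return {"waveform": waveform, "amplitude": uncertainty}

print(select_feedback(0.65, uncertainty=0.92))  # occurrence-side feedback
print(select_feedback(0.55, uncertainty=0.92))  # non-occurrence-side feedback
```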

In some examples, the disclosed systems may provide users with instructions on how to reduce the uncertainty associated with a recognition model's real-time outputs before presenting any feedback to the users. Where there are multiple methods by which a user may reduce an uncertainty level and/or multiple underlying reasons for the uncertainty, the disclosed systems may associate the instructions for each method with a different type of feedback so that users can quickly know the appropriate method to apply when the feedback is received.

In some examples, the disclosed systems may modulate one or more attributes of feedback to convey a method for reducing the level of uncertainty of a recognition model's real-time outputs. For example, if a user's hand must remain within the view of a head-mounted camera system to be tracked, the disclosed systems may modulate one or more attributes of haptic feedback presented via a wrist-worn device whenever the user's hand begins to drift out of the head-mounted camera system's view, to remind the user to return the hand to a more ideal position for tracking. In at least one example, the disclosed systems may modulate the location of the haptic feedback to indicate the direction in which the user should move the hand to return it to a more ideal position for tracking.
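
For this hand-tracking example, the direction cue might be computed from where the hand sits in the camera frame. The sketch below assumes normalized image coordinates and a four-actuator wristband, neither of which is specified by the application.

```python
# Hypothetical directional haptic cue for steering a hand back into a head-mounted
# camera's view (normalized image coordinates and actuator layout are assumptions).
from typing import Optional

def direction_cue(hand_x: float, hand_y: float, margin: float = 0.1) -> Optional[str]:
    """Return which wristband actuator to drive as the hand nears the frame edge."""
    if hand_x < margin:
        return "actuator_left"    # nudge the user to move the hand back toward center
    if hand_x > 1.0 - margin:
        return "actuator_right"
    if hand_y < margin:
        return "actuator_top"
    if hand_y > 1.0 - margin:
        return "actuator_bottom"
    return None                   # hand is well inside the view; no cue needed

print(direction_cue(0.05, 0.5))  # actuator_left
print(direction_cue(0.5, 0.5))   # None
```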

At step 440, one or more of the systems described herein may present the feedback to the user substantially contemporaneously with the real-time output of the recognition model. For example, presentation module 110 may present feedback 206 to the user substantially contemporaneously with one of real-time outputs 202.

The systems described herein may present feedback to users contemporaneously with the real-time outputs of a recognition model in a variety of ways. In some examples, the disclosed systems may present feedback to a user substantially contemporaneously with the performance of a user input operation made in response to a recognition model's real-time output. If a recognition model's real-time output indicates that no user input operation will be performed, the disclosed systems may present feedback to the user substantially contemporaneously with the determination that the real-time output indicates that no user input operation will be performed.

FIGS. 6-11 illustrate various exemplary ways in which the disclosed systems may modulate feedback amplitude to convey uncertainty levels 502, which may be associated with probabilities indicating the occurrence and/or non-occurrence of a user behavior. These examples of amplitude modulation and probabilities are provided for ease of understanding. It should be noted that the disclosed systems may modulate any of the feedback attributes described above in the same or similar ways to convey the levels of uncertainty associated with any type or form of recognition-model output.

FIG. 6 illustrates an exemplary proportional relationship between exemplary amplitude levels 602 and uncertainty levels 502. In this example, the disclosed systems may range from outputting low-amplitude feedback in response to probabilities associated with low uncertainty levels (e.g., the probabilities furthest from threshold 304) to outputting high-amplitude feedback in response to probabilities associated with high uncertainty levels (e.g., the probabilities closest to threshold 304). In this example, users may receive the most discernible feedback when the disclosed systems are encountering their most uncertain recognition-model outputs.

FIG. 7 illustrates an exemplary relationship between exemplary amplitude levels 702 and uncertainty levels 502. In this example, the disclosed systems may modulate amplitude levels 702 differently depending on whether a probability will be interpreted as the occurrence or the non-occurrence of a user behavior. When a probability will be interpreted as the behavior not occurring, the disclosed systems may range from outputting low-amplitude feedback in response to probabilities associated with low uncertainty levels (e.g., the probabilities furthest from threshold 304) to outputting high-amplitude feedback in response to probabilities associated with high uncertainty levels (e.g., the probabilities closest to threshold 304). On the other hand, when a probability will be interpreted as the behavior occurring, the disclosed systems may output high-amplitude feedback regardless of the uncertainty level associated with the probability. In this example, users may receive the most discernible feedback when they are most likely attempting to perform the user behavior.

FIGS. 8-11 illustrate various exemplary ways in which the disclosed systems may modulate feedback amplitude based on whether an encountered uncertainty level is above or below uncertainty threshold 802. FIG. 9 illustrates an exemplary relationship between exemplary amplitude levels 902 and uncertainty levels 502. In this example, the disclosed systems may modulate amplitude levels 902 differently depending on whether a probability is associated with an uncertainty level above or below threshold 802. When an encountered probability is associated with an uncertainty level below threshold 802, the disclosed systems may output no feedback. On the other hand, when an encountered probability is associated with an uncertainty level above threshold 802, the disclosed systems may output high-amplitude feedback regardless of the uncertainty level associated with the probability. In this example, users may receive discernible feedback when the disclosed systems are encountering their most uncertain recognition-model outputs.

FIG. 10 illustrates an exemplary relationship between exemplary amplitude levels 1002 and uncertainty levels 502. In this example, the disclosed systems may modulate amplitude levels 1002 differently depending on whether a probability is above threshold 304 and/or whether the probability is associated with an uncertainty level above or below threshold 802. When an encountered probability is below threshold 304 and associated with an uncertainty level below threshold 802, the disclosed systems may output no feedback. On the other hand, when a probability above threshold 304 is encountered, or when a probability associated with an uncertainty level above threshold 802 is encountered, the disclosed systems may range from outputting lower-amplitude feedback in response to lower probabilities to outputting higher-amplitude feedback in response to higher probabilities. In this example, users may receive the most discernible feedback when they are most likely attempting to perform the user behavior.

FIG. 11 illustrates an exemplary proportional relationship between exemplary amplitude levels 1102 and uncertainty levels 502. In this example, the disclosed systems may modulate amplitude levels 1102 differently depending on whether a probability is associated with an uncertainty level above or below threshold 802. When an encountered probability is associated with an uncertainty level below threshold 802, the disclosed systems may output no feedback. On the other hand, when an encountered probability is associated with an uncertainty level above threshold 802, the disclosed systems may range from outputting low-amplitude feedback in response to probabilities associated with lower uncertainty levels (e.g., probabilities further from threshold 304) to outputting high-amplitude feedback in response to probabilities associated with higher uncertainty levels (e.g., probabilities closest to threshold 304). In this example, users may receive the most discernible feedback when the disclosed systems are encountering their most uncertain recognition-model outputs, but users may be undisturbed by feedback when the disclosed systems are encountering their most certain recognition-model outputs.
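
The amplitude strategies of FIGS. 6, 7, 9, 10, and 11 can be summarized as a family of mapping functions. The sketch below paraphrases each described relationship in code; since the text does not give exact curve shapes, simple linear forms are assumed, and the 0.8 uncertainty threshold stands in for threshold 802.

```python
# Assumed linear realizations of the amplitude mappings described for FIGS. 6-11.

def fig6_amplitude(uncertainty: float) -> float:
    # FIG. 6: amplitude proportional to the uncertainty level.
    return uncertainty

def fig7_amplitude(uncertainty: float, occurred: bool) -> float:
    # FIG. 7: full amplitude whenever the probability is interpreted as an
    # occurrence; otherwise amplitude tracks the uncertainty level.
    return 1.0 if occurred else uncertainty

def fig9_amplitude(uncertainty: float, u_threshold: float = 0.8) -> float:
    # FIG. 9: silent below uncertainty threshold 802, full amplitude above it.
    return 1.0 if uncertainty >= u_threshold else 0.0

def fig10_amplitude(probability: float, uncertainty: float,
                    p_threshold: float = 0.6, u_threshold: float = 0.8) -> float:
    # FIG. 10: silent when the probability is low and the output is certain;
    # otherwise amplitude scales with the probability.
    if probability < p_threshold and uncertainty < u_threshold:
        return 0.0
    return probability

def fig11_amplitude(uncertainty: float, u_threshold: float = 0.8) -> float:
    # FIG. 11: silent below uncertainty threshold 802; above it, amplitude
    # scales with the uncertainty level.
    return uncertainty if uncertainty >= u_threshold else 0.0

for u in (0.2, 0.85):
    print(f"u={u}: fig6={fig6_amplitude(u)} fig9={fig9_amplitude(u)} fig11={fig11_amplitude(u)}")
```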

FIG. 12 is a flowchart of an exemplary computer-implemented method 1200 for communicating probability-based uncertainty levels to a user while reacting to user behaviors. The steps shown in FIG. 12 may be performed by any suitable computer-executable code and/or computing system, including the system illustrated in FIG. 1. In one example, each of the steps shown in FIG. 12 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which are provided in greater detail below. Some of the steps shown in FIG. 12 are similar to the steps shown in FIG. 4; accordingly, the discussion of the steps shown in FIG. 4 may also apply to the steps shown in FIG. 12.

As illustrated in FIG. 12, at step 1210, one or more of the systems described herein may receive, from a recognition model, a probability that a user behavior is occurring. For example, receiving module 104 may receive one of probabilities 306-312 illustrated in FIG. 3. At step 1220, one or more of the systems described herein may next determine an uncertainty level based on the probability received at step 1210. In one example, determining module 106 may use equation (1), as described above, to determine the uncertainty level of the probability received at step 1210. For example, determining module 106 may determine relatively lower uncertainty levels for probabilities 306 and 308 and/or relatively higher uncertainty levels for probabilities 310 and 312.

At step 1230, one or more of the systems described herein may then modulate at least one attribute of feedback based on the uncertainty level determined at step 1220. For example, modulating module 108 may modulate at least one attribute of haptic feedback to convey the uncertainty level determined at step 1220 (e.g., lower-amplitude vibrations may convey the relatively lower uncertainty levels of probabilities 306 and 308, while higher-amplitude vibrations may convey the relatively higher uncertainty levels of probabilities 310 and 312). At step 1240, one or more of the systems described herein may determine whether the probability received at step 1210 is above a predetermined threshold. For example, determining module 106 may determine whether the probability received at step 1210 is above or below threshold 304. If the probability received at step 1210 is determined to be above the threshold (e.g., as may occur with probabilities 306 and 310), execution of method 1200 may continue at step 1250, where the disclosed systems may react as if the behavior was detected by performing a user-input operation associated with the behavior. On the other hand, if the probability received at step 1210 is determined to be below the threshold (e.g., as may occur with probabilities 308 and 312), execution of method 1200 may continue at step 1260, where the disclosed systems may react as if the associated behavior was not detected by refraining from performing the user-input operation associated with the behavior. Finally, at step 1270, one or more of the systems described herein may present the feedback modulated at step 1230 to the user substantially simultaneously with the performance of step 1250 or step 1260.
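As a rough, hedged illustration of how steps 1210 through 1270 might fit together, the following sketch wires an assumed uncertainty measure into the control flow of method 1200. The print statements and the threshold value are placeholders, not the disclosed modules 104, 106, and 108 or the actual equation (1).

```python
# Illustrative sketch of the method-1200 flow; all names and formulas
# here are assumptions, not the disclosed modules or equation (1).

def method_1200(probability: float, tau: float = 0.5) -> None:
    # Step 1210: a probability received from the recognition model (argument).
    # Step 1220: an assumed uncertainty measure that peaks at the threshold tau.
    u = 1.0 - abs(probability - tau) / max(tau, 1.0 - tau)
    # Step 1230: modulate a feedback attribute (here, vibration amplitude).
    amplitude = u
    # Steps 1240-1260: react as if the behavior was, or was not, detected.
    if probability > tau:
        print("step 1250: performing the associated user-input operation")
    else:
        print("step 1260: refraining from the user-input operation")
    # Step 1270: present the modulated feedback substantially simultaneously.
    print(f"step 1270: vibrating at amplitude {amplitude:.2f}")

method_1200(0.55)  # near the threshold: acts, but with a strong uncertainty cue
```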

As described above, the disclosed systems may use modulated haptic and/or other forms of user feedback to convey the uncertainty levels of potentially variable predictive models, particularly models that predict user positions, controller positions, and/or user inputs. In some examples, a pinch-detection model may output a value indicating the probability that a user is performing a pinch gesture. The disclosed systems may react as if the user is performing a pinch gesture when the outputted probability is in an upper range and/or may react as if the user is not performing a pinch gesture when the outputted probability is in a lower range. When the outputted probability is in a middle range, the disclosed systems may be less certain or uncertain as to whether the user is performing a pinch gesture. In at least these uncertain situations, the disclosed systems may present the user with haptic feedback that has been modulated to indicate the systems' uncertainty level. In another example, a hand-tracking model may estimate the positions of a user's hands and may output values indicating the certainty or uncertainty of any estimated hand position. As the user moves their hands, the disclosed systems may convey the certainty or uncertainty of the estimated hand positions to the user via modulated haptic feedback. As explained above, by providing transparency about model uncertainty, the disclosed systems may reduce user frustration and/or may enable users to adapt their behaviors in ways that lead to improved model predictions and user experiences.
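A minimal sketch of the pinch example follows. The 0.2/0.8 range boundaries and the uncertainty formula are illustrative assumptions; the disclosure does not specify numeric ranges.

```python
# Illustrative pinch-detection reaction logic; the 0.2/0.8 boundaries
# and the uncertainty formula are assumed values for this sketch only.

LOWER, UPPER, TAU = 0.2, 0.8, 0.5

def react_to_pinch(p: float) -> str:
    if p >= UPPER:
        return "upper range: treat as pinching"
    if p <= LOWER:
        return "lower range: treat as not pinching"
    # Middle range: uncertain; present haptics modulated to convey doubt.
    u = 1.0 - abs(p - TAU) / max(TAU, 1.0 - TAU)
    return f"middle range: uncertain, haptic cue at amplitude {u:.2f}"

for p in (0.05, 0.55, 0.9):
    print(react_to_pinch(p))
```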

Example Embodiments

Example 1: A computer-implemented method for communicating model uncertainty may include (1) receiving information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user, (2) determining, based on the information, an uncertainty level associated with the real-time output, (3) modulating at least one attribute of feedback based on the uncertainty level, and (4) presenting the feedback to the user substantially simultaneously with the real-time output of the recognition model.

Example 2: The computer-implemented method of Example 1, where (1) the information associated with the real-time output of the recognition model includes a probability of the user performing the behavior, (2) the computer-implemented method further includes at least one of (a) performing a user-input operation when the probability of the user performing the behavior is above a predetermined threshold and/or (b) refraining from performing the user-input operation when the probability of the user performing the behavior is below the predetermined threshold, and (3) the uncertainty level associated with the real-time output may be determined based on a distance between the probability and the predetermined threshold. In some examples, the uncertainty level may be inversely proportional to the distance.

Example 3: The computer-implemented method of any of Examples 1-2, where (1) the information associated with the real-time output of the recognition model includes a probability of the user performing the behavior and (2) the attribute of the feedback may be modulated to have a perceptibility level proportional to the probability of the user performing the behavior.

Example 4: The computer-implemented method of any of Examples 1-3, where the attribute of the feedback may be modulated to have a perceptibility level proportional to the uncertainty level associated with the real-time output.

Example 5: The computer-implemented method of any of Examples 1-4, where (1) the recognition model includes a pinch-recognition model adapted to output probabilities of the user performing a pinch gesture, (2) the information includes a probability of the user performing the pinch gesture, (3) the computer-implemented method further includes performing a user-input operation when the probability of the user performing the pinch gesture is above a predetermined threshold, and (4) when the probability of the user performing the pinch gesture is above the predetermined threshold, the feedback may be presented to the user while the user-input operation is being performed.

Example 6: The computer-implemented method of any of Examples 1-5, where (1) the recognition model includes a pinch-recognition model adapted to output probabilities of the user performing a pinch gesture, (2) the information includes a probability of the user performing the pinch gesture, (3) the computer-implemented method further includes refraining from performing a user-input operation when the probability of the user performing the pinch gesture is below a predetermined threshold, and (4) the feedback may be presented to the user while determining that the probability of the user performing the pinch gesture is below the predetermined threshold.

Example 7: The computer-implemented method of any of Examples 1-6, where (1) the recognition model may include a hand-tracking model adapted to output (a) position or orientation information for one or more portions of the user's hands and (b) a confidence level of the position or orientation information, and (2) the uncertainty level associated with the real-time output may be determined based on the confidence level. In some examples, the uncertainty level may be inversely proportional to the confidence level.
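As a hedged illustration of Example 7, the snippet below inverts a normalized confidence score into an uncertainty level by simple complementation. The normalization to [0, 1] and the 1 - confidence mapping are assumptions, since the disclosure states only that the two may be inversely proportional.

```python
# Illustrative only: inverting a hand-tracking confidence score into an
# uncertainty level; the normalization and mapping are assumed.

def hand_tracking_uncertainty(confidence: float) -> float:
    c = max(0.0, min(1.0, confidence))
    return 1.0 - c  # low confidence -> high uncertainty

print(hand_tracking_uncertainty(0.35))  # -> 0.65, e.g., a stronger vibration
```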

Example 8: The computer-implemented method of any of Examples 1-7, where (1) the feedback may be haptic feedback and (2) at least one attribute of the haptic feedback may be based on the uncertainty level associated with the real-time output.

Example 9: The computer-implemented method of any of Examples 1-8, where (1) the feedback may be a vibration and (2) at least one attribute of the vibration may be based on the uncertainty level associated with the real-time output.

Example 10: The computer-implemented method of any of Examples 1-9, where modulating the attribute of the feedback based on the uncertainty level includes modulating one or more of (1) an amplitude of the feedback to convey the uncertainty level to the user, (2) a frequency of the feedback to convey the uncertainty level to the user, (3) a duration of the feedback to convey the uncertainty level to the user, (4) a pattern of the feedback to convey the uncertainty level to the user, and/or (5) a spatialization of the feedback to convey the uncertainty level to the user.
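For concreteness, one way such multi-attribute modulation might look in code is sketched below; the parameter ranges and linear mappings are illustrative assumptions, not values from the disclosure.

```python
# Illustrative mapping from an uncertainty level in [0, 1] to several
# vibration attributes; all ranges are assumed values.

from dataclasses import dataclass

@dataclass
class VibrationParams:
    amplitude: float      # 0..1 drive strength
    frequency_hz: float   # tactile band
    duration_ms: float
    pattern: str          # e.g., steady vs. pulsed
    pan: float            # -1 (left) .. +1 (right) spatialization

def modulate_attributes(u: float, pan: float = 0.0) -> VibrationParams:
    u = max(0.0, min(1.0, u))
    return VibrationParams(
        amplitude=u,                      # stronger when less certain
        frequency_hz=80.0 + 120.0 * u,    # 80-200 Hz sweep
        duration_ms=50.0 + 150.0 * u,     # longer buzz when uncertain
        pattern="pulsed" if u > 0.5 else "steady",
        pan=pan,                          # where the cue is rendered
    )
```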

Example 11: The computer-implemented method of any of Examples 1-10, where the feedback indicates a manner of reducing the uncertainty level of the real-time output.

Example 12: The computer-implemented method of any of Examples 1-11, further including (1) receiving additional information associated with an additional real-time output of the recognition model, (2) determining, based on the additional information, an additional uncertainty level associated with the additional real-time output, and (3) modulating, based on the additional uncertainty level, the feedback being presented to the user.

Example 13: A system may include (1) at least one physical processor and (2) physical memory including computer-executable instructions that, when executed by the physical processor, cause the physical processor to (a) receive information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user, (b) determine, based on the information, an uncertainty level associated with the real-time output, (c) modulate at least one attribute of feedback based on the uncertainty level, and (d) present the feedback to the user substantially simultaneously with the real-time output of the recognition model.

Example 14: The system of Example 13, where (1) the information associated with the real-time output of the recognition model includes a probability of the user performing the behavior, (2) the computer-executable instructions, when executed by the physical processor, further cause the physical processor to (a) perform a user-input operation when the probability of the user performing the behavior is above a predetermined threshold and/or (b) refrain from performing the user-input operation when the probability of the user performing the behavior is below the predetermined threshold, and (3) the uncertainty level associated with the real-time output may be determined based on a distance between the probability and the predetermined threshold. In some examples, the uncertainty level may be inversely proportional to the distance.

Example 15: The system of any of Examples 13-14, where (1) the information associated with the real-time output of the recognition model includes a probability of the user performing the behavior and (2) the attribute of the feedback may be modulated to have a perceptibility level proportional to the probability of the user performing the behavior.

Example 16: The system of any of Examples 13-15, where the attribute of the feedback may be modulated to have a perceptibility level proportional to the uncertainty level associated with the real-time output.

Example 17: The system of any of Examples 13-16, where (1) the recognition model includes a pinch-recognition model adapted to output probabilities of the user performing a pinch gesture, (2) the information includes a probability of the user performing the pinch gesture, (3) the computer-executable instructions, when executed by the physical processor, further cause the physical processor to perform a user-input operation when the probability of the user performing the pinch gesture is above a predetermined threshold, and (4) when the probability of the user performing the pinch gesture is above the predetermined threshold, the feedback may be presented to the user while the user-input operation is being performed.

Example 18: The system of any of Examples 13-17, where (1) the recognition model includes a pinch-recognition model adapted to output probabilities of the user performing a pinch gesture, (2) the information includes a probability of the user performing the pinch gesture, (3) the computer-executable instructions, when executed by the physical processor, further cause the physical processor to refrain from performing a user-input operation when the probability of the user performing the pinch gesture is below a predetermined threshold, and (4) the feedback may be presented to the user while determining that the probability of the user performing the pinch gesture is below the predetermined threshold.

Example 19: The system of any of Examples 13-18, where (1) the recognition model may include a hand-tracking model adapted to output (a) position or orientation information for one or more portions of the user's hands and (b) a confidence level of the position or orientation information, and (2) the uncertainty level associated with the real-time output may be determined based on the confidence level. In some examples, the uncertainty level may be inversely proportional to the confidence level.

Example 20: A non-transitory computer-readable medium may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to (1) receive information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user, (2) determine, based on the information, an uncertainty level associated with the real-time output, (3) modulate at least one attribute of feedback based on the uncertainty level, and (4) present the feedback to the user substantially simultaneously with the real-time output of the recognition model.

Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user and may include, for example, virtual reality, augmented reality, mixed reality, hybrid reality, or some combination and/or derivative thereof. Artificial reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect for the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.

Artificial reality systems may be implemented in a variety of different form factors and configurations. Some artificial reality systems may be designed to work without near-eye displays (NEDs). Other artificial reality systems may include an NED that also provides visibility into the real world (such as augmented reality system 1300 in FIG. 13) or that visually immerses a user in an artificial reality (such as virtual reality system 1400 in FIG. 14). While some artificial reality devices may be self-contained systems, other artificial reality devices may communicate and/or coordinate with external devices to provide an artificial reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.

Turning to FIG. 13, augmented reality system 1300 may include an eyewear device 1302 with a frame 1310 configured to hold a left display device 1315(A) and a right display device 1315(B) in front of a user's eyes. Display devices 1315(A) and 1315(B) may act together or independently to present an image or series of images to a user. While augmented reality system 1300 includes two displays, embodiments of this disclosure may be implemented in augmented reality systems with a single NED or more than two NEDs.

In some embodiments, augmented reality system 1300 may include one or more sensors, such as sensor 1340. Sensor 1340 may generate measurement signals in response to motion of augmented reality system 1300 and may be located on substantially any portion of frame 1310. Sensor 1340 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured-light emitter and/or detector, or any combination thereof. In some embodiments, augmented reality system 1300 may or may not include sensor 1340 or may include more than one sensor. In embodiments in which sensor 1340 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 1340. Examples of sensor 1340 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.

In some examples, augmented reality system 1300 may also include a microphone array with a plurality of acoustic transducers 1320(A)-1320(J), referred to collectively as acoustic transducers 1320. Acoustic transducers 1320 may represent transducers that detect air-pressure variations induced by sound waves. Each acoustic transducer 1320 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 13 may include, for example, ten acoustic transducers: 1320(A) and 1320(B), which may be designed to be placed inside a corresponding ear of the user; acoustic transducers 1320(C), 1320(D), 1320(E), 1320(F), 1320(G), and 1320(H), which may be positioned at various locations on frame 1310; and/or acoustic transducers 1320(I) and 1320(J), which may be positioned on a corresponding neckband 1305.

In some embodiments, one or more of acoustic transducers 1320(A)-1320(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 1320(A) and/or 1320(B) may be earbuds or any other suitable type of headphone or speaker.

The configuration of acoustic transducers 1320 of the microphone array may vary. While augmented reality system 1300 is shown in FIG. 13 as having ten acoustic transducers 1320, the number of acoustic transducers 1320 may be greater or less than ten. In some embodiments, using a higher number of acoustic transducers 1320 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 1320 may decrease the computing power required by an associated controller 1350 to process the collected audio information. In addition, the position of each acoustic transducer 1320 of the microphone array may vary. For example, the position of an acoustic transducer 1320 may include a defined position on the user, a defined coordinate on frame 1310, an orientation associated with each acoustic transducer 1320, or some combination thereof.

Acoustic transducers 1320(A) and 1320(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Alternatively, there may be additional acoustic transducers 1320 on or surrounding the ear in addition to acoustic transducers 1320 inside the ear canal. Having an acoustic transducer 1320 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 1320 on either side of a user's head (e.g., as binaural microphones), augmented reality system 1300 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 1320(A) and 1320(B) may be connected to augmented reality system 1300 via a wired connection 1330, and in other embodiments acoustic transducers 1320(A) and 1320(B) may be connected to augmented reality system 1300 via a wireless connection (e.g., a Bluetooth connection). In still other embodiments, acoustic transducers 1320(A) and 1320(B) may not be used at all in conjunction with augmented reality system 1300.

Acoustic transducers 1320 on frame 1310 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 1315(A) and 1315(B), or some combination thereof. Acoustic transducers 1320 may be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing augmented reality system 1300. In some embodiments, an optimization process may be performed during manufacturing of augmented reality system 1300 to determine the relative positioning of each acoustic transducer 1320 in the microphone array.

In some examples, augmented reality system 1300 may include or be connected to an external device (e.g., a paired device), such as neckband 1305. Neckband 1305 generally represents any type or form of paired device. Thus, the following discussion of neckband 1305 may also apply to various other paired devices, such as charging cases, smart watches, smartphones, wristbands, other wearable devices, handheld controllers, tablet computers, laptop computers, other external compute devices, etc.

As shown, neckband 1305 may be coupled to eyewear device 1302 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 1302 and neckband 1305 may operate independently without any wired or wireless connection between them. While FIG. 13 illustrates the components of eyewear device 1302 and neckband 1305 in example locations on eyewear device 1302 and neckband 1305, the components may be located elsewhere and/or distributed differently on eyewear device 1302 and/or neckband 1305. In some embodiments, the components of eyewear device 1302 and neckband 1305 may be located on one or more additional peripheral devices paired with eyewear device 1302, neckband 1305, or some combination thereof.

Pairing external devices, such as neckband 1305, with augmented reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented reality system 1300 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 1305 may allow components that would otherwise be included on an eyewear device to be included in neckband 1305 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 1305 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 1305 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 1305 may be less invasive to a user than weight carried in eyewear device 1302, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavier stand-alone eyewear device, thereby enabling users to more fully incorporate artificial reality environments into their day-to-day activities.

Neckband 1305 may be communicatively coupled with eyewear device 1302 and/or with other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented reality system 1300. In the embodiment of FIG. 13, neckband 1305 may include two acoustic transducers (e.g., 1320(I) and 1320(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 1305 may also include a controller 1325 and a power source 1335.

Acoustic transducers 1320(I) and 1320(J) of neckband 1305 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 13, acoustic transducers 1320(I) and 1320(J) may be positioned on neckband 1305, thereby increasing the distance between the neckband acoustic transducers 1320(I) and 1320(J) and the other acoustic transducers 1320 positioned on eyewear device 1302. In some cases, increasing the distance between acoustic transducers 1320 of the microphone array may improve the accuracy of beamforming performed via the microphone array. For example, if a sound is detected by acoustic transducers 1320(C) and 1320(D) and the distance between acoustic transducers 1320(C) and 1320(D) is greater than, e.g., the distance between acoustic transducers 1320(D) and 1320(E), the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 1320(D) and 1320(E).

Controller 1325 of neckband 1305 may process information generated by the sensors on neckband 1305 and/or augmented reality system 1300. For example, controller 1325 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 1325 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 1325 may populate an audio data set with the information. In embodiments in which augmented reality system 1300 includes an inertial measurement unit, controller 1325 may compute all inertial and spatial calculations from the IMU located on eyewear device 1302. A connector may convey information between augmented reality system 1300 and neckband 1305 and between augmented reality system 1300 and controller 1325. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented reality system 1300 to neckband 1305 may reduce weight and heat in eyewear device 1302, making it more comfortable for the user.
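The disclosure does not specify how controller 1325 performs DOA estimation. One conventional approach, given here only as an assumed illustration, is to estimate the time difference of arrival (TDOA) between a pair of transducers with a GCC-PHAT cross-correlation and convert it to a far-field angle; wider transducer spacing yields a larger TDOA for the same angle, which is consistent with the accuracy benefit described above.

```python
# Illustrative GCC-PHAT sketch, not the disclosed method.
import numpy as np

def gcc_phat_tdoa(sig: np.ndarray, ref: np.ndarray, fs: float) -> float:
    """Estimate the time difference of arrival (seconds) of a sound
    between two microphone signals using GCC-PHAT."""
    n = sig.size + ref.size
    spec = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    spec /= np.abs(spec) + 1e-12            # PHAT weighting
    cc = np.fft.irfft(spec, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs

def doa_from_tdoa(tdoa: float, spacing_m: float, c: float = 343.0) -> float:
    """Far-field angle (radians) for a two-microphone pair; wider spacing
    makes the same angle produce a larger, easier-to-resolve TDOA."""
    return float(np.arcsin(np.clip(c * tdoa / spacing_m, -1.0, 1.0)))
```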

Power source 1335 in neckband 1305 may provide power to eyewear device 1302 and/or to neckband 1305. Power source 1335 may include, without limitation, lithium-ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 1335 may be a wired power source. Including power source 1335 on neckband 1305 instead of on eyewear device 1302 may help better distribute the weight and heat generated by power source 1335.

As noted, some artificial reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual reality system 1400 in FIG. 14, that mostly or completely covers a user's field of view. Virtual reality system 1400 may include a front rigid body 1402 and a band 1404 shaped to fit around a user's head. Virtual reality system 1400 may also include output audio transducers 1406(A) and 1406(B). Furthermore, while not shown in FIG. 14, front rigid body 1402 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an artificial reality experience.

Artificial reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented reality system 1300 and/or virtual reality system 1400 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, microLED displays, organic LED (OLED) displays, digital light projection (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including collimating light (e.g., making an object appear at a greater distance than its physical distance), magnifying light (e.g., making an object appear larger than its actual size), and/or relaying light (e.g., to the viewer's eyes). These optical subsystems may be used in a non-pupil-forming architecture (such as a single-lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).

In addition to or instead of using display screens, some of the artificial reality systems described herein may include one or more projection systems. For example, display devices in augmented reality system 1300 and/or virtual reality system 1400 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.

The artificial reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented reality system 1300 and/or virtual reality system 1400 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured-light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.

The artificial reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.

In some embodiments, the artificial reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floor mats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independently of other artificial reality devices, within other artificial reality devices, and/or in conjunction with other artificial reality devices.

By providing haptic sensations, audible content, and/or visual content, artificial reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial reality experience in one or more of these contexts and environments and/or in other contexts and environments.

Some augmented reality systems may map a user's and/or device's environment using techniques referred to as "simultaneous location and mapping" (SLAM). SLAM mapping and location-identifying techniques may involve a variety of hardware and software tools that can create or update a map of an environment while simultaneously keeping track of a user's location within the mapped environment. SLAM may use many different types of sensors to create a map and determine a user's position within the map.

SLAM techniques may, for example, implement optical sensors to determine a user's location. Radios, including WiFi, Bluetooth, global positioning system (GPS), cellular, or other communication devices, may also be used to determine a user's location relative to a radio transceiver or group of transceivers (e.g., a WiFi router or group of GPS satellites). Acoustic sensors, such as microphone arrays or 2D or 3D sonar sensors, may also be used to determine a user's location within an environment. Augmented reality and virtual reality devices (such as systems 1300 and 1400 of FIGS. 13 and 14, respectively) may incorporate any or all of these types of sensors to perform SLAM operations, such as creating and continually updating a map of a user's current environment. In at least some of the embodiments described herein, SLAM data generated by these sensors may be referred to as "environmental data" and may indicate a user's current environment. This data may be stored in a local or remote data store (e.g., a cloud data store) and may be provided to a user's AR/VR device on demand.

As noted, artificial reality systems 1300 and 1400 may be used with a variety of other types of devices to provide a more compelling artificial reality experience. These devices may be haptic interfaces with transducers that provide haptic feedback and/or that collect haptic information about a user's interaction with an environment. The artificial reality systems disclosed herein may include various types of haptic interfaces that detect or convey various types of haptic information, including tactile feedback (e.g., feedback that a user detects via nerves in the skin, which may also be referred to as cutaneous feedback) and/or kinesthetic feedback (e.g., feedback that a user detects via receptors located in muscles, joints, and/or tendons).

Haptic feedback may be provided by interfaces positioned within a user's environment (e.g., chairs, tables, floors, etc.) and/or interfaces on articles that may be worn or carried by a user (e.g., gloves, wristbands, etc.). As an example, FIG. 15 illustrates a vibrotactile system 1500 in the form of a wearable glove (haptic device 1510) and wristband (haptic device 1520). Haptic device 1510 and haptic device 1520 are shown as examples of wearable devices that include a flexible, wearable textile material 1530 that is shaped and configured for positioning against a user's hand and wrist, respectively. This disclosure also includes vibrotactile systems that may be shaped and configured for positioning against other human body parts, such as a finger, an arm, a head, a torso, a foot, or a leg. By way of example and not limitation, vibrotactile systems according to various embodiments of the present disclosure may also be in the form of a glove, a headband, an armband, a sleeve, a head covering, a sock, a shirt, or pants, among other possibilities. In some examples, the term "textile" may include any flexible, wearable material, including woven fabric, non-woven fabric, leather, cloth, a flexible polymer material, composite materials, etc.

One or more vibrotactile devices 1540 may be positioned at least partially within one or more corresponding recesses formed in textile material 1530 of vibrotactile system 1500. Vibrotactile devices 1540 may be positioned in locations that provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 1500. For example, vibrotactile devices 1540 may be positioned against the user's fingers, thumb, or wrist, as shown in FIG. 15. In some examples, vibrotactile devices 1540 may be sufficiently flexible to conform to or bend with the user's corresponding body parts.

A power source 1550 (e.g., a battery) for applying a voltage to vibrotactile devices 1540 to activate them may be electrically coupled to vibrotactile devices 1540, such as via conductive wiring 1552. In some examples, each of vibrotactile devices 1540 may be independently electrically coupled to power source 1550 for individual activation. In some embodiments, a processor 1560 may be operatively coupled to power source 1550 and configured (e.g., programmed) to control activation of vibrotactile devices 1540.
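A minimal sketch of how a processor such as processor 1560 might individually activate vibrotactile devices follows; the channel-driver class and numbering are hypothetical stand-ins for whatever switching hardware couples power source 1550 to vibrotactile devices 1540.

```python
# Illustrative sketch: independently activating vibrotactile devices.
# HapticChannel is a hypothetical stand-in for the voltage-switching
# hardware between power source 1550 and devices 1540.

class HapticChannel:
    def __init__(self, channel_id: int):
        self.channel_id = channel_id
        self.active = False

    def set_voltage(self, level: float) -> None:
        self.active = level > 0.0
        print(f"channel {self.channel_id}: drive level {level:.2f}")

channels = [HapticChannel(i) for i in range(4)]  # e.g., fingers plus wrist

def activate(pattern: dict) -> None:
    """Drive each listed channel at its own level; zero the rest."""
    for ch in channels:
        ch.set_voltage(pattern.get(ch.channel_id, 0.0))

activate({0: 0.8, 3: 0.4})  # strong buzz on channel 0, weak on channel 3
```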

Vibrotactile system 1500 may be implemented in a variety of ways. In some examples, vibrotactile system 1500 may be a stand-alone system with integral subsystems and components for operation independent of other devices and systems. As another example, vibrotactile system 1500 may be configured for interaction with another device or system 1570. For example, in some examples, vibrotactile system 1500 may include a communications interface 1580 for receiving signals from and/or sending signals to the other device or system 1570. The other device or system 1570 may be a mobile device, a gaming console, an artificial reality (e.g., virtual reality, augmented reality, mixed reality) device, a personal computer, a tablet computer, a network device (e.g., a modem, a router, etc.), a handheld controller, etc. Communications interface 1580 may enable communications between vibrotactile system 1500 and the other device or system 1570 via a wireless (e.g., Wi-Fi, Bluetooth, cellular, radio, etc.) link or a wired link. If present, communications interface 1580 may be in communication with processor 1560, such as to provide a signal to processor 1560 to activate or deactivate one or more of vibrotactile devices 1540.

Vibrotactile system 1500 may optionally include other subsystems and components, such as touch-sensitive pads 1590, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 1540 may be configured to be activated for a variety of different reasons, such as in response to a user's interaction with user interface elements, a signal from the motion or position sensors, a signal from touch-sensitive pads 1590, a signal from the pressure sensors, a signal from the other device or system 1570, etc.

Although power source 1550, processor 1560, and communications interface 1580 are illustrated in FIG. 15 as positioned in haptic device 1520, the present disclosure is not so limited. For example, one or more of power source 1550, processor 1560, or communications interface 1580 may be positioned within haptic device 1510 or within another wearable textile.

Haptic wearables, such as those shown in and described in connection with FIG. 15, may be implemented in a variety of types of artificial reality systems and environments. FIG. 16 shows an example artificial reality environment 1600 that includes one head-mounted virtual reality display and two haptic devices (i.e., gloves); in other embodiments, any number and/or combination of these and other components may be included in an artificial reality system. For example, in some embodiments there may be multiple head-mounted displays, each having an associated haptic device, with each head-mounted display and each haptic device communicating with the same console, portable computing device, or other computing system.

Head-mounted display 1602 generally represents any type or form of virtual-reality system, such as virtual-reality system 1400 in FIG. 14. Haptic device 1604 generally represents any type or form of wearable device, worn by a user of an artificial reality system, that provides haptic feedback to the user to give the user the perception that he or she is physically engaging with a virtual object. In some embodiments, haptic device 1604 may provide haptic feedback by applying vibration, motion, and/or force to the user. For example, haptic device 1604 may limit or augment a user's movement. To give a specific example, haptic device 1604 may limit a user's hand from moving forward so that the user has the perception that his or her hand has come in physical contact with a virtual wall. In this specific example, one or more actuators within the haptic device may achieve the physical-movement restriction by pumping fluid into an inflatable bladder of the haptic device. In some examples, a user may also use haptic device 1604 to send action requests to a console. Examples of action requests include, without limitation, requests to start an application and/or end the application and/or requests to perform a particular action within the application.
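To make the bladder-based movement restriction concrete, the following is a minimal sketch, not taken from this disclosure, of how the depth to which a hand has penetrated a virtual wall might be mapped to a target bladder pressure. The function name, the proportional control law, and the tuning constants are illustrative assumptions.

```python
def bladder_pressure_kpa(penetration_mm: float,
                         gain_kpa_per_mm: float = 3.0,
                         max_pressure_kpa: float = 30.0) -> float:
    """Map how far the user's hand has crossed a virtual wall to a target
    pressure for the haptic device's inflatable bladder.

    A simple proportional law with saturation: no penetration means no
    restriction, and the commanded pressure is clamped to a safe maximum.
    """
    return min(max_pressure_kpa, max(0.0, gain_kpa_per_mm * penetration_mm))
```

In practice the commanded pressure would be handed to a pump controller; the clamp stands in for whatever safety limits such a controller would enforce.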

While haptic interfaces may be used with virtual-reality systems, as shown in FIG. 16, haptic interfaces may also be used with augmented-reality systems, as shown in FIG. 17. FIG. 17 is a perspective view of a user 1710 interacting with an augmented-reality system 1700. In this example, user 1710 may wear a pair of augmented-reality glasses 1720 that may have one or more displays 1722 and that are paired with a haptic device 1730. In this example, haptic device 1730 may be a wristband that includes a plurality of band elements 1732 and a tensioning mechanism 1734 that connects band elements 1732 to one another.

One or more of band elements 1732 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 1732 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 1732 may include one or more of various types of actuators. In one example, each of band elements 1732 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.

Haptic devices 1510, 1520, 1604, and 1730 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 1510, 1520, 1604, and 1730 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 1510, 1520, 1604, and 1730 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's artificial reality experience. In one example, each of band elements 1732 of haptic device 1730 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user.

In some embodiments, the systems described herein may also include an eye-tracking subsystem designed to identify and track various characteristics of a user's eye(s), such as the user's gaze direction. The phrase "eye tracking" may, in some examples, refer to a process by which the position, orientation, and/or motion of an eye is measured, detected, sensed, determined, and/or monitored. The disclosed systems may measure the position, orientation, and/or motion of an eye in a variety of different ways, including through the use of various optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc. An eye-tracking subsystem may be configured in a number of different ways and may include a variety of different eye-tracking hardware components or other computer-vision components. For example, an eye-tracking subsystem may include a variety of different optical sensors, such as two-dimensional (2D) or 3D cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. In this example, a processing subsystem may process data from one or more of these sensors to measure, detect, determine, and/or otherwise monitor the position, orientation, and/or motion of the user's eye(s).

FIG. 18 is an illustration of an exemplary system 1800 that incorporates an eye-tracking subsystem capable of tracking a user's eye(s). As depicted in FIG. 18, system 1800 may include a light source 1802, an optical subsystem 1804, an eye-tracking subsystem 1806, and/or a control subsystem 1808. In some examples, light source 1802 may generate light for an image (e.g., to be presented to an eye 1801 of the viewer). Light source 1802 may represent any of a variety of suitable devices. For example, light source 1802 may include a two-dimensional projector (e.g., a LCoS display), a scanning source (e.g., a scanning laser), or another device (e.g., an LCD, an LED display, an OLED display, an active-matrix OLED display (AMOLED), a transparent OLED display (TOLED), a waveguide, or some other display capable of generating light for presenting an image to the viewer). In some examples, the image may represent a virtual image, which may refer to an optical image formed from the apparent divergence of light rays from a point in space, as opposed to an image formed from the light rays' actual divergence.

In some embodiments, optical subsystem 1804 may receive the light generated by light source 1802 and generate, based on the received light, converging light 1820 that includes the image. In some examples, optical subsystem 1804 may include any number of lenses (e.g., Fresnel lenses, convex lenses, concave lenses), apertures, filters, mirrors, prisms, and/or other optical components, possibly in combination with actuators and/or other devices. In particular, the actuators and/or other devices may translate and/or rotate one or more of the optical components to alter one or more aspects of converging light 1820. Further, various mechanical couplings may serve to maintain the relative spacing and/or the orientation of the optical components in any suitable combination.

In one embodiment, eye-tracking subsystem 1806 may generate tracking information indicative of a gaze angle of an eye 1801 of the viewer. In this embodiment, control subsystem 1808 may control aspects of optical subsystem 1804 (e.g., the angle of incidence of converging light 1820) based at least in part on this tracking information. Additionally, in some examples, control subsystem 1808 may store and utilize historical tracking information (e.g., a history of the tracking information over a given duration, such as the previous second or fraction thereof) to anticipate the gaze angle of eye 1801 (e.g., an angle between the visual axis and the anatomical axis of eye 1801). In some embodiments, eye-tracking subsystem 1806 may detect radiation emanating from some portion of eye 1801 (e.g., the cornea, the iris, the pupil, or the like) to determine the current gaze angle of eye 1801. In other examples, eye-tracking subsystem 1806 may employ a wavefront sensor to track the current location of the pupil.
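As a concrete illustration of anticipating gaze from history, the sketch below extrapolates the next gaze angle linearly from the two most recent tracking samples. It is not taken from this disclosure; the class and method names, the history length, and the choice of linear extrapolation (rather than, say, a Kalman filter or a learned saccade model) are illustrative assumptions.

```python
from collections import deque


class GazePredictor:
    """Anticipate an upcoming gaze angle from a short history of samples."""

    def __init__(self, history_len: int = 10):
        # Each entry is a (timestamp_s, angle_deg) pair.
        self.history = deque(maxlen=history_len)

    def add_sample(self, timestamp_s: float, angle_deg: float) -> None:
        self.history.append((timestamp_s, angle_deg))

    def predict(self, t_future_s: float) -> float:
        if len(self.history) < 2:
            # Not enough history: fall back to the latest measurement.
            return self.history[-1][1] if self.history else 0.0
        (t0, a0), (t1, a1) = self.history[-2], self.history[-1]
        velocity = (a1 - a0) / (t1 - t0)  # angular velocity in deg/s
        return a1 + velocity * (t_future_s - t1)
```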

Any number of techniques may be used to track eye 1801. Some techniques may involve illuminating eye 1801 with infrared light and measuring reflections with at least one optical sensor that is tuned to be sensitive to the infrared light. Information about how the infrared light is reflected from eye 1801 may be analyzed to determine the position(s), orientation(s), and/or motion(s) of one or more eye feature(s), such as the cornea, pupil, iris, and/or retinal blood vessels.

In some examples, the radiation captured by a sensor of eye-tracking subsystem 1806 may be digitized (i.e., converted to an electronic signal). Further, the sensor may transmit a digital representation of this electronic signal to one or more processors (e.g., processors associated with a device including eye-tracking subsystem 1806). Eye-tracking subsystem 1806 may include any of a variety of sensors in a variety of different configurations. For example, eye-tracking subsystem 1806 may include an infrared detector that reacts to infrared radiation. The infrared detector may be a thermal detector, a photonic detector, and/or any other suitable type of detector. Thermal detectors may include detectors that react to thermal effects of the incident infrared radiation.

In some examples, one or more processors may process the digital representation generated by the sensor(s) of eye-tracking subsystem 1806 to track the movement of eye 1801. In another example, these processors may track the movements of eye 1801 by executing algorithms represented by computer-executable instructions stored on non-transitory memory. In some examples, on-chip logic (e.g., an application-specific integrated circuit, or ASIC) may be used to perform at least portions of such algorithms. As noted, eye-tracking subsystem 1806 may be programmed to use an output of the sensor(s) to track movement of eye 1801. In some embodiments, eye-tracking subsystem 1806 may analyze the digital representation generated by the sensors to extract eye-rotation information from changes in reflections. In one embodiment, eye-tracking subsystem 1806 may use corneal reflections or glints (also known as Purkinje images) and/or the center of the eye's pupil 1822 as features to track over time.

In some embodiments, eye-tracking subsystem 1806 may use the center of the eye's pupil 1822 and infrared or near-infrared, non-collimated light to create corneal reflections. In these embodiments, eye-tracking subsystem 1806 may use the vector between the center of the eye's pupil 1822 and the corneal reflections to compute the gaze direction of eye 1801. In some embodiments, the disclosed systems may perform a calibration procedure for an individual (using, e.g., supervised or unsupervised techniques) before tracking the user's eyes. For example, the calibration procedure may include directing users to look at one or more points displayed on a display while the eye-tracking system records the values that correspond to each gaze position associated with each point.
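A minimal sketch of this pupil-center/corneal-reflection approach follows; it is not from this disclosure. The affine calibration model, the function names, and the least-squares fit over recorded calibration pairs are assumptions made for illustration, and deployed systems often use higher-order polynomial or model-based mappings instead.

```python
import numpy as np


def pupil_glint_vector(pupil_center, glint_center):
    """2D vector from the corneal reflection (glint) to the pupil center."""
    return np.asarray(pupil_center, float) - np.asarray(glint_center, float)


def fit_calibration(vectors, gaze_points):
    """Fit an affine map from pupil-glint vectors to on-screen gaze points.

    vectors: (N, 2) pupil-glint vectors recorded while the user fixates
    gaze_points: (N, 2) known screen coordinates of the calibration targets
    Returns a (3, 2) matrix M such that [vx, vy, 1] @ M estimates (gx, gy).
    """
    V = np.hstack([np.asarray(vectors, float), np.ones((len(vectors), 1))])
    M, *_ = np.linalg.lstsq(V, np.asarray(gaze_points, float), rcond=None)
    return M


def estimate_gaze(vector, M):
    """Map a new pupil-glint vector to an estimated on-screen gaze point."""
    return np.append(np.asarray(vector, float), 1.0) @ M
```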

In some embodiments, eye-tracking subsystem 1806 may use two types of infrared and/or near-infrared (also known as active light) eye-tracking techniques: bright-pupil and dark-pupil eye tracking, which may be differentiated based on the location of an illumination source with respect to the optical elements used. If the illumination is coaxial with the optical path, then eye 1801 may act as a retroreflector as the light reflects off the retina, thereby creating a bright-pupil effect similar to the red-eye effect in photography. If the illumination source is offset from the optical path, then the eye's pupil 1822 may appear dark because the retroreflection from the retina is directed away from the sensor. In some embodiments, bright-pupil tracking may create greater iris/pupil contrast, allowing more robust eye tracking with iris pigmentation, and may feature reduced interference (e.g., interference caused by eyelashes and other obscuring features). Bright-pupil tracking may also allow tracking in lighting conditions ranging from total darkness to a very bright environment.

In some embodiments, control subsystem 1808 may control light source 1802 and/or optical subsystem 1804 to reduce optical aberrations (e.g., chromatic aberrations and/or monochromatic aberrations) of the image that may be caused by or influenced by eye 1801. In some examples, as mentioned above, control subsystem 1808 may use the tracking information from eye-tracking subsystem 1806 to perform such control. For example, in controlling light source 1802, control subsystem 1808 may alter the light generated by light source 1802 (e.g., by way of image rendering) to modify (e.g., pre-distort) the image so that the aberration of the image caused by eye 1801 is reduced.

The disclosed systems may track both the position and the relative size of the pupil (since, e.g., the pupil dilates and/or constricts). In some examples, the eye-tracking devices and components (e.g., sensors and/or sources) used for detecting and/or tracking the pupil may be different (or calibrated differently) for different types of eyes. For example, the frequency range of the sensors may be different (or separately calibrated) for eyes of different colors and/or different pupil types, sizes, and/or the like. As such, the various eye-tracking components (e.g., infrared sources and/or sensors) described herein may need to be calibrated for each individual user and/or eye.

The disclosed systems may track both eyes with or without ophthalmic correction, such as that provided by contact lenses worn by the user. In some embodiments, ophthalmic-correction elements (e.g., adjustable lenses) may be directly incorporated into the artificial reality systems described herein. In some examples, the color of the user's eye may necessitate modification of a corresponding eye-tracking algorithm. For example, eye-tracking algorithms may need to be modified based at least in part on the differing color contrast between a brown eye and, for example, a blue eye.

FIG. 19 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 18. As shown in this figure, an eye-tracking subsystem 1900 may include at least one source 1904 and at least one sensor 1906. Source 1904 generally represents any type or form of element capable of emitting radiation. In one example, source 1904 may generate visible, infrared, and/or near-infrared radiation. In some examples, source 1904 may radiate non-collimated infrared and/or near-infrared portions of the electromagnetic spectrum toward an eye 1902 of the user. Source 1904 may utilize a variety of sampling rates and speeds. For example, the disclosed systems may use sources with higher sampling rates in order to capture fixational eye movements of the user's eye 1902 and/or to correctly measure saccade dynamics of the user's eye 1902. As noted above, any type or form of eye-tracking technique may be used to track the user's eye 1902, including optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc.

Sensor 1906 generally represents any type or form of element capable of detecting radiation, such as radiation reflected off the user's eye 1902. Examples of sensor 1906 include, without limitation, a charge coupled device (CCD), a photodiode array, a complementary metal-oxide-semiconductor (CMOS) based sensor device, and/or the like. In one example, sensor 1906 may represent a sensor having predetermined parameters, including, but not limited to, a dynamic resolution range, linearity, and/or other characteristics selected and/or designed specifically for eye tracking.

As detailed above, eye-tracking subsystem 1900 may generate one or more glints. A glint 1903 may represent reflections of radiation (e.g., infrared radiation from an infrared source, such as source 1904) from the structure of the user's eye. In various embodiments, glint 1903 and/or the user's pupil may be tracked using an eye-tracking algorithm executed by a processor (either within or external to an artificial reality device). For example, an artificial reality device may include a processor and/or a memory device in order to perform eye tracking locally, and/or a transceiver to send and receive the data necessary to perform eye tracking on an external device (e.g., a mobile phone, cloud server, or other computing device).

FIG. 19 shows an example image 1905 captured by an eye-tracking subsystem, such as eye-tracking subsystem 1900. In this example, image 1905 may include both the user's pupil 1908 and a glint 1910 near the pupil. In some examples, pupil 1908 and/or glint 1910 may be identified using an artificial-intelligence-based algorithm, such as a computer-vision-based algorithm. In one embodiment, image 1905 may represent a single frame in a series of frames that may be analyzed continuously in order to track the user's eye 1902. Further, pupil 1908 and/or glint 1910 may be tracked over a period of time to determine a user's gaze.

In one example, eye-tracking subsystem 1900 may be configured to identify and measure the inter-pupillary distance (IPD) of a user. In some embodiments, eye-tracking subsystem 1900 may measure and/or calculate the IPD of the user while the user is wearing the artificial reality system. In these embodiments, eye-tracking subsystem 1900 may detect the positions of a user's eyes and may use this information to calculate the user's IPD.

As noted, the eye-tracking systems or subsystems disclosed herein may track a user's eye position and/or eye movement in a variety of ways. In one example, one or more light sources and/or optical sensors may capture an image of the user's eyes. The eye-tracking subsystem may then use the captured information to determine the user's inter-pupillary distance, interocular distance, and/or a 3D position of each eye (e.g., for distortion-adjustment purposes), including a magnitude of torsion and rotation (i.e., roll, pitch, and yaw) and/or gaze directions for each eye. In one example, infrared light may be emitted by the eye-tracking subsystem and reflected from each eye. The reflected light may be received or detected by an optical sensor and analyzed to extract eye rotation data from changes in the infrared light reflected by each eye.

The eye-tracking subsystem may use any of a variety of different methods to track the eyes of a user. For example, a light source (e.g., infrared light-emitting diodes) may emit a dot pattern onto each eye of the user. The eye-tracking subsystem may then detect (e.g., via an optical sensor coupled to the artificial reality system) and analyze a reflection of the dot pattern from each eye of the user to identify a location of each pupil of the user. Accordingly, the eye-tracking subsystem may track up to six degrees of freedom of each eye (i.e., 3D position, roll, pitch, and yaw), and at least a subset of the tracked quantities may be combined from two eyes of a user to estimate a gaze point (i.e., a 3D location or position in a virtual scene where the user is looking) and/or an IPD.
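One common way to fuse the two tracked eyes into a single gaze point, sketched below purely for illustration and not prescribed by this disclosure, is to take the midpoint of the closest approach between the two gaze rays; the function name and the NumPy formulation are assumptions.

```python
import numpy as np


def gaze_point_from_rays(o_left, d_left, o_right, d_right):
    """Estimate a 3D gaze point as the midpoint of the closest approach
    between the two gaze rays (each given by an origin o and a unit
    direction d). Returns None for near-parallel rays."""
    o1, d1 = np.asarray(o_left, float), np.asarray(d_left, float)
    o2, d2 = np.asarray(o_right, float), np.asarray(d_right, float)
    # Solve for parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None  # rays nearly parallel: gaze is effectively at infinity
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```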

In some cases, the distance between a user's pupil and a display may change as the user's eye moves to look in different directions. The varying distance between a pupil and a display as the viewing direction changes may be referred to as "pupil swim" and may contribute to distortion perceived by the user, as a result of light focusing in different locations as the distance between the pupil and the display changes. Accordingly, measuring distortion at different eye positions and pupil distances relative to displays, and generating distortion corrections for the different positions and distances, may allow mitigation of distortion caused by pupil swim by tracking the 3D position of a user's eyes and applying a distortion correction corresponding to the 3D position of each of the user's eyes at a given point in time. Thus, knowing the 3D position of each of a user's eyes may allow for the mitigation of distortion caused by changes in the distance between the pupil of the eye and the display by applying a distortion correction for each 3D eye position. Furthermore, as noted above, knowing the position of each of the user's eyes may also enable the eye-tracking subsystem to make automated adjustments for a user's IPD.
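One plausible realization of such a correction, shown only as an assumption-laden sketch rather than the method of this disclosure, is to measure correction coefficients offline on a coarse grid of 3D eye positions and interpolate them at runtime from the tracked eye position; the grid, the coefficient table, and the function names are hypothetical.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator


def build_correction_lookup(grid_mm, measured_coeffs):
    """Wrap offline-measured distortion-correction coefficients, sampled on
    a coarse (nx, ny, nz) grid of eye positions in mm relative to the
    nominal eye position, in a trilinear interpolator."""
    return RegularGridInterpolator((grid_mm, grid_mm, grid_mm), measured_coeffs)


def correction_for_eye(lookup, eye_pos_mm):
    """Interpolate the correction for the currently tracked 3D eye position."""
    return float(lookup(np.asarray(eye_pos_mm, float).reshape(1, 3))[0])
```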

In some embodiments, a display subsystem may include a variety of additional subsystems that may work in conjunction with the eye-tracking subsystems described herein. For example, a display subsystem may include a varifocal subsystem, a scene-rendering module, and/or a vergence-processing module. The varifocal subsystem may cause left and right display elements to vary the focal distance of the display device. In one embodiment, the varifocal subsystem may physically change the distance between a display and the optics through which it is viewed by moving the display, the optics, or both. Additionally, moving or translating two lenses relative to each other may also be used to change the focal distance of the display. Thus, the varifocal subsystem may include actuators or motors that move displays and/or optics to change the distance between them. This varifocal subsystem may be separate from or integrated into the display subsystem. The varifocal subsystem may also be integrated into, or separate from, its actuation subsystem and/or the eye-tracking subsystems described herein.

In one example, the display subsystem may include a vergence-processing module configured to determine a vergence depth of a user's gaze based on a gaze point and/or an estimated intersection of the gaze lines determined by the eye-tracking subsystem. Vergence may refer to the simultaneous movement or rotation of both eyes in opposite directions to maintain single binocular vision, which may be naturally and automatically performed by the human eye. Thus, a location where a user's eyes are verged is where the user is looking, and is also typically the location where the user's eyes are focused. For example, the vergence-processing module may triangulate gaze lines to estimate a distance or depth from the user associated with intersection of the gaze lines. The depth associated with intersection of the gaze lines may then be used as an approximation for the accommodation distance, which may identify a distance from the user where the user's eyes are directed. Thus, the vergence distance may allow for the determination of a location where the user's eyes should be focused and a depth from the user's eyes at which the eyes are focused, thereby providing information (such as an object or a plane of focus) for rendering adjustments to the virtual scene.
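For a symmetric fixation on the midline, this triangulation reduces to simple trigonometry: each gaze line crosses the midline at a depth of roughly (IPD/2)/tan(θ), where θ is that eye's inward rotation from straight ahead. The sketch below encodes this reduction; it is an illustrative simplification rather than the vergence-processing module's actual implementation, and the function name and the averaging of the two per-eye estimates are assumptions.

```python
import math


def vergence_depth_m(ipd_m: float, left_angle_rad: float,
                     right_angle_rad: float) -> float:
    """Estimate fixation depth from each eye's inward rotation, measured
    from straight ahead (positive toward the nose). Assumes a symmetric
    fixation in the midsagittal plane."""
    depths = []
    for angle in (left_angle_rad, right_angle_rad):
        if angle <= 0.0:
            return math.inf  # parallel or diverging gaze: effectively at infinity
        depths.append((ipd_m / 2.0) / math.tan(angle))
    return sum(depths) / len(depths)
```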

The vergence-processing module may coordinate with the eye-tracking subsystems described herein to make adjustments to the display subsystem to account for a user's vergence depth. When the user is focused on something at a distance, the user's pupils may be slightly farther apart than when the user is focused on something close. The eye-tracking subsystem may obtain information about the user's vergence or focus depth and may adjust the display subsystem to be closer together when the user's eyes focus or verge on something close, and to be farther apart when the user's eyes focus or verge on something at a distance.

The eye-tracking information generated by the above-described eye-tracking subsystems may also be used, for example, to modify various aspects of how different computer-generated images are presented. For example, a display subsystem may be configured to modify, based on information generated by an eye-tracking subsystem, at least one aspect of how the computer-generated images are presented. For instance, the computer-generated images may be modified based on the user's eye movement, such that if a user is looking up, the computer-generated images may be moved upward on the screen. Similarly, if the user is looking to the side or down, the computer-generated images may be moved to the side or downward on the screen. If the user's eyes are closed, the computer-generated images may be paused or removed from the display and resumed once the user's eyes are back open.

The above-described eye-tracking subsystems may be incorporated into one or more of the various artificial reality systems described herein in a variety of ways. For example, one or more of the various components of system 1800 and/or eye-tracking subsystem 1900 may be incorporated into augmented-reality system 1300 in FIG. 13 and/or virtual-reality system 1400 in FIG. 14 to enable these systems to perform various eye-tracking tasks (including one or more of the eye-tracking operations described herein).

FIG. 20A illustrates an exemplary human-machine interface (also referred to herein as an EMG control interface) configured to be worn around a user's lower arm or wrist as a wearable system 2000. In this example, wearable system 2000 may include sixteen neuromuscular sensors 2010 (e.g., EMG sensors) arranged circumferentially around an elastic band 2020 with an interior surface 2030 configured to contact a user's skin. However, any suitable number of neuromuscular sensors may be used. The number and arrangement of neuromuscular sensors may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband may be used to generate control information for controlling an augmented reality system, a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or any other suitable control task. As shown, the sensors may be coupled together using flexible electronics incorporated into the wireless device. FIG. 20B illustrates a cross-sectional view through one of the sensors of the wearable device shown in FIG. 20A. In some embodiments, the output of one or more of the sensing components may optionally be processed using hardware signal-processing circuitry (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some signal processing of the output of the sensing components may be performed in software. Thus, signal processing of signals sampled by the sensors may be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect. A non-limiting example of a signal-processing chain used to process recorded data from sensors 2010 is discussed in more detail below with reference to FIGS. 21A and 21B.
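For concreteness, a classic software conditioning chain for one EMG channel (band-pass filtering, rectification, and low-pass envelope extraction) might look like the sketch below. This is not the signal-processing chain of FIGS. 21A and 21B; the filter orders and cutoff frequencies (a 20-450 Hz band and a 6 Hz envelope, which require a sampling rate above roughly 900 Hz) are conventional choices assumed for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt


def emg_envelope(raw: np.ndarray, fs: float) -> np.ndarray:
    """Condition one channel of raw EMG samples into an amplitude envelope.

    raw: 1D array of samples from a single electrode channel
    fs:  sampling rate in Hz (must exceed twice the 450 Hz band edge)
    """
    bandpass = butter(4, [20.0, 450.0], btype="bandpass", fs=fs, output="sos")
    rectified = np.abs(sosfiltfilt(bandpass, raw))  # band-limit, then rectify
    lowpass = butter(4, 6.0, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(lowpass, rectified)  # smooth into an envelope
```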

FIGS. 21A and 21B illustrate an exemplary schematic diagram with internal components of a wearable system with EMG sensors. As shown, the wearable system may include a wearable portion 2110 (FIG. 21A) and an adapter portion 2120 (FIG. 21B) in communication with the wearable portion 2110 (e.g., via Bluetooth or another suitable wireless communication technology). As shown in FIG. 21A, the wearable portion 2110 may include skin-contact electrodes 2111, examples of which are described in connection with FIGS. 20A and 20B. The output of the skin-contact electrodes 2111 may be provided to an analog front end 2130, which may be configured to perform analog processing (e.g., amplification, noise reduction, filtering, etc.) on the recorded signals. The processed analog signals may then be provided to an analog-to-digital converter 2132, which may convert the analog signals to digital signals that may be processed by one or more computer processors. An example of a computer processor that may be used in accordance with some embodiments is the microcontroller (MCU) 2134 illustrated in FIG. 21A. As shown, the MCU 2134 may also receive inputs from other sensors (e.g., an IMU sensor 2140) and from a power and battery module 2142. The output of the processing performed by the MCU 2134 may be provided to an antenna 2150 for transmission to the adapter portion 2120 shown in FIG. 21B.

The adapter portion 2120 may include an antenna 2152, which may be configured to communicate with the antenna 2150 included as part of the wearable portion 2110. Communication between the antennas 2150 and 2152 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radio-frequency signaling and Bluetooth. As shown, the signals received by the antenna 2152 of the adapter portion 2120 may be provided to a host computer for further processing, for display, and/or for effecting control of one or more particular physical or virtual objects.

Although the examples provided with reference to FIGS. 20A-20B and FIGS. 21A-21B are discussed in the context of interfaces with EMG sensors, the techniques described herein for reducing electromagnetic interference may also be implemented in wearable interfaces with other types of sensors, including, but not limited to, mechanomyography (MMG) sensors, sonomyography (SMG) sensors, and electrical impedance tomography (EIT) sensors. The techniques described herein for reducing electromagnetic interference may also be implemented in wearable interfaces that communicate with computer hosts through wires and cables (e.g., USB cables, optical fiber cables, etc.).

As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.

In some examples, the term "memory device" generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.

In some examples, the term "physical processor" generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.

Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.

In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive a real-time output of a recognition model to be transformed, transform the real-time output of the recognition model into an uncertainty level, output a result of the transformation to a feedback system for subsequent communication to a user, and/or use the result of the transformation to modulate at least one property of suitable and/or appropriate feedback for presentation to the user. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
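As a concrete illustration of this transformation, the sketch below maps a recognition model's probability to an uncertainty level that grows as the probability approaches the decision threshold (mirroring the inverse relationship between uncertainty and threshold distance described above) and uses that level to scale a haptic vibration amplitude. The linear scaling, the normalization, and the function names are illustrative assumptions rather than a required implementation.

```python
def uncertainty_level(probability: float, threshold: float) -> float:
    """Map a model probability to an uncertainty level in [0, 1].

    Uncertainty is highest (1.0) when the probability sits exactly at the
    decision threshold and falls off with distance from the threshold,
    reaching 0.0 at whichever end of [0, 1] is farthest from it.
    """
    max_distance = max(threshold, 1.0 - threshold)
    return 1.0 - abs(probability - threshold) / max_distance


def modulate_amplitude(probability: float, threshold: float,
                       max_amplitude: float = 1.0) -> float:
    """Scale a vibration amplitude in proportion to the uncertainty level,
    so the feedback is most perceptible when the model is least certain."""
    return max_amplitude * uncertainty_level(probability, threshold)


# Example: a pinch probability of 0.55 against a 0.5 threshold is highly
# uncertain, so the vibration runs near full strength (~0.9).
print(modulate_amplitude(0.55, 0.5))
```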

In some embodiments, the term "computer-readable medium" generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.

The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and may be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein, or include additional steps in addition to those disclosed.

The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.

Unless otherwise noted, the terms "connected to" and "coupled to" (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms "a" or "an", as used in the specification and claims, are to be construed as meaning "at least one of". Finally, for ease of use, the terms "including" and "having" (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word "comprising".

100: system; 102: modules; 104: receiving module; 106: determination module; 108: modulation module; 110: presentation module; 112: user input module; 120: memory; 130: physical processor; 140: recognition model; 142: pinch detection model; 144: hand-tracking model; 146: intent interaction model; 148: cognitive state model; 150: sensors; 150(1): sensor; 150(N): sensor; 200: data flow; 202: real-time output; 204: uncertainty level; 206: feedback; 302: probability; 304: threshold; 306: probability; 308: probability; 310: probability; 312: probability; 400: computer-implemented method; 410: step; 420: step; 430: step; 440: step; 502: uncertainty level; 602: amplitude level; 702: amplitude level; 802: uncertainty threshold/threshold; 902: amplitude level; 1002: amplitude level; 1102: amplitude level; 1200: computer-implemented method/method; 1210: step; 1220: step; 1230: step; 1240: step; 1250: step; 1260: step; 1270: step; 1300: augmented-reality system/augmented-reality device/system/artificial reality system; 1302: eyewear device; 1305: neckband; 1310: frame; 1315(A): left display device/display device; 1315(B): right display device/display device; 1320(A): acoustic transducer; 1320(B): acoustic transducer; 1320(C): acoustic transducer; 1320(D): acoustic transducer; 1320(E): acoustic transducer; 1320(F): acoustic transducer; 1320(G): acoustic transducer; 1320(H): acoustic transducer; 1320(I): acoustic transducer; 1320(J): acoustic transducer; 1325: controller; 1330: wired connection; 1335: power source; 1340: sensor; 1350: controller; 1400: virtual-reality system/system/artificial reality system; 1402: front rigid body; 1404: band; 1406(A): output audio transducer; 1406(B): output audio transducer; 1500: vibrotactile system; 1510: haptic device; 1520: haptic device; 1530: wearable textile material/textile material; 1540: vibrotactile devices; 1550: power source; 1552: conductive wiring; 1560: processor; 1570: other device or system; 1580: communications interface; 1590: touch-sensitive pads; 1600: artificial reality environment; 1602: head-mounted display; 1604: haptic device; 1700: augmented-reality system; 1710: user; 1720: augmented-reality glasses; 1722: display; 1730: haptic device; 1732: band elements; 1734: tensioning mechanism; 1800: system; 1801: eye; 1802: light source; 1804: optical subsystem; 1806: eye-tracking subsystem; 1808: control subsystem; 1820: converging light; 1822: pupil; 1900: eye-tracking subsystem; 1902: eye; 1903: glint; 1904: source; 1905: image; 1906: sensor; 1908: pupil; 1910: glint; 2000: wearable system; 2010: neuromuscular sensors/sensors; 2020: elastic band; 2030: interior surface; 2110: wearable portion; 2111: skin-contact electrodes; 2120: adapter portion; 2130: analog front end; 2132: analog-to-digital converter; 2134: microcontroller (MCU); 2140: IMU sensor; 2142: power and battery module; 2150: antenna; 2152: antenna

The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.

[FIG. 1] is a block diagram of an example system for communicating model uncertainty to users.

[FIG. 2] is a flow diagram of an exemplary data flow for communicating model uncertainty to users.

[FIG. 3] is a diagram of an exemplary time series of probabilities.

[FIG. 4] is a flow diagram of an exemplary method for communicating model uncertainty to users.

[FIG. 5] is a diagram of an exemplary relationship between uncertainty levels and probabilities.

[FIG. 6] is a diagram of exemplary uncertainty-modulated feedback.

[FIG. 7] is another diagram of exemplary uncertainty-modulated feedback.

[FIG. 8] is a diagram of exemplary uncertainty thresholds.

[FIG. 9] is a diagram of exemplary uncertainty-based feedback.

[FIG. 10] is another diagram of exemplary uncertainty-based feedback.

[FIG. 11] is another diagram of exemplary uncertainty-based feedback.

[FIG. 12] is a flow diagram of another exemplary method for communicating model uncertainty to users.

[FIG. 13] is an illustration of exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.

[FIG. 14] is an illustration of an exemplary virtual-reality headset that may be used in connection with embodiments of this disclosure.

[FIG. 15] is an illustration of exemplary haptic devices that may be used in connection with embodiments of this disclosure.

[FIG. 16] is an illustration of an exemplary virtual-reality environment according to embodiments of this disclosure.

[FIG. 17] is an illustration of an exemplary augmented-reality environment according to embodiments of this disclosure.

[FIG. 18] is an illustration of an exemplary system that incorporates an eye-tracking subsystem capable of tracking a user's eye(s).

[FIG. 19] is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 18.

[FIG. 20A] and [FIG. 20B] are illustrations of an exemplary human-machine interface configured to be worn around a user's lower arm or wrist.

[FIG. 21A] and [FIG. 21B] are illustrations of an exemplary schematic diagram with internal components of a wearable system.

Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.

400: computer-implemented method
410: step
420: step
430: step
440: step

Claims (20)

一種電腦實施方法,其包含: 接收與辨識模型之即時輸出相關聯的資訊,該辨識模型經調適以辨識使用者之至少一個行為; 基於該資訊而判定與該即時輸出相關聯之不確定性位準; 基於該不確定性位準而調變回饋之至少一個屬性;以及 在該辨識模型之該即時輸出的實質上同時,向該使用者呈現該回饋。 A computer-implemented method comprising: receiving information associated with a real-time output of a recognition model adapted to recognize at least one behavior of a user; determining a level of uncertainty associated with the immediate output based on the information; modulating at least one attribute of feedback based on the uncertainty level; and The feedback is presented to the user substantially simultaneously with the immediate output of the recognition model. 如請求項1之電腦實施方法,其中: 與該辨識模型之該即時輸出相關聯的該資訊包含該使用者執行該至少一個行為之機率; 該電腦實施方法進一步包含以下中之至少一者: 當該使用者執行該至少一個行為之該機率高於預定臨限值時執行使用者輸入操作;或 當該使用者執行該至少一個行為之該機率低於該預定臨限值時避免執行該使用者輸入操作;並且 與該即時輸出相關聯之該不確定性位準係基於在該機率與該預定臨限值之間的距離而判定,該不確定性位準與該距離成反比。 The computer implementation method of claim 1, wherein: the information associated with the real-time output of the recognition model includes a probability of the user performing the at least one behavior; The computer-implemented method further includes at least one of the following: performing a user input operation when the probability of the user performing the at least one action is above a predetermined threshold; or refraining from performing the user input operation when the probability of the user performing the at least one action is below the predetermined threshold; and The uncertainty level associated with the immediate output is determined based on the distance between the probability and the predetermined threshold, the uncertainty level being inversely proportional to the distance. 如請求項1之電腦實施方法,其中: 與該辨識模型之該即時輸出相關聯的該資訊包含該使用者執行該至少一個行為之機率;並且 該回饋之該至少一個屬性經調變為具有與該使用者執行該至少一個行為之該機率成比例的可感知性位準。 The computer implementation method of claim 1, wherein: the information associated with the real-time output of the recognition model includes a probability of the user performing the at least one action; and The at least one attribute of the feedback is tuned to have a perceptibility level proportional to the probability that the user performs the at least one behavior. 如請求項1之電腦實施方法,其中該回饋之該至少一個屬性經調變為具有與該不確定性位準成比例的可感知性位準,該不確定性位準與該即時輸出相關聯。The computer-implemented method of claim 1, wherein the at least one attribute of the feedback is modulated to have a perceptibility level proportional to the level of uncertainty associated with the immediate output . 如請求項1之電腦實施方法,其中: 該辨識模型包含經調適以輸出該使用者執行捏合手勢之機率的捏合辨識模型; 該資訊包含該使用者執行該捏合手勢之機率; 該電腦實施方法進一步包含當該使用者執行該捏合手勢之該機率高於預定臨限值時執行使用者輸入操作;並且 當該使用者執行該捏合手勢之該機率高於該預定臨限值時,在執行該使用者輸入操作的同時,該回饋經呈現給該使用者。 The computer implementation method of claim 1, wherein: The recognition model includes a pinch recognition model adapted to output a probability that the user performs a pinch gesture; The information includes the probability that the user performed the pinch gesture; The computer-implemented method further includes performing a user input operation when the probability that the user performs the pinch gesture is above a predetermined threshold; and When the probability of the user performing the pinch gesture is higher than the predetermined threshold, the feedback is presented to the user while performing the user input operation. 
如請求項1之電腦實施方法,其中: 該辨識模型包含經調適以輸出該使用者執行捏合手勢之機率的捏合辨識模型; 該資訊包含該使用者執行該捏合手勢之機率; 該電腦實施方法進一步包含當該使用者執行該捏合手勢之該機率低於預定臨限值時避免執行使用者輸入操作;並且 在判定該使用者執行該捏合手勢之該機率低於該預定臨限值的同時,該回饋經呈現給該使用者。 The computer implementation method of claim 1, wherein: The recognition model includes a pinch recognition model adapted to output a probability that the user performs a pinch gesture; The information includes the probability that the user performed the pinch gesture; The computer-implemented method further includes refraining from performing a user input operation when the probability that the user performs the pinch gesture is below a predetermined threshold; and The feedback is presented to the user while determining that the probability of the user performing the pinch gesture is below the predetermined threshold. 如請求項1之電腦實施方法,其中: 該辨識模型包含手動追蹤模型,該手動追蹤模型經調適以輸出: 用於該使用者之手部的一或多個部分之位置或位向資訊;以及 該位置或位向資訊之信賴位準;並且 與該即時輸出相關聯之該不確定性位準係基於該信賴位準而判定,該不確定性位準與該信賴位準成反比。 The computer implementation method of claim 1, wherein: The identification model includes a manual tracking model adapted to output: location or orientation information for one or more parts of the user's hand; and the level of reliance on the location or orientation information; and The uncertainty level associated with the immediate output is determined based on the confidence level, the uncertainty level being inversely proportional to the confidence level. 如請求項1之電腦實施方法,其中: 該回饋係觸覺回饋;並且 該觸覺回饋之至少一個屬性係基於與該即時輸出相關聯之該不確定性位準。 The computer implementation method of claim 1, wherein: the feedback is tactile feedback; and At least one property of the haptic feedback is based on the uncertainty level associated with the immediate output. 如請求項1之電腦實施方法,其中: 該回饋係振動;並且 該振動之至少一個屬性係基於與該即時輸出相關聯之該不確定性位準。 The computer implementation method of claim 1, wherein: the feedback system vibrates; and At least one property of the vibration is based on the uncertainty level associated with the immediate output. 如請求項1之電腦實施方法,其中基於該不確定性位準而調變該回饋的該至少一個屬性包含調變以下中之一或多者: 該回饋之幅度,以將該不確定性位準傳達至該使用者; 該回饋之頻率,以將該不確定性位準傳達至該使用者; 該回饋之持續時間,以將該不確定性位準傳達至該使用者; 該回饋之型態,以將該不確定性位準傳達至該使用者;以及 該回饋之空間化,以將該不確定性位準傳達至該使用者。 The computer-implemented method of claim 1, wherein modulating the at least one attribute of the feedback based on the uncertainty level includes modulating one or more of the following: the magnitude of the feedback to communicate the level of uncertainty to the user; the frequency of the feedback to communicate the level of uncertainty to the user; the duration of the feedback to communicate the level of uncertainty to the user; the type of feedback to communicate the level of uncertainty to the user; and The feedback is spatialized to communicate the level of uncertainty to the user. 如請求項1之電腦實施方法,其中該回饋指示用於減小該即時輸出之該不確定性位準的方式。The computer-implemented method of claim 1, wherein the feedback indicates a means for reducing the uncertainty level of the immediate output. 如請求項1之電腦實施方法,其進一步包含: 接收與該辨識模型之額外即時輸出相關聯的額外資訊; 基於該額外資訊而判定與該額外即時輸出相關聯之額外不確定性位準;以及 基於該額外不確定性位準而調變該回饋之該至少一個屬性。 The computer-implemented method of claim 1, which further includes: receiving additional information associated with the additional real-time output of the identification model; determining an additional level of uncertainty associated with the additional immediate output based on the additional information; and The at least one property of the feedback is modulated based on the additional uncertainty level. 
13. A system comprising:
at least one physical processor; and
physical memory comprising computer-executable instructions that, when executed by the at least one physical processor, cause the at least one physical processor to:
receive information associated with a real-time output of a recognition model, the recognition model being adapted to recognize at least one behavior of a user;
determine, based on the information, a level of uncertainty associated with the real-time output;
modulate at least one attribute of feedback based on the level of uncertainty; and
present the feedback to the user substantially simultaneously with the real-time output of the recognition model.

14. The system of claim 13, wherein:
the information associated with the real-time output of the recognition model comprises a probability that the user is performing the at least one behavior;
the computer-executable instructions, when executed by the at least one physical processor, further cause the at least one physical processor to:
perform a user input operation when the probability that the user is performing the at least one behavior is above a predetermined threshold; or
refrain from performing the user input operation when the probability that the user is performing the at least one behavior is below the predetermined threshold; and
the level of uncertainty associated with the real-time output is determined based on a distance between the probability and the predetermined threshold, the level of uncertainty being inversely proportional to the distance.

15. The system of claim 13, wherein:
the information associated with the real-time output of the recognition model comprises a probability that the user is performing the at least one behavior; and
the at least one attribute of the feedback is modulated to have a level of perceptibility proportional to the probability that the user is performing the at least one behavior.

16. The system of claim 13, wherein the at least one attribute of the feedback is modulated to have a level of perceptibility proportional to the level of uncertainty associated with the real-time output.
17. The system of claim 13, wherein:
the recognition model comprises a pinch-recognition model adapted to output a probability that the user is performing a pinch gesture;
the information comprises a probability that the user is performing the pinch gesture;
the computer-executable instructions, when executed by the at least one physical processor, further cause the at least one physical processor to perform a user input operation when the probability that the user is performing the pinch gesture is above a predetermined threshold; and
the feedback is presented to the user, concurrently with the user input operation being performed, when the probability that the user is performing the pinch gesture is above the predetermined threshold.

18. The system of claim 13, wherein:
the recognition model comprises a pinch-recognition model adapted to output a probability that the user is performing a pinch gesture;
the information comprises the probability that the user is performing the pinch gesture;
the computer-executable instructions, when executed by the at least one physical processor, further cause the at least one physical processor to refrain from performing a user input operation when the probability that the user is performing the pinch gesture is below a predetermined threshold; and
the feedback is presented to the user concurrently with the determination that the probability that the user is performing the pinch gesture is below the predetermined threshold.

19. The system of claim 13, wherein:
the recognition model comprises a hand-tracking model adapted to output:
position or orientation information for one or more portions of the user's hand; and
a confidence level for the position or orientation information; and
the level of uncertainty associated with the real-time output is determined based on the confidence level, the level of uncertainty being inversely proportional to the confidence level.

20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to:
receive information associated with a real-time output of a recognition model, the recognition model being adapted to recognize at least one behavior of a user;
determine, based on the information, a level of uncertainty associated with the real-time output;
modulate at least one attribute of feedback based on the level of uncertainty; and
present the feedback to the user substantially simultaneously with the real-time output of the recognition model.
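For illustration only, the following minimal Python sketch shows one way the uncertainty levels recited in claims 2 and 7 (and their system counterparts, claims 14 and 19) might be computed. It is not from the patent: the function names, the [0, 1] normalization, and the use of a bounded linear inverse mapping (rather than a strict 1/distance) are all assumptions made for this example.

```python
# Illustrative sketch only -- not from the patent. All names and the
# normalization scheme are assumptions made for this example.

def uncertainty_from_probability(probability: float, threshold: float) -> float:
    """Map a recognition probability to an uncertainty level in [0, 1].

    Per claims 2 and 14, uncertainty is inversely related to the distance
    between the model's probability and the decision threshold: outputs
    near the threshold are maximally uncertain.
    """
    # Largest possible distance from the threshold on either side,
    # used here to normalize the result into [0, 1].
    max_distance = max(threshold, 1.0 - threshold)
    distance = abs(probability - threshold)
    return 1.0 - (distance / max_distance)


def uncertainty_from_confidence(confidence: float) -> float:
    """Per claims 7 and 19, uncertainty is inversely related to a
    hand-tracking model's confidence level (assumed here to lie in [0, 1])."""
    return 1.0 - confidence


if __name__ == "__main__":
    # A pinch probability of 0.55 against a 0.5 threshold is highly uncertain.
    print(uncertainty_from_probability(0.55, threshold=0.5))  # ~0.9
    # A probability of 0.95 is far from the threshold, so uncertainty is low.
    print(uncertainty_from_probability(0.95, threshold=0.5))  # ~0.1
```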
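Claim 10 enumerates feedback attributes (amplitude, frequency, duration, pattern, spatialization) that can be modulated to convey uncertainty. A hedged sketch of one possible mapping follows; the dataclass, the parameter ranges, and the linear scalings are invented for illustration and are not specified by the patent.

```python
# Illustrative sketch only -- not from the patent. The dataclass, the
# parameter ranges, and the linear mappings are assumptions for this example.
from dataclasses import dataclass


@dataclass
class VibrationParams:
    amplitude: float      # normalized drive strength, 0..1
    frequency_hz: float   # vibrotactile frequency
    duration_ms: float    # pulse length


def modulate_vibration(uncertainty: float) -> VibrationParams:
    """Map an uncertainty level in [0, 1] to vibration attributes.

    Per claims 4 and 10, the feedback's perceptibility scales with the
    uncertainty: more uncertain outputs produce a stronger, longer,
    lower-frequency pulse. Pattern and spatialization (also listed in
    claim 10) could be modulated analogously on multi-actuator hardware.
    """
    u = min(max(uncertainty, 0.0), 1.0)  # clamp to the expected range
    return VibrationParams(
        amplitude=0.2 + 0.8 * u,         # 0.2 .. 1.0
        frequency_hz=250.0 - 100.0 * u,  # 250 Hz down to 150 Hz
        duration_ms=30.0 + 120.0 * u,    # 30 ms up to 150 ms
    )
```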
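Finally, a sketch tying the pieces together for the pinch-gesture scenario of claims 5, 6, 17, and 18: perform the input operation only above the threshold, and in either case present uncertainty-encoded feedback substantially simultaneously with the model's output. It reuses the two helpers sketched above; the threshold value and the two stub functions are hypothetical stand-ins, not taken from the patent.

```python
# Illustrative sketch only -- builds on uncertainty_from_probability() and
# modulate_vibration() from the examples above. The threshold value and the
# two stubs below are hypothetical, not from the patent.

PINCH_THRESHOLD = 0.5  # assumed decision threshold


def perform_user_input_operation() -> None:
    print("input operation performed (e.g., select the targeted element)")


def play_vibration(params: "VibrationParams") -> None:
    print(f"vibrate: {params}")


def on_model_output(pinch_probability: float) -> None:
    """Handle one real-time output of a (hypothetical) pinch-recognition model."""
    uncertainty = uncertainty_from_probability(pinch_probability, PINCH_THRESHOLD)
    feedback = modulate_vibration(uncertainty)

    if pinch_probability > PINCH_THRESHOLD:
        # Claims 5/17: act on the recognized pinch.
        perform_user_input_operation()
    # Claims 6/18: below the threshold, refrain from the input operation.

    # Either way, present the feedback substantially simultaneously with
    # the model output so the user can sense how certain the model was.
    play_vibration(feedback)
```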
TW111129002A 2021-08-19 2022-08-02 Systems and methods for communicating model uncertainty to users TW202318180A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163234823P 2021-08-19 2021-08-19
US63/234,823 2021-08-19
US17/575,676 2022-01-14
US17/575,676 US11789544B2 (en) 2021-08-19 2022-01-14 Systems and methods for communicating recognition-model uncertainty to users

Publications (1)

Publication Number Publication Date
TW202318180A 2023-05-01

Family

ID=83355015

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111129002A TW202318180A (en) 2021-08-19 2022-08-02 Systems and methods for communicating model uncertainty to users

Country Status (3)

Country Link
EP (1) EP4388395A1 (en)
TW (1) TW202318180A (en)
WO (1) WO2023023299A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10120413B2 (en) * 2014-09-11 2018-11-06 Interaxon Inc. System and method for enhanced training using a virtual reality environment and bio-signal data

Also Published As

Publication number Publication date
EP4388395A1 (en) 2024-06-26
WO2023023299A1 (en) 2023-02-23

Similar Documents

Publication Publication Date Title
US10831268B1 (en) Systems and methods for using eye tracking to improve user interactions with objects in artificial reality
US8988373B2 (en) Skin input via tactile tags
US11039651B1 (en) Artificial reality hat
US20220236795A1 (en) Systems and methods for signaling the onset of a user's intent to interact
US11847794B1 (en) Self-tracked controller
US20230037329A1 (en) Optical systems and methods for predicting fixation distance
WO2022164881A1 (en) Systems and methods for predicting an intent to interact
US12028419B1 (en) Systems and methods for predictively downloading volumetric data
US11789544B2 (en) Systems and methods for communicating recognition-model uncertainty to users
US20220293241A1 (en) Systems and methods for signaling cognitive-state transitions
US11579704B2 (en) Systems and methods for adaptive input thresholding
JP2024516755A (en) HANDHELD CONTROLLER WITH THUMB PRESSURE SENSING - Patent application
US20230043585A1 (en) Ultrasound devices for making eye measurements
TW202318180A (en) Systems and methods for communicating model uncertainty to users
US20240256031A1 (en) Systems and methods for gaze-assisted gesture control
US20240348278A1 (en) Transmitter and driver architectures
CN118119915A (en) System and method for communicating model uncertainty to a user
US20240346221A1 (en) Circuits and methods for reducing the effects of variation in inter-die communication in 3d-stacked systems
US20240312892A1 (en) Universal chip with variable packaging
WO2024159200A1 (en) Systems and methods for gaze-assisted gesture control
WO2023023206A1 (en) Systems and methods for performing eye-tracking
TW202315398A (en) Optical systems and methods for predicting fixation distance
WO2022192759A1 (en) Systems and methods for signaling cognitive-state transitions
WO2023031633A1 (en) Online calibration based on deformable body mechanics
CN116830064A (en) System and method for predicting interactive intent