TWI438716B - Image capture and buffering in a virtual world - Google Patents

Image capture and buffering in a virtual world

Info

Publication number
TWI438716B
Authority
TW
Taiwan
Prior art keywords
image data
virtual world
user
buffer
current value
Prior art date
Application number
TW098123488A
Other languages
Chinese (zh)
Other versions
TW201009746A (en)
Inventor
Zachary A Garbow
Jim C Chen
Ryan K Cradick
Original Assignee
Activision Publishing Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Activision Publishing Inc filed Critical Activision Publishing Inc
Publication of TW201009746A publication Critical patent/TW201009746A/en
Application granted granted Critical
Publication of TWI438716B publication Critical patent/TWI438716B/en

Links

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/10
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/535 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for monitoring, e.g. of user parameters, terminal parameters, application parameters, network parameters
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
    • A63F2300/6669 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera using a plurality of virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character change rooms

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Description

Image capture and buffering in a virtual world

Embodiments of the invention relate to immersive visual environments. More specifically, embodiments of the invention relate to techniques for identifying photo opportunities and capturing images of events occurring within a virtual world.

A virtual world is a simulated environment in which users may inhabit and interact with virtual objects and locations of the virtual world. Users may also interact with one another via avatars. An avatar generally provides a graphical representation of an individual within the virtual world environment. Avatars are usually presented to other users as two- or three-dimensional graphical representations that resemble a human individual. Frequently, virtual worlds allow multiple users to enter the virtual environment and interact with one another. Virtual worlds may be said to provide an immersive environment, as they typically appear similar to the real world, with objects tending to follow rules related to gravity, topography, locomotion, physics, and kinematics. Of course, virtual worlds may suspend or alter these rules and may provide other fanciful or imaginative environments. Users typically communicate with one another through their avatars using text messages sent between avatars, real-time voice communication, gestures displayed by avatars, symbols visible in the virtual world, and the like.

Some virtual worlds are described as persistent. A persistent world provides an immersive environment that is typically always available and in which events continue to occur regardless of the presence of a given avatar (for example, a fantasy setting used as the setting for a role-playing game, or a virtual world complete with land, buildings, towns, and economies). Thus, unlike more conventional online games or multi-user environments, the virtual world continues to exist, and plots and events continue to unfold, as users enter (and exit) the virtual world. The virtual environment is presented as images on a display screen, and some virtual environments may allow users to record events that occur within the virtual environment.

One embodiment of the invention includes a method for capturing image data depicting a virtual world. The method may generally include monitoring a plurality of measurements regarding a user interacting with the virtual world, calculating a current value of a photo opportunity score from the plurality of measurements, and comparing the current value of the photo opportunity score with a predetermined threshold. Upon determining that the current value of the opportunity score exceeds the predetermined threshold, a set of image data of the virtual world is captured.

Another embodiment of the invention includes a computer-readable storage medium containing a program which, when executed, performs an operation for capturing image data depicting a virtual world. The operation may generally include monitoring a plurality of measurements regarding a user interacting with the virtual world, calculating a current value of a photo opportunity score from the plurality of measurements, and comparing the current value of the photo opportunity score with a predetermined threshold. Upon determining that the current value of the opportunity score exceeds the predetermined threshold, a set of image data of the virtual world is captured.

Yet another embodiment of the invention includes a system having a processor and a memory containing a program which, when executed by the processor, is configured to perform an operation for capturing image data depicting a virtual world. The operation generally includes monitoring a plurality of measurements regarding a user interacting with the virtual world, calculating a current value of a photo opportunity score from the plurality of measurements, and comparing the current value of the photo opportunity score with a predetermined threshold. Upon determining that the current value of the opportunity score exceeds the predetermined threshold, a set of image data of the virtual world is captured.
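The patent text describes this method only in prose; the following is a minimal, hypothetical sketch in Python of the capture loop that the three summaries above outline. The function and parameter names (`measure`, `score_fn`, `capture_fn`) are assumptions for illustration, not names taken from the patent or any real virtual world API.

```python
import time

# Hypothetical sketch of the summarized method; all names are illustrative.
def photo_opportunity_loop(measure, score_fn, capture_fn, threshold, poll_seconds=1.0):
    """Monitor measurements about a user interacting with the virtual world,
    compute a photo opportunity score, and capture image data when the score
    exceeds the predetermined threshold."""
    while True:
        measurements = measure()             # plurality of measurements
        score = score_fn(measurements)       # current photo opportunity score
        if score > threshold:                # compare with predetermined threshold
            capture_fn(measurements, score)  # capture a set of image data
        time.sleep(poll_seconds)
```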

So that the manner in which the above-recited features, advantages, and objects of the present invention are attained can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings.

It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.

A virtual world provides a simulated environment in which users may be represented by avatars. An avatar may be used to "travel" through locations of the virtual world, such as virtual streets, buildings, rooms, and the like. While in a given location, an avatar may also be used to interact with objects or other avatars present there. For example, an avatar may be able to approach and interact with another avatar by communicating, performing commercial transactions, engaging in recreational activities, and the like. Thus, multiple users, although in different physical locations, may be present in the same virtual location and interact with one another using their respective avatars.

Within a virtual world, as in the real world, it is often desirable to capture moments and memories as pictures. Just as in the real world, many interactions in a virtual world present potential "photo opportunities." For example, a user may be at a gathering with friends or on a virtual vacation. While taking a screen capture is fairly routine, the user may not realize that a photo opportunity has occurred until the moment has already passed. In addition, a screen capture records only a single camera angle, namely that of the user's viewport, which may not produce the image the user wants.

Although a user could record their entire interaction with the virtual world and select individual images to save, this approach creates video that follows the camera's path through the environment and requires substantial storage space. Further, few users would want to wade through such an enormous number of pictures. Accordingly, users may wish for images to be identified and saved automatically at opportune moments while they interact with the virtual environment.

Embodiments of the invention provide techniques for detecting when a good photo opportunity may be occurring within a virtual environment and, in response, capturing images from perspectives not limited to the user's viewport. In one embodiment, a variety of physiological and virtual world parameters are measured to determine when to capture images of a user interacting with the virtual environment. To improve image quality, these parameters may be individually weighted by user-specified factors. In another embodiment, captured images are stored in a temporary buffer space of fixed size, potentially replacing older images. The user may review the buffer contents and select desirable images to move to a permanent gallery. The user's image selections, in turn, may be used to further improve the quality of future images.

In the following, reference is made to embodiments of the invention. However, it should be understood that the invention is not limited to the specifically described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the invention. Furthermore, in various embodiments the invention provides numerous advantages over the prior art. However, although embodiments of the invention may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the invention. Thus, the following aspects, features, embodiments, and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim. Likewise, reference to "the invention" shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim.

One embodiment of the invention is implemented as a program product for use with a computer system. The program of the program product defines functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media on which information is permanently stored (for example, read-only memory devices within a computer, such as CD-ROM disks readable by a CD-ROM drive); and (ii) writable storage media on which alterable information is stored (for example, floppy disks within a diskette drive or a hard-disk drive). Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention. Other media include communications media through which information is conveyed to a computer, such as through a computer or telephone network, including wireless communications networks. The latter embodiment specifically includes transmitting information to and from the Internet and other networks. Such communications media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention. Broadly, computer-readable storage media and communications media may be referred to herein as computer-readable media.

In general, the routines executed to implement the embodiments of the invention may be part of an operating system or a specific application, component, program, module, object, or sequence of instructions. The computer program of the present invention typically comprises a multitude of instructions that will be translated by the native computer into a machine-readable format and hence into executable instructions. Also, programs comprise variables and data structures that either reside locally to the program or are found in memory or on storage devices. In addition, various programs described hereinafter may be identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.

FIG. 1 is a block diagram illustrating a client-server view of a virtual world computing environment 100, according to one embodiment of the invention. As shown, the virtual world computing environment 100 includes a client computer 120, a network 160, and a server system 140. In one embodiment, the computer systems illustrated in environment 100 may include existing computer systems, for example, desktop computers, server computers, laptop computers, tablet computers, and the like. The computing environment 100 illustrated in FIG. 1 is, however, merely an example of one computing environment. Embodiments of the present invention may be implemented using other environments, regardless of whether the computer systems are complex multi-user computing systems, such as a cluster of personal computers connected by a high-speed network, single-user workstations, or network appliances lacking non-volatile storage. Further, the software applications illustrated in FIG. 1 and described herein may be implemented using computer software applications executing on existing computer systems such as desktop computers, server computers, laptop computers, tablet computers, and the like. However, the software applications described herein are not limited to any currently existing computing environment or programming language, and may be adapted to take advantage of new computing systems as they become available.

As shown, each client computer 120 includes a central processing unit (CPU) (not shown), which obtains instructions and data from client memory 130 and client storage 123 via a bus (not shown). The CPU is the programmable logic device that performs all the instruction, logic, and mathematical processing in a computer. Client storage 123 stores application programs and data for use by client computer 120, and includes hard-disk drives, flash memory devices, optical media, and the like. Client computer 120 is operably connected to the network 160. Client memory 130 includes an operating system (OS) 131 and a virtual world client 132. Operating system 131 is the software used for managing the operation of the client computer 120. Examples of OS 131 include UNIX, a version of the Microsoft Windows® operating system, and distributions of the Linux® operating system. (Note: Linux is a trademark of Linus Torvalds in the United States and other countries.)

In one embodiment, the virtual world client 132 provides a software program that allows a user to connect to a virtual world server application 146 on the server 140 and, once connected, to perform various user actions. Such actions may include exploring virtual locations, interacting with other avatars, and interacting with virtual objects. Further, the virtual world client 132 may be configured to generate and display, within the immersive environment, a visual representation of the user generally referred to as an avatar. The user's avatar is typically visible to other users in the virtual world, and the user may view the avatars representing other users. The virtual world client 132 may also be configured to generate and display the immersive environment to the user and to transmit the user's desired actions to the virtual world server application 146. Such a display may include content from the virtual world determined from the user's line of sight at any given time. For the user, the display may present a third-person perspective, meaning a view from a location other than that of the user's avatar, which may include the image of the user's avatar within the virtual world. Alternatively, the display may present a first-person perspective, meaning a view of the virtual world as would be seen through the eyes of the avatar representing the user.

As shown, client memory 130 also includes an image capture engine 134. In one embodiment, the image capture engine 134 may provide a software application configured to detect photo opportunities and, in response, to capture images of the virtual environment at opportune moments. The image capture engine 134 may capture images of the user's avatar, or of what the avatar "sees," at an opportune moment. That is, the camera perspective is not limited to the user's viewport. Further, in one embodiment, the image capture engine 134 may capture, at an opportune moment, three-dimensional scene data describing each object depicted in the virtual environment. Doing so may allow a two-dimensional image of a given moment in the virtual world to be created from any desired perspective. Once captured, images may be stored in a temporary buffer space 125. In one embodiment, the size of the buffer may be adjusted by the user. To determine when an opportune moment may have occurred (or may be about to occur), the image capture engine 134 may retrieve real-time measurements 133 from the virtual world client 132. Such real-time measurements include any measurable physiological or virtual world parameters. This may include, for example, changes in the vicinity of the user's avatar within the virtual world as exposed to the virtual world client 132 by the virtual world server application 146. For example, assume a user and friends are hanging out in the virtual world when fireworks suddenly light up the evening sky, leading those users to adjust their views of the virtual world so that they are all focused on roughly the same place. Real-time measurements may also include physiological parameters measured, for example, via input devices 180 and virtual reality interaction devices 190. Continuing the example, the user may laugh into a microphone upon seeing the fireworks in the virtual world. Other examples of measurable physiological parameters include pulse, eye movement, brain activity, body temperature, grip pressure on a mouse, mouse movement patterns, typing speed and patterns, typing pressure, facial features, perspiration, and head movement.
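The patent does not specify a data format for the real-time measurements 133; as an illustration only, they could be represented as a simple mapping from measurement name to numeric value, combining virtual world parameters with physiological readings. The `world_state` and `devices` objects and the key names below are assumptions, not real interfaces.

```python
def collect_realtime_measurements(world_state, devices):
    """Assemble a snapshot of measurable virtual world and physiological
    parameters. `world_state` and `devices` are assumed dict-like interfaces
    exposing the values described in the text; they are not real APIs."""
    return {
        "friends_nearby": world_state["friends_nearby"],
        "viewport_convergence": world_state["viewport_convergence"],
        "microphone_laughter": devices["microphone_laughter"],
        "typing_speed": devices["typing_speed"],
        "mouse_grip_pressure": devices["mouse_grip_pressure"],
    }
```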

In one embodiment, the image capture engine 134 may be configured to maintain a database 127 of average measurements used to detect whether a photo opportunity has occurred. Real-time measurements of a given parameter may be evaluated against the historical averages to determine whether a photo opportunity has occurred. For example, if a group of users are speaking with one another, the average volume of their voices may be sampled, and if one user raises their voice above a specified threshold, the image capture engine 134 may capture an image and store it in buffer space 125. In one embodiment, the specified thresholds may be weighted according to user-specified weighting factors and stored as part of the user settings 124.
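A minimal sketch of how such a comparison against historical averages could work, assuming the averages database keeps a running mean and standard deviation per measurement category. The incremental (Welford-style) update used here is a common choice but is not specified by the patent.

```python
import math

class MeasurementStats:
    """Running mean/variance for one measurement category (illustrative)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, value):
        # Welford's online algorithm for mean and variance.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def deviation(self, value):
        """Return how many standard deviations `value` lies from the mean."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return (value - self.mean) / std if std > 0 else 0.0
```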

Further, learned measurements may be obtained by providing the user with the ability to select images from the buffer 125 to move to a permanent gallery 126. When an image is moved to the permanent gallery 126, a set of contextual measurements may be stored in the measurements database 127. For example, if the user has a history of choosing to keep images captured when a large amount of laughter is present on the microphone, the image capture engine 134 may respond by increasing the weighting factor associated with the microphone-laughter measurement. Similarly, if the user has a history of selecting images captured from a certain angle and distance, the image capture engine 134 may prefer to capture images from similar or identical angles and distances.
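One plausible way to "learn" from gallery selections, offered only as a sketch and not as the patented method, is to nudge upward the weighting factor of each measurement category that was unusually high when a saved image was captured, clamped to the 0 to 10 range used by the configuration interface described later.

```python
def reinforce_weights(weights, deviations, step=0.5):
    """Increase the weight of categories whose deviation was high when the
    user chose to keep an image; a simple heuristic for illustration only."""
    for category, dev in deviations.items():
        if dev > 1.0:  # the category stood out when this image was captured
            weights[category] = min(10.0, weights.get(category, 5.0) + step)
    return weights
```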

The user may view the virtual world using a display device 170, such as an LCD or CRT monitor display, and may interact with the virtual world client 132 using input devices 180 (for example, a keyboard and a mouse). Further, in one embodiment, the user may interact with the virtual world client 132 and the virtual world server application 146 using a variety of virtual reality interaction devices 190. For example, the user may wear a set of virtual reality goggles with a screen display for each lens. The goggles may also be equipped with motion sensors that cause the view of the virtual world presented to the user to move based on the individual's head movements. As another example, the user may wear a pair of gloves configured to translate motion and movement of the user's hands into movements of the avatar within the virtual reality environment. Of course, embodiments of the invention are not limited to these examples, and one of ordinary skill in the art will readily recognize that the invention may be adapted for use with a variety of devices configured to present the virtual world to the user and to translate the user's movements, motions, or other actions into actions performed by the avatar representing that user within the virtual world.

As shown, the server system 140 includes a CPU 142, which obtains instructions and data from memory 144 and storage 143 via a bus 141. The processor 142 could be any processor adapted to support the methods of the invention. The memory 144 is any memory sufficiently large to hold the necessary programs and data structures. Memory 144 could be one or a combination of memory devices, including random access memory and non-volatile or backup memory (for example, programmable or flash memory, read-only memory, etc.). In addition, memory 144 and storage 143 may be considered to include memory physically located elsewhere in the server 140, for example, on another computer coupled to the server 140 via bus 141. The server system 140 is operably connected to the network 160, which generally represents any kind of data communications network. Accordingly, the network 160 may represent both local area networks and wide area networks, including the Internet.

Of course, the embodiments described herein are intended to be illustrative and are not limiting of the invention, and other embodiments are broadly contemplated. For example, the image capture engine 134, user settings 124, buffer space 125, gallery 126, and the database 127 of average and learned measurements need not reside on the client as shown in FIG. 1; any or all of them may instead reside on the server system 140. In another example, the functionality of the image capture engine 134 may be incorporated into the virtual world client 132.

FIG. 2A illustrates a user display 200 showing the viewport presented to a user interacting with the virtual world from a third-person perspective, according to one embodiment of the invention. In this example, the primary user (that is, the user viewing user display 200) is represented by avatar 201, which has just caught wave 202 while playing a surfing game with first and second avatars 203. The viewport shows avatar 201 from behind at the crest of wave 202, with the two avatars 203 farther away and partially obscured by the wave. In this example, assume the two avatars 203 are controlled by users who are friends of the primary user. A scenario like the one depicted in FIG. 2A presents a narrow window of opportunity to capture a memorable image of these avatars interacting with one another in the virtual environment. As shown in FIG. 2A, the primary user controlling avatar 201 may not recognize the opportunity until it is too late. Moreover, even if the primary user took a screen capture, the viewport shows only the back of the primary user's avatar 201, the clipped heads of the two avatars 203, and the back side of wave 202, resulting in a less than ideal screen capture.

FIG. 2B illustrates a graphical user interface 250 used to view the contents of buffer 125, according to one embodiment of the invention. Illustratively, interface 250 shows images depicting the scenario of FIG. 2A. In this example, assume the images in FIG. 2B were acquired by the image capture engine 134 of display 200 at a moment when the photo opportunity score exceeded a specified threshold. More specifically, the images may have been acquired when the cumulative deviation of a set of measurements exceeded the specified threshold due to the proximity of avatar 201 and the two avatars 203, the user's avatar smiling after catching wave 202, the two avatars 203 smiling at avatar 201, and the volume of laughter on the microphone. Of course, the particular measurements and thresholds may be tailored to suit the preferences of a given user.

Illustratively, the four images shown in FIG. 2B include: a side view 252, showing the shape of a beautiful wave along with the avatar's pose; a front view 253, showing the emotional expressions of all three avatars; a top view 254, showing the surrounding water; and a portrait view 255, capturing the facial expression of avatar 201. Because the image capture engine 134 captured images 252, 253, 254, and 255 from a variety of angles, these images may provide the primary user with a much better memento of this event within the virtual world (compared with the hastily taken screen capture shown in FIG. 2A). Further, the primary user (that is, the user controlling avatar 201) may select one of the operations 256, such as saving one of these four images to the permanent gallery 126 (which may be accessed via tab selection 251), or deleting unwanted images from buffer 125, thereby freeing additional space for images captured by the image capture engine 134.

FIG. 3 illustrates a graphical user interface 300 for configuring the user settings 124 of the image capture engine 134, according to one embodiment of the invention. As shown, a set of general settings 301 may be used to specify the desired buffer size for storing images automatically captured on behalf of the user, as well as to activate or deactivate measurement collection and automatic image capture.

In one embodiment, the user may also customize a set of weighting factors 303 associated with a list of measurement categories 302. Illustratively, a set of slider bars may be used to increase (or decrease) the weight given to any particular factor. As shown, the values of the weighting factors 303 range from 0 to 10, corresponding respectively to no influence and maximum influence on the associated measurement category. In this example, the list of displayed measurement categories 302 includes: number of friends nearby, strength of friend relationships, type of activity in progress (for example, the user's first skydive?), magnitude of the activity, microphone cues, "emoticons" included in instant message communications, "emoticons" received in instant message communications, typing speed and patterns, keywords (for example, "that was funny," "LOL"), spoken conversation content, and an aggregation of viewport orientations (for example, are many users looking at the same person or object?). The particular measurement categories displayed to the user may be customized by clicking button 304. Of course, those skilled in the art will recognize that the measurement categories may be tailored to include any measurable physiological or virtual world parameter (for example, the magnitude of scene changes in the user's viewport, etc.). A set of buttons 305 allows the user to apply, save, or cancel changes, or to restore default settings. A sketch of how such settings might be stored follows.
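For illustration only, the user-configurable settings shown in FIG. 3 might be stored as a plain mapping. The category names and default values below are assumptions that echo the list in the preceding paragraph and its 0 to 10 weight range; the patent does not describe a storage format.

```python
# Hypothetical defaults for user settings 124; names and values are assumed.
DEFAULT_USER_SETTINGS = {
    "buffer_size": 20,               # number of image sets kept in the buffer (assumed)
    "measurement_collection": True,  # toggle for collection / automatic capture
    "weights": {                     # 0 = no influence, 10 = maximum influence
        "friends_nearby": 5,
        "friend_relationship_strength": 4,
        "activity_type": 6,
        "activity_magnitude": 6,
        "microphone_cues": 8,
        "emoticons_sent": 3,
        "emoticons_received": 3,
        "typing_speed_and_pattern": 2,
        "keywords": 7,
        "conversation_content": 5,
        "viewport_orientation_summary": 7,
    },
}
```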

FIG. 4 is a flow diagram illustrating a method 400 for improving the selection of images captured on behalf of a user in a virtual environment, according to one embodiment of the invention. More specifically, FIG. 4 illustrates a method for learning the perspective and contextual measurements underlying a user's preferences in selecting images captured while the user interacts with elements of the virtual environment. For the sake of illustration, method 400 is described in conjunction with the system of FIG. 1. However, persons skilled in the art will understand that any system configured to perform the steps of method 400, in any order, is within the scope of the present invention.

As shown, method 400 begins at step 410, where a command to view the contents of the buffer of automatically captured images is received. For example, the user may click tab 251 to view buffer 125. At step 420, the user may specify a selection of an image. For example, the user may select image 253 in interface 250 of FIG. 2B. At step 430, the user may specify that the image selected at step 420 should be saved to the permanent gallery. For example, the user may click button 256 to save the selected image 253 to the permanent gallery.

At step 440, the image capture engine 134 may move the selected image from buffer 125 to the permanent gallery 126. For example, the selected image 253 is removed from the buffer and stored in the permanent gallery, which may be accessed by clicking tab 251 to view the gallery contents.

At step 450, the image capture engine 134 may determine a set of perspective and/or contextual measurements from the image moved to the permanent gallery. For example, the captured image may be tagged with metadata providing dimensional coordinates relative to the virtual environment. In one embodiment, this may include at least eight 3D coordinates, relative to the location of the user's avatar, sufficient to represent a frustum-shaped viewing volume that corresponds to the perspective from which the image was captured. Additionally, contextual measurements may provide metadata describing aspects of the virtual environment at the time the image was captured. Using image 253 of FIG. 2B as an example, the number of friends nearby is two, and the avatars representing friends in the virtual environment are both looking at the primary user's avatar. Of course, the contextual metadata captured along with an image may be tailored to suit the needs of a particular case. After step 450, method 400 terminates.
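A sketch of how a saved image's perspective and context might be tagged, assuming eight 3D corner points are enough to describe the frustum-shaped viewing volume mentioned above. The dataclass layout is an assumption for illustration, not the patent's format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class ImageMetadata:
    """Perspective and context recorded with a gallery image (illustrative)."""
    # Eight corners of the viewing frustum, relative to the user's avatar.
    frustum_corners: List[Point3D]
    # Contextual measurements at capture time, e.g. {"friends_nearby": 2}.
    context: Dict[str, float] = field(default_factory=dict)
```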

FIG. 5 is a flow diagram illustrating a method 500 for identifying good photo opportunities based on user preferences, according to one embodiment of the invention. For the sake of illustration, method 500 is described in conjunction with the system of FIG. 1. However, persons skilled in the art will understand that any system configured to perform the steps of method 500, in any order, is within the scope of the present invention.

As shown, method 500 begins at step 510, where the image capture engine 134 determines whether measurement collection is toggled on. For example, as shown in FIG. 3, option buttons 301 allow the user to toggle whether measurement collection is active. If active, then at step 520, real-time physiological and virtual world measurements may be collected. For example, the number of friends present with the user and the volume of laughter on the microphone, among other measurements, may be recorded. At step 530, the real-time measurements collected while the user interacts with the virtual environment may be compared with the averages maintained in the database. Deviations of the current measurements from the historical averages may be collected to identify when something interesting or unusual is occurring within the virtual environment. For example, as shown in FIGS. 2A and 2B, the primary user (represented by avatar 201) is interacting with two friends (represented by avatars 203). For this example, assume this is one standard deviation greater than the average number of friends with whom the primary user typically interacts in the virtual environment. Further, assume that the volume of laughter on the microphone is 1.5 standard deviations greater than the average volume of microphone laughter.

At step 540, the individual deviations may be scaled using the weighting factors 303 and averaged to determine a weighted average of the individual deviations. The weighted average provides a cumulative deviation score (that is, a photo opportunity score) that may be used to determine whether something interesting or unusual may be occurring within the virtual environment. For example, assume the user has specified that a deviation in the volume level or laughter picked up on the microphone is given twice the weight of a deviation in the number of friends the user is with in the virtual environment. In this example, the cumulative deviation score based on the assumed deviations listed above would be 1.33 standard deviations.
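The 1.33 figure follows from a weighted mean of the two assumed deviations: weight 1 for friends nearby at +1.0 standard deviation and weight 2 for microphone laughter at +1.5 standard deviations. A small sketch of that calculation, with the weights and values taken from the example rather than from any fixed specification:

```python
def photo_opportunity_score(deviations, weights):
    """Weighted average of per-measurement deviations (in standard deviations)."""
    total_weight = sum(weights[k] for k in deviations)
    if total_weight == 0:
        return 0.0
    return sum(deviations[k] * weights[k] for k in deviations) / total_weight

# Worked example from the text: (1*1.0 + 2*1.5) / (1 + 2) = 1.33...
score = photo_opportunity_score(
    {"friends_nearby": 1.0, "microphone_laughter": 1.5},
    {"friends_nearby": 1, "microphone_laughter": 2},
)
assert abs(score - 4.0 / 3.0) < 1e-9
```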

At step 550, the photo opportunity score determined at step 540 may be compared with a threshold score. In one embodiment, the threshold score is set to the score of the lowest-scored image in buffer 125. That is, if the photo opportunity score exceeds the lowest score of any image in the buffer, this may be an opportune moment to capture images of the virtual environment. If the buffer is empty, the threshold may be set to a minimum value (or simply to 0.0). If the photo opportunity score is greater than the threshold score (step 560), the image capture engine 134 may capture a number of images of the virtual environment (step 570). At step 570, the average measurements in database 126 are updated using the real-time measurements collected at step 520. At step 580, method 500 terminates.
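A minimal sketch of the threshold rule described here, assuming the buffer holds entries carrying a numeric score (the same assumed entry format as the buffer sketch after the description of method 600 below): the threshold is the lowest score currently buffered, or 0.0 when the buffer is empty.

```python
def capture_threshold(buffer_entries):
    """Return the photo opportunity score a new capture must beat.

    `buffer_entries` is assumed to be a list of dicts with a "score" key;
    an empty buffer yields the minimum threshold of 0.0.
    """
    if not buffer_entries:
        return 0.0
    return min(entry["score"] for entry in buffer_entries)
```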

Of course, the embodiments described above are intended to be illustrative and are not limiting of the invention, and other embodiments are broadly contemplated. For example, the comparison of step 530 and the calculation of step 540 need not compute a weighted average as shown in FIG. 5, and may be replaced by any suitable statistical calculation. For instance, deltas in the measurements may be monitored to predict an impending photo opportunity (for example, where the measurements begin changing rapidly). This approach may allow images to be captured in anticipation of a photo opportunity, ensuring that an image of the right moment is not missed in a rapidly changing situation.
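As a sketch of the alternative mentioned here (monitoring how fast measurements are changing rather than their absolute deviation), one could flag an impending photo opportunity when the rate of change of the score exceeds some limit. The rate limit below is an assumed parameter, not a value from the patent.

```python
def rapid_change(previous_score, current_score, elapsed_seconds, rate_limit=0.5):
    """Predict an impending photo opportunity when the score is changing
    faster than `rate_limit` standard deviations per second (assumed value)."""
    if elapsed_seconds <= 0:
        return False
    return abs(current_score - previous_score) / elapsed_seconds > rate_limit
```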

FIG. 6 is a flow diagram illustrating a method 600 for capturing images, according to one embodiment of the invention. For the sake of illustration, method 600 is described in conjunction with the system of FIG. 1. However, persons skilled in the art will understand that any system configured to perform the steps of method 600, in any order, is within the scope of the present invention.

As shown, method 600 begins at step 610, where the image capture engine 134 determines whether buffer 125 is full, that is, whether the size of the buffer contents exceeds the maximum buffer size specified by the user. At step 620, if the buffer is full, the image capture engine 134 deletes the lowest-scored image from buffer 125 to make space available for a new image. At step 630, the camera perspective is set and adjusted according to the learned measurements contained in database 126, such as a side view or front view of the primary user's avatar from a suitable distance. Alternatively, the image capture engine 134 may capture a collection of three-dimensional scene data describing each object depicted in the virtual environment. Doing so may allow a 3D image to be captured or a two-dimensional image to be generated from any desired perspective. At step 640, images are captured from the perspective set at step 630, and the captured images are stored in buffer 125. At step 650, the measurements captured along with the images (for example, contextual metadata such as how many avatars are present, what the avatars are looking at, camera or viewport positions, volume levels, etc.) are stored in buffer 125. After step 650, method 600 terminates.
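The steps of method 600 could be sketched as follows, assuming the buffer is a list of entries holding a score, the captured image data, and the contextual measurements stored at step 650. The capture call itself is left abstract (the `render_views` callable is an assumption) because the patent does not define a rendering API.

```python
def store_capture(buffer_entries, max_entries, score, render_views, measurements):
    """Evict the lowest-scored entry when the buffer is full, then store the
    newly captured images together with their score and context metadata.

    `render_views` is an assumed callable that returns the captured images
    (or 3D scene data) for the chosen perspectives.
    """
    if len(buffer_entries) >= max_entries:                 # step 610
        lowest = min(buffer_entries, key=lambda e: e["score"])
        buffer_entries.remove(lowest)                      # step 620
    images = render_views()                                # steps 630-640
    buffer_entries.append({                                # step 650
        "score": score,
        "images": images,
        "context": dict(measurements),
    })
    return buffer_entries
```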

While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

100‧‧‧Virtual world computing environment
120‧‧‧Client computer
123‧‧‧Client storage
124‧‧‧User settings
125‧‧‧Temporary buffer space
126‧‧‧Permanent gallery
127‧‧‧Database
130‧‧‧Client memory
131‧‧‧Operating system
132‧‧‧Virtual world client
133‧‧‧Real-time measurements
134‧‧‧Image capture engine
140‧‧‧Server system
141‧‧‧Bus
142‧‧‧Central processing unit (CPU)
143‧‧‧Storage
144‧‧‧Memory
146‧‧‧Virtual world server application
160‧‧‧Network
170‧‧‧Display device
180‧‧‧Input device
190‧‧‧Virtual reality interaction device
200‧‧‧User display
201‧‧‧Avatar
202‧‧‧Wave
203‧‧‧Avatars
250‧‧‧Graphical user interface
251‧‧‧Tab
252‧‧‧Side view
253‧‧‧Front view
254‧‧‧Top view
255‧‧‧Portrait view
256‧‧‧Operations
300‧‧‧Graphical user interface
301‧‧‧General settings
302‧‧‧Measurement categories
303‧‧‧Weighting factors
304‧‧‧Button
305‧‧‧Button

FIG. 1 is a block diagram illustrating a client-server view of a virtual world computing environment, according to one embodiment of the invention; FIG. 2A illustrates a user participating in a virtual world, shown from the user's third-person perspective, according to one embodiment of the invention; FIG. 2B illustrates a graphical user interface screen showing the contents of an image buffer, according to one embodiment of the invention; FIG. 3 illustrates a graphical user interface screen displaying configuration options, according to one embodiment of the invention; FIG. 4 is a flow diagram illustrating a method for improving the quality of images captured on behalf of a user in a virtual environment, according to one embodiment of the invention; FIG. 5 is a flow diagram illustrating a method for identifying photo opportunities within a virtual environment based on user preferences, according to one embodiment of the invention; and FIG. 6 is a flow diagram illustrating a method for capturing images of a virtual environment, according to one embodiment of the invention.


Claims (21)

1. A method for capturing image data depicting a virtual world, the method comprising: monitoring a plurality of measurements regarding a user interacting with the virtual world; calculating a current value of a photo opportunity score from the plurality of measurements; comparing the current value of the photo opportunity score with a predetermined threshold; and upon determining that the current value of the opportunity score exceeds the predetermined threshold, capturing a set of image data of the virtual world.

2. The method of claim 1, wherein at least one of the plurality of measurements is weighted by a user-configurable weighting value.

3. The method of claim 1, further comprising storing the captured image data of the virtual world in a buffer, wherein the buffer is configured to store multiple sets of image data and a photo opportunity score corresponding to each set of image data.

4. The method of claim 3, further comprising: receiving a selection of one of the sets of image data from the buffer; and copying the selected set of image data to a permanent photo gallery.

5. The method of claim 1, wherein the plurality of measurements includes at least one of: a number of nearby friends, a relationship strength of a friend, a type of activity being performed, a magnitude of the activity, microphone cues, characters used in an instant message exchange, a typing speed, a typing pattern, a keyword used in an instant message exchange, conversation content, and a summary of viewport orientations.

6. The method of claim 1, wherein the set of image data includes a plurality of images of a scene of the virtual world, and wherein each image is captured from a distinct camera position.

7. The method of claim 1, wherein the set of image data includes a set of three-dimensional scene data describing each object depicted in the virtual environment at the time the set of image data is captured.

8. A computer-readable storage medium containing a program which, when executed, performs an operation for capturing image data depicting a virtual world, the operation comprising: monitoring a plurality of measurements regarding a user interacting with the virtual world; calculating a current value of a photo opportunity score from the plurality of measurements; comparing the current value of the photo opportunity score with a predetermined threshold; and upon determining that the current value of the opportunity score exceeds the predetermined threshold, capturing a set of image data of the virtual world.

9. The computer-readable storage medium of claim 8, wherein at least one of the plurality of measurements is weighted by a user-configurable weighting value.

10. The computer-readable storage medium of claim 8, wherein the operation further comprises storing the captured image data of the virtual world in a buffer, wherein the buffer is configured to store multiple sets of image data and a photo opportunity score corresponding to each set of image data.

11. The computer-readable storage medium of claim 10, wherein the operation further comprises: receiving a selection of one of the sets of image data from the buffer; and copying the selected set of image data to a permanent photo gallery.

12. The computer-readable storage medium of claim 8, wherein the plurality of measurements includes at least one of: a number of nearby friends, a relationship strength of a friend, a type of activity being performed, a magnitude of the activity, microphone cues, characters used in an instant message exchange, a typing speed, a typing pattern, a keyword used in an instant message exchange, conversation content, and a summary of viewport orientations.

13. The computer-readable storage medium of claim 8, wherein the set of image data includes a plurality of images of a scene of the virtual world, and wherein each image is captured from a distinct camera position.

14. The computer-readable storage medium of claim 8, wherein the set of image data includes a set of three-dimensional scene data describing each object depicted in the virtual environment at the time the set of image data is captured.

15. A data processing system, comprising: a processor; and a memory containing a program which, when executed by the processor, is configured to perform an operation for capturing image data depicting a virtual world, the operation comprising: monitoring a plurality of measurements regarding a user interacting with the virtual world; calculating a current value of a photo opportunity score from the plurality of measurements; comparing the current value of the photo opportunity score with a predetermined threshold; and upon determining that the current value of the opportunity score exceeds the predetermined threshold, capturing a set of image data of the virtual world.

16. The system of claim 15, wherein at least one of the plurality of measurements is weighted by a user-configurable weighting value.

17. The system of claim 15, wherein the operation further comprises storing the captured image data of the virtual world in a buffer, wherein the buffer is configured to store multiple sets of image data and a photo opportunity score corresponding to each set of image data.

18. The system of claim 17, wherein the operation further comprises: receiving a selection of one of the sets of image data from the buffer; and copying the selected set of image data to a permanent photo gallery.

19. The system of claim 15, wherein the plurality of measurements includes at least one of: a number of nearby friends, a relationship strength of a friend, a type of activity being performed, a magnitude of the activity, microphone cues, characters used in an instant message exchange, a typing speed, a typing pattern, a keyword used in an instant message exchange, conversation content, and a summary of viewport orientations.

20. The system of claim 15, wherein the set of image data includes a plurality of images of a scene of the virtual world, and wherein each image is captured from a distinct camera position.

21. The system of claim 15, wherein the set of image data includes a set of three-dimensional scene data describing each object depicted in the virtual environment at the time the set of image data is captured.
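Claims 1 through 4 read on a simple control loop: sample the monitored measurements, combine them into a weighted photo opportunity score, capture a set of images when the score exceeds the threshold, keep captured sets together with their scores in a bounded buffer, and let the user promote a selected set into a permanent gallery. The sketch below is a minimal, hedged illustration of that loop in Python; every name in it (Measurement, PhotoOpportunityMonitor, capture_scene, keep, and so on) is hypothetical and is not drawn from the patent, the TW/US filings, or any real virtual-world client API.

```python
# Minimal sketch of the claimed capture flow (all names are illustrative).
from collections import deque
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Measurement:
    """One monitored quantity about the user, e.g. nearby-friend count or typing speed."""
    read: Callable[[], float]   # returns the current normalized value (0..1)
    weight: float = 1.0         # user-configurable weighting value (claim 2)


@dataclass
class CaptureSet:
    """A set of image data plus the score at the moment of capture (claim 3)."""
    images: List[bytes]         # one image per distinct camera position (claim 6)
    score: float


class PhotoOpportunityMonitor:
    def __init__(self, measurements: Dict[str, Measurement],
                 threshold: float, buffer_size: int = 16):
        self.measurements = measurements
        self.threshold = threshold
        # Bounded buffer of capture sets; oldest sets are discarded first.
        self.buffer = deque(maxlen=buffer_size)
        self.gallery: List[CaptureSet] = []   # permanent photo gallery (claim 4)

    def current_score(self) -> float:
        """Weighted combination of the monitored measurements (claims 1 and 2)."""
        return sum(m.weight * m.read() for m in self.measurements.values())

    def tick(self, capture_scene: Callable[[], List[bytes]]) -> None:
        """Compare the current score with the threshold; capture a set if exceeded."""
        score = self.current_score()
        if score > self.threshold:
            self.buffer.append(CaptureSet(images=capture_scene(), score=score))

    def keep(self, index: int) -> None:
        """Copy a user-selected set from the buffer into the permanent gallery."""
        self.gallery.append(self.buffer[index])
```

Under these assumptions, a caller would register measurements (each as a callable plus a weight), invoke tick() periodically with a function that renders the scene from several camera positions, and later call keep() on whichever buffered set the user selects for permanent storage.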
TW098123488A 2008-07-29 2009-07-10 Image capture and buffering in a virtual world TWI438716B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/181,713 US8026913B2 (en) 2008-07-29 2008-07-29 Image capture and buffering in a virtual world

Publications (2)

Publication Number Publication Date
TW201009746A TW201009746A (en) 2010-03-01
TWI438716B true TWI438716B (en) 2014-05-21

Family

ID=41607876

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098123488A TWI438716B (en) 2008-07-29 2009-07-10 Image capture and buffering in a virtual world

Country Status (2)

Country Link
US (1) US8026913B2 (en)
TW (1) TWI438716B (en)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8127235B2 (en) 2007-11-30 2012-02-28 International Business Machines Corporation Automatic increasing of capacity of a virtual space in a virtual world
US20090164919A1 (en) 2007-12-24 2009-06-25 Cary Lee Bates Generating data for managing encounters in a virtual world environment
JP5159375B2 (en) 2008-03-07 2013-03-06 インターナショナル・ビジネス・マシーンズ・コーポレーション Object authenticity determination system and method in metaverse, and computer program thereof
US8893047B2 (en) * 2009-11-09 2014-11-18 International Business Machines Corporation Activity triggered photography in metaverse applications
US9205328B2 (en) 2010-02-18 2015-12-08 Activision Publishing, Inc. Videogame system and method that enables characters to earn virtual fans by completing secondary objectives
US9682324B2 (en) 2010-05-12 2017-06-20 Activision Publishing, Inc. System and method for enabling players to participate in asynchronous, competitive challenges
KR101230397B1 (en) * 2010-09-29 2013-02-07 (주) 인텍플러스 Method and Apparatus for Transmitting/Receiving Image Data with High Speed
US10137376B2 (en) 2012-12-31 2018-11-27 Activision Publishing, Inc. System and method for creating and streaming augmented game sessions
US9282244B2 (en) 2013-03-14 2016-03-08 Microsoft Technology Licensing, Llc Camera non-touch switch
US9066007B2 (en) 2013-04-26 2015-06-23 Skype Camera tap switch
US20140354880A1 (en) * 2013-06-03 2014-12-04 Microsoft Corporation Camera with Hall Effect Switch
US11184580B2 (en) 2014-05-22 2021-11-23 Microsoft Technology Licensing, Llc Automatically curating video to fit display time
US9503644B2 (en) 2014-05-22 2016-11-22 Microsoft Technology Licensing, Llc Using image properties for processing and editing of multiple resolution images
US9451178B2 (en) 2014-05-22 2016-09-20 Microsoft Technology Licensing, Llc Automatic insertion of video into a photo story
US10376792B2 (en) 2014-07-03 2019-08-13 Activision Publishing, Inc. Group composition matchmaking system and method for multiplayer video games
US11351466B2 (en) 2014-12-05 2022-06-07 Activision Publishing, Ing. System and method for customizing a replay of one or more game events in a video game
US10118099B2 (en) 2014-12-16 2018-11-06 Activision Publishing, Inc. System and method for transparently styling non-player characters in a multiplayer video game
US10486068B2 (en) 2015-05-14 2019-11-26 Activision Publishing, Inc. System and method for providing dynamically variable maps in a video game
US10286314B2 (en) 2015-05-14 2019-05-14 Activision Publishing, Inc. System and method for providing continuous gameplay in a multiplayer video game through an unbounded gameplay session
US10315113B2 (en) 2015-05-14 2019-06-11 Activision Publishing, Inc. System and method for simulating gameplay of nonplayer characters distributed across networked end user devices
US10213682B2 (en) 2015-06-15 2019-02-26 Activision Publishing, Inc. System and method for uniquely identifying physical trading cards and incorporating trading card game items in a video game
US10471348B2 (en) 2015-07-24 2019-11-12 Activision Publishing, Inc. System and method for creating and sharing customized video game weapon configurations in multiplayer video games via one or more social networks
US10099140B2 (en) 2015-10-08 2018-10-16 Activision Publishing, Inc. System and method for generating personalized messaging campaigns for video game players
US11185784B2 (en) 2015-10-08 2021-11-30 Activision Publishing, Inc. System and method for generating personalized messaging campaigns for video game players
US10232272B2 (en) 2015-10-21 2019-03-19 Activision Publishing, Inc. System and method for replaying video game streams
US10376781B2 (en) 2015-10-21 2019-08-13 Activision Publishing, Inc. System and method of generating and distributing video game streams
US10245509B2 (en) 2015-10-21 2019-04-02 Activision Publishing, Inc. System and method of inferring user interest in different aspects of video game streams
US10694352B2 (en) 2015-10-28 2020-06-23 Activision Publishing, Inc. System and method of using physical objects to control software access
US10226703B2 (en) 2016-04-01 2019-03-12 Activision Publishing, Inc. System and method of generating and providing interactive annotation items based on triggering events in a video game
US10226701B2 (en) 2016-04-29 2019-03-12 Activision Publishing, Inc. System and method for identifying spawn locations in a video game
US10179289B2 (en) 2016-06-21 2019-01-15 Activision Publishing, Inc. System and method for reading graphically-encoded identifiers from physical trading cards through image-based template matching
US10573065B2 (en) 2016-07-29 2020-02-25 Activision Publishing, Inc. Systems and methods for automating the personalization of blendshape rigs based on performance capture data
US10688392B1 (en) * 2016-09-23 2020-06-23 Amazon Technologies, Inc. Reusable video game camera rig framework
US10463964B2 (en) 2016-11-17 2019-11-05 Activision Publishing, Inc. Systems and methods for the real-time generation of in-game, locally accessible heatmaps
US10709981B2 (en) 2016-11-17 2020-07-14 Activision Publishing, Inc. Systems and methods for the real-time generation of in-game, locally accessible barrier-aware heatmaps
US10500498B2 (en) 2016-11-29 2019-12-10 Activision Publishing, Inc. System and method for optimizing virtual games
US10055880B2 (en) 2016-12-06 2018-08-21 Activision Publishing, Inc. Methods and systems to modify a two dimensional facial image to increase dimensional depth and generate a facial image that appears three dimensional
US10861079B2 (en) 2017-02-23 2020-12-08 Activision Publishing, Inc. Flexible online pre-ordering system for media
US10818060B2 (en) 2017-09-05 2020-10-27 Activision Publishing, Inc. Systems and methods for guiding motion capture actors using a motion reference system
US10561945B2 (en) 2017-09-27 2020-02-18 Activision Publishing, Inc. Methods and systems for incentivizing team cooperation in multiplayer gaming environments
US10974150B2 (en) 2017-09-27 2021-04-13 Activision Publishing, Inc. Methods and systems for improved content customization in multiplayer gaming environments
US11040286B2 (en) 2017-09-27 2021-06-22 Activision Publishing, Inc. Methods and systems for improved content generation in multiplayer gaming environments
US10406754B2 (en) * 2017-10-03 2019-09-10 Jabil Inc. Apparatus, system and method of monitoring an additive manufacturing environment
US10537809B2 (en) 2017-12-06 2020-01-21 Activision Publishing, Inc. System and method for validating video gaming data
US10463971B2 (en) 2017-12-06 2019-11-05 Activision Publishing, Inc. System and method for validating video gaming data
US10981051B2 (en) 2017-12-19 2021-04-20 Activision Publishing, Inc. Synchronized, fully programmable game controllers
US11278813B2 (en) 2017-12-22 2022-03-22 Activision Publishing, Inc. Systems and methods for enabling audience participation in bonus game play sessions
US10596471B2 (en) 2017-12-22 2020-03-24 Activision Publishing, Inc. Systems and methods for enabling audience participation in multi-player video game play sessions
US10864443B2 (en) 2017-12-22 2020-12-15 Activision Publishing, Inc. Video game content aggregation, normalization, and publication systems and methods
US11263670B2 (en) 2018-11-19 2022-03-01 Activision Publishing, Inc. Systems and methods for dynamically modifying video game content based on non-video gaming content being concurrently experienced by a user
US11192028B2 (en) 2018-11-19 2021-12-07 Activision Publishing, Inc. Systems and methods for the real-time customization of video game content based on player data
US20200196011A1 (en) 2018-12-15 2020-06-18 Activision Publishing, Inc. Systems and Methods for Receiving Digital Media and Classifying, Labeling and Searching Offensive Content Within Digital Media
US11679330B2 (en) 2018-12-18 2023-06-20 Activision Publishing, Inc. Systems and methods for generating improved non-player characters
US11305191B2 (en) 2018-12-20 2022-04-19 Activision Publishing, Inc. Systems and methods for controlling camera perspectives, movements, and displays of video game gameplay
US11344808B2 (en) 2019-06-28 2022-05-31 Activision Publishing, Inc. Systems and methods for dynamically generating and modulating music based on gaming events, player profiles and/or player reactions
US11097193B2 (en) 2019-09-11 2021-08-24 Activision Publishing, Inc. Methods and systems for increasing player engagement in multiplayer gaming environments
US11423605B2 (en) 2019-11-01 2022-08-23 Activision Publishing, Inc. Systems and methods for remastering a game space while maintaining the underlying game simulation
US11712627B2 (en) 2019-11-08 2023-08-01 Activision Publishing, Inc. System and method for providing conditional access to virtual gaming items
US11537209B2 (en) 2019-12-17 2022-12-27 Activision Publishing, Inc. Systems and methods for guiding actors using a motion capture reference system
US11420122B2 (en) 2019-12-23 2022-08-23 Activision Publishing, Inc. Systems and methods for controlling camera perspectives, movements, and displays of video game gameplay
US11563774B2 (en) 2019-12-27 2023-01-24 Activision Publishing, Inc. Systems and methods for tracking and identifying phishing website authors
US11524234B2 (en) 2020-08-18 2022-12-13 Activision Publishing, Inc. Multiplayer video games with virtual characters having dynamically modified fields of view
US11351459B2 (en) 2020-08-18 2022-06-07 Activision Publishing, Inc. Multiplayer video games with virtual characters having dynamically generated attribute profiles unconstrained by predefined discrete values
US11717753B2 (en) 2020-09-29 2023-08-08 Activision Publishing, Inc. Methods and systems for generating modified level of detail visual assets in a video game
US11833423B2 (en) 2020-09-29 2023-12-05 Activision Publishing, Inc. Methods and systems for generating level of detail visual assets in a video game
US11724188B2 (en) 2020-09-29 2023-08-15 Activision Publishing, Inc. Methods and systems for selecting a level of detail visual asset during the execution of a video game
US11439904B2 (en) 2020-11-11 2022-09-13 Activision Publishing, Inc. Systems and methods for imparting dynamic and realistic movement to player-controlled avatars in video games
US12097430B2 (en) 2020-12-28 2024-09-24 Activision Publishing, Inc. Methods and systems for generating and managing active objects in video games
US12064688B2 (en) 2020-12-30 2024-08-20 Activision Publishing, Inc. Methods and systems for determining decal projections intersecting spatial units in a frame of a game space
US11853439B2 (en) 2020-12-30 2023-12-26 Activision Publishing, Inc. Distributed data storage system providing enhanced security
US11794107B2 (en) 2020-12-30 2023-10-24 Activision Publishing, Inc. Systems and methods for improved collision detection in video games

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4065507B2 (en) * 2002-07-31 2008-03-26 キヤノン株式会社 Information presentation apparatus and information processing method
US20080158242A1 (en) * 2007-01-03 2008-07-03 St Jacques Kimberly Virtual image preservation
US8099668B2 (en) * 2008-01-07 2012-01-17 International Business Machines Corporation Predator and abuse identification and prevention in a virtual environment

Also Published As

Publication number Publication date
US8026913B2 (en) 2011-09-27
US20100026716A1 (en) 2010-02-04
TW201009746A (en) 2010-03-01

Similar Documents

Publication Publication Date Title
TWI438716B (en) Image capture and buffering in a virtual world
US8022948B2 (en) Image capture and buffering in a virtual world using situational measurement averages
US20230016490A1 (en) Systems and methods for virtual and augmented reality
JP7181316B2 (en) Eye Tracking with Prediction and Latest Updates to GPU for Fast Foveal Rendering in HMD Environments
US11340694B2 (en) Visual aura around field of view
JP2023052243A (en) Interaction with 3d virtual object using posture and plural dof controllers
KR20230048152A (en) Methods for manipulating objects in the environment
US9789403B1 (en) System for interactive image based game
US8516396B2 (en) Object organization based on user interactions within a virtual environment
WO2020168681A1 (en) Information processing method, information processing apparatus, electronic device and storage medium
JP2021524102A (en) Dynamic graphics rendering based on predicted saccade landing points
US20220277529A1 (en) Caching and updating of dense 3d reconstruction data
TWI669635B (en) Method and device for displaying barrage and non-volatile computer readable storage medium
KR20240134054A (en) Selecting virtual objects in a three-dimensional space
EP4248413A1 (en) Multiple device sensor input based avatar
US20230247178A1 (en) Interaction processing method and apparatus, terminal and medium
US9799141B2 (en) Display device, control system, and control program
US20130080976A1 (en) Motion controlled list scrolling
WO2024060953A1 (en) Method and apparatus for augmented reality interaction, device, and storage medium
WO2018140397A1 (en) System for interactive image based game
US20240371110A1 (en) Augmented reality mood board manipulation
US20240370159A1 (en) Augmented reality mood board
US11995753B2 (en) Generating an avatar using a virtual reality headset
US20240312142A1 (en) Mixed-reality social network avatar via congruent learning
WO2024228914A1 (en) Augmented reality mood board manipulation