TW201030673A - Display device, display method, information recording medium, and program - Google Patents

Display device, display method, information recording medium, and program Download PDF

Info

Publication number
TW201030673A
TW201030673A (application TW098133438A)
Authority
TW
Taiwan
Prior art keywords
viewpoint
virtual space
field
input
orientation
Prior art date
Application number
TW098133438A
Other languages
Chinese (zh)
Inventor
Shunta Magarifuchi
Original Assignee
Konami Digital Entertainment
Priority date
Filing date
Publication date
Application filed by Konami Digital Entertainment filed Critical Konami Digital Entertainment
Publication of TW201030673A publication Critical patent/TW201030673A/en

Links

Classifications

    • A63F13/10
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61BRAILWAY SYSTEMS; EQUIPMENT THEREFOR NOT OTHERWISE PROVIDED FOR
    • B61B12/00Component parts, details or accessories not provided for in groups B61B7/00 - B61B11/00
    • B61B12/10Cable traction drives
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45Controlling the progress of the video game
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/20Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform
    • A63F2300/204Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform the platform being a handheld device
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/301Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device using an additional display connected to the game console, e.g. on the controller
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02PCONTROL OR REGULATION OF ELECTRIC MOTORS, ELECTRIC GENERATORS OR DYNAMO-ELECTRIC CONVERTERS; CONTROLLING TRANSFORMERS, REACTORS OR CHOKE COILS
    • H02P5/00Arrangements specially adapted for regulating or controlling the speed or torque of two or more electric motors
    • H02P5/68Arrangements specially adapted for regulating or controlling the speed or torque of two or more electric motors controlling two or more dc dynamo-electric motors

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention allows easy, detailed viewing of an object of interest among the objects placed in a virtual space. A display unit (202) displays, in the area of the screen corresponding to a transparent region of user interest on a plane of predetermined shape, an image generated by projecting the objects placed in the virtual space onto that transparent region. A selection instruction accepting unit (203) accepts an instruction input for selecting one of the displayed objects. A direction specification accepting unit (204) accepts an input specifying the direction in which the viewpoint is to move in the virtual space. A rotating unit (205) rotates the objects in the virtual space in the specified direction around the selected object. The area of the screen corresponding to the transparent region thus displays an image of the virtual space that looks as if the viewpoint were moving in the direction specified by the user.
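As a rough, non-authoritative illustration of the mechanism summarized above, the following sketch rotates object positions about a selected object's centre, around an axis derived from the viewing direction and the user-specified movement direction (the embodiment later derives such an axis, perpendicular to both). All names and the use of plain Python tuples are assumptions made for illustration.

```python
import math

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / n for c in v)

def _rotate_about_axis(p, axis, angle):
    """Rodrigues' rotation of vector p about a unit axis by `angle` radians."""
    k = _normalize(axis)
    c, s = math.cos(angle), math.sin(angle)
    kxp = _cross(k, p)
    kdp = sum(k[i] * p[i] for i in range(3))
    return tuple(p[i] * c + kxp[i] * s + k[i] * kdp * (1 - c) for i in range(3))

def rotate_scene_about_object(object_positions, center, viewpoint, move_dir, angle):
    """Rotate every object position about `center` so the view appears to move
    in `move_dir`; the axis is perpendicular to the view direction and move_dir."""
    view_dir = _normalize(tuple(center[i] - viewpoint[i] for i in range(3)))
    axis = _cross(view_dir, _normalize(move_dir))
    rotated = []
    for pos in object_positions:
        rel = tuple(pos[i] - center[i] for i in range(3))
        # Opposite sign: the scene is turned against the specified direction so that
        # the viewpoint itself appears to travel in that direction.
        new_rel = _rotate_about_axis(rel, axis, -angle)
        rotated.append(tuple(center[i] + new_rel[i] for i in range(3)))
    return rotated
```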

Description

201030673
VI. Description of the Invention

[Technical Field]
The present invention relates to a display device, a display method, an information recording medium, and a program suited to letting a user easily examine, in detail, an object of interest among the objects arranged in a virtual space.

[Prior Art]
For example, Patent Document 1 discloses a technique in which objects arranged in a virtual space, such as enemy characters, bombs, or traps, are displayed enlarged on the game screen so that the user can examine them.

[Prior Art Documents]
(Patent Documents)
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2005-095347

[Summary of the Invention]
[Problems to Be Solved by the Invention]
However, merely enlarging an object does not let the user see, for example, the state of its back side. There is therefore a strong demand to present the appearance of the parts of an object that are not drawn on the screen to the user in an easily viewable manner.

The present invention solves this problem. Its purpose is to provide a display device, a display method, an information recording medium, and a program suited to letting a user easily examine, in detail, an object of interest among the objects arranged in a virtual space.

[Means for Solving the Problems]
To achieve the above object, a display device according to a first aspect of the present invention comprises a recording unit, a display unit, a selection instruction receiving unit, a direction designation receiving unit, and a rotating unit.

The recording unit records the positions and orientations of the objects arranged in the virtual space, the position of the viewpoint, and the position and orientation of a projection surface of predetermined size. Here, the virtual space is, for example, a three-dimensional virtual space serving as the stage of a game, and the objects are, in general, the characters, background, and the like of the game. Information such as that on the projection surface is typically recorded in a cartridge.

The display unit displays an image obtained by perspective projection of the virtual space from the viewpoint onto the projection surface; the projection surface thus acts as a window through which the virtual space is viewed. An object is selected, for example, by touching it on a touch panel or the like, and the direction in which the viewpoint is to move can be designated by sliding the finger touching the touch panel in that direction.

The selection instruction receiving unit accepts a selection instruction input indicating that one of the objects contained in the generated image (hereinafter, "visible objects") is to be selected. The selection instruction receiving unit, for example, detects a press on the touch panel or on an operation key to identify the one object, among the displayed objects, that the user wants to select; every object in the virtual space can serve as a visible object.

The direction designation receiving unit accepts a direction designation input that designates the direction in which the viewpoint is to move in the virtual space. The direction designation receiving unit, for example, detects the user's press on the touch panel or on an operation key and thereby accepts a direction designation input designating the direction in which the viewpoint arranged in the virtual space is to move.
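The embodiment later sketches the selection mechanism concretely: an array held in work RAM records, for each pixel of a 480 by 640 screen, which object was projected there, and the touched pixel is looked up in that array. The following is a minimal illustrative sketch of that idea, not the patented implementation; the screen dimensions and names are assumptions.

```python
# Minimal sketch of touch-based object selection using a per-pixel object-ID buffer.
# The 480x640 screen size and all names are taken as illustrative assumptions.

SCREEN_W, SCREEN_H = 640, 480

def make_id_buffer():
    # 0 means "no object was projected onto this pixel".
    return [[0] * SCREEN_W for _ in range(SCREEN_H)]

def record_projection(id_buffer, x, y, object_id):
    """Called while drawing: remember which object covers pixel (x, y)."""
    if 0 <= x < SCREEN_W and 0 <= y < SCREEN_H:
        id_buffer[y][x] = object_id

def pick_object(id_buffer, touch_x, touch_y):
    """Selection instruction receiving: map a touched position to an object ID, or None."""
    if not (0 <= touch_x < SCREEN_W and 0 <= touch_y < SCREEN_H):
        return None
    object_id = id_buffer[touch_y][touch_x]
    return object_id if object_id != 0 else None
```

In the embodiment, a touch that falls outside the designated transparent region is simply not accepted as a selection, which would correspond to an extra check before the lookup above.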

The rotating unit rotates the viewpoint and the projection surface, relative to the objects arranged in the virtual space, in the designated direction about the selected visible object. In order to present the virtual space as if it were rotating about the visible object selected by the user in accordance with the designated direction, the rotating unit rotates the virtual space and the viewpoint relative to each other in the designated direction. That is, centered on the visible object selected by the user, the virtual space itself may be rotated, or the viewpoint and the projection surface may be moved around that visible object; the two may also be combined. The center of the visible object may be, for example, the object's center of gravity, or the position at which the user touched the object.

In this way, according to the present invention, the user can observe the details of the object of interest from various angles.

To achieve the above object, a display device according to another aspect of the present invention comprises a recording unit, a display unit, a selection instruction receiving unit, a direction designation receiving unit, and a rotating unit.

The recording unit records the positions and orientations of the objects arranged in the virtual space, the position of the viewpoint, and the position and orientation of a surface of predetermined shape that has a transparent region and an opaque region.

The display unit displays each object for which the line segment connecting the position of the object and the position of the viewpoint passes through the transparent region (hereinafter, a "visible object"), as an image projected at the intersection of that line segment and the transparent region.

The selection instruction receiving unit accepts a selection instruction input indicating that one of the visible objects is to be selected.

The direction designation receiving unit accepts a direction designation input that designates the direction in which the viewpoint is to move in the virtual space.

The rotating unit rotates the positions and orientations of the objects arranged in the virtual space about the selected visible object, in accordance with the designated direction.

Here, the surface of predetermined shape corresponds to the projection surface and is arranged between the viewpoint and the virtual space. The display unit displays the part of the virtual space that is projected onto the transparent region when the virtual space is viewed from the position of the viewpoint through the surface of predetermined shape. The transparent region thus acts as a window through which the virtual space is viewed.
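To make the "visible object" test concrete, here is a minimal sketch (illustrative code, not the patent's implementation) that intersects the viewpoint-to-object segment with the plane carrying the surface of predetermined shape and checks whether the hit point falls inside the transparent region; how the transparent region is represented is left to a callback, which is an assumption of this sketch.

```python
def segment_plane_hit(viewpoint, obj_pos, plane_point, plane_normal):
    """Return the point where the segment viewpoint->obj_pos crosses the plane, or None."""
    d = tuple(obj_pos[i] - viewpoint[i] for i in range(3))
    denom = sum(plane_normal[i] * d[i] for i in range(3))
    if abs(denom) < 1e-9:
        return None  # segment parallel to the plane
    t = sum(plane_normal[i] * (plane_point[i] - viewpoint[i]) for i in range(3)) / denom
    if not (0.0 <= t <= 1.0):
        return None  # plane does not lie between viewpoint and object
    return tuple(viewpoint[i] + t * d[i] for i in range(3))

def is_visible_object(viewpoint, obj_pos, plane_point, plane_normal, in_transparent_region):
    """An object is 'visible' when its segment to the viewpoint crosses the transparent region."""
    hit = segment_plane_hit(viewpoint, obj_pos, plane_point, plane_normal)
    return hit is not None and in_transparent_region(hit)
```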

The user selects one of the objects displayed on the touch panel or the like by touching it. The user also designates the direction in which the viewpoint is to move by sliding the finger touching the touch panel in that direction. To make it appear as if the viewpoint were rotating, the rotating unit rotates the positions and orientations of the objects arranged in the virtual space, based on the designated direction, about the selected object (for example, about the object's center of gravity, or about the position at which the user touched the object). As described later, all of the objects in the virtual space may be rotated, or only some of them (namely, the visible objects).

In this way, according to the present invention, by rotating the positions and orientations of the objects arranged in the virtual space about the selected object based on the designated direction, the user can observe the details of the object of interest from various angles.

In the display device, the rotating unit may also operate so that the objects whose positions and orientations are rotated are the visible objects.

That is, rather than rotating all of the objects in the virtual space, the rotating unit may extract the visible objects, that is, the objects projected onto the transparent region before the rotation, and rotate only those. This reduces the number of objects to be rotated and thus the computational load.

In the display device, the rotating unit may also abort the rotation when, upon rotating the position and orientation of an object other than a visible object (hereinafter, an "invisible object"), the distance between that invisible object and the viewpoint becomes shorter than the distance between the visible object and the viewpoint, and the line segment connecting the invisible object and the viewpoint passes through the transparent region.

That is, according to the present invention, the objects arranged in the virtual space are rotated about the selected object based on the designated direction, but after the rotation an object that was not displayed before the rotation may end up projected onto the transparent region of the surface of predetermined shape and hide a visible object. Since the visible objects are the objects the user wants to pay attention to, it is undesirable for a visible object to be hidden.

Therefore, when the rotating unit rotates the objects based on the designated direction, the rotation is aborted if, among the objects projected onto the transparent region, any invisible object would end up closer to the viewpoint than any of the visible objects. This prevents an invisible object from hiding a visible object, so that even when the scene is rotated about the selected object, that object is not hidden.

The display device may further comprise a region designation receiving unit that receives a region designation input designating the transparent region and the opaque region on the surface of predetermined shape.

That is, just as the user can point out the part of the virtual space to be examined, the user can also designate where on the surface of predetermined shape the transparent region lies. The position of the transparent region is typically designated, for example, by drawing a circle with an input device such as a touch pen or a mouse so as to enclose a closed region. If the region to be rotated and displayed is made small, the object currently being examined becomes easier to make out. Moreover, keeping the rotated area narrow reduces so-called 3D sickness (the brief discomfort a user watching the screen feels when a 3D game screen rotates or moves violently).

The display unit may also generate an image in which objects whose line segment connecting the rotated object position and the viewpoint position passes through the transparent region are projected at the intersection of that line segment and the transparent region, while objects whose line segment connecting the pre-rotation object position and the viewpoint position passes through the opaque region are projected at the intersection of that line segment and the opaque region.

In other words, the display unit may generate an image composed of the state of the virtual space before the objects were rotated, projected onto the opaque region of the predetermined surface, and the state of the virtual space after the objects were rotated, projected onto the transparent region of the predetermined surface.

In this way, both the part currently being examined and the rest are displayed, which meets the user's wish to also glance at the part that is not being examined. Furthermore, because the generated image rotates only the part being examined while the rest remains unchanged, the part of interest appears to stand out, making it easier to identify.
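A minimal sketch of the "rotate only what is in the window" rendering just described: the pre-rotation scene fills the opaque pixels and the rotated scene fills the transparent pixels. The image and mask representations are assumptions for illustration.

```python
def compose_frame(framebuffer, transparent_mask, image_before, image_after):
    """Fill opaque pixels from the pre-rotation image and transparent pixels from
    the rotated image, as in the 'show both regions' variation described above."""
    for y, row in enumerate(transparent_mask):
        for x, is_transparent in enumerate(row):
            framebuffer[y][x] = image_after[y][x] if is_transparent else image_before[y][x]
    return framebuffer
```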

The region designation receiving unit may also refuse the region designation input when the received transparent region is larger than a predetermined area.

That is, the region designation receiving unit has the user designate, as the transparent region, a region smaller than the predetermined area. As a result, when only the visible objects are rotated, the amount of computation can be reduced. The area used to prevent 3D sickness can also be set in advance.
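A hedged sketch of how such acceptance checks might look, combining the area limit above with the visible-object count limit described in the next paragraph; both thresholds and the pixel-set representation are invented for illustration.

```python
MAX_TRANSPARENT_AREA = 120 * 120   # assumed threshold, in pixels
MAX_VISIBLE_OBJECTS = 8            # assumed threshold

def accept_region_designation(region_pixels, count_visible_objects):
    """Accept a designated transparent region only if it is small enough and does not
    contain too many visible objects (both limits are illustrative assumptions)."""
    if len(region_pixels) > MAX_TRANSPARENT_AREA:
        return False                       # region larger than the predetermined area
    if count_visible_objects(region_pixels) > MAX_VISIBLE_OBJECTS:
        return False                       # too many visible objects in the region
    return True
```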

The region designation receiving unit may also refuse the region designation input when the number of visible objects exceeds a predetermined number.

That is, the region designation receiving unit accepts the region designation input when the user designates the transparent region so that the number of visible objects is smaller than the predetermined number. As in the case where the area of the transparent region is limited, this reduces the amount of computation when only the visible objects are rotated, and the extent used to prevent 3D sickness can likewise be set in advance.

A display method according to another aspect of the present invention is carried out by a display device comprising a recording unit, a display unit, a selection instruction receiving unit, a direction designation receiving unit, and a rotating unit, and comprises a display step, a selection instruction receiving step, a direction designation receiving step, and a rotation step.

The recording unit records the positions and orientations of the objects arranged in the virtual space, the position of the viewpoint, and the position and orientation of a projection surface of predetermined size.

In the display step, the display unit displays the objects as an image obtained by perspective projection from the viewpoint onto the projection surface.

In the selection instruction receiving step, the selection instruction receiving unit accepts a selection instruction input indicating that one of the objects contained in the generated image (hereinafter, visible objects) is to be selected.

In the direction designation receiving step, the direction designation receiving unit accepts a direction designation input that designates the direction in which the viewpoint is to move in the virtual space.

In the rotation step, the rotating unit rotates the viewpoint and the projection surface, relative to the objects arranged in the virtual space, in the designated direction about the selected visible object.

A display method according to yet another aspect of the present invention is carried out by a display device comprising a recording unit, a display unit, a selection instruction receiving unit, a direction designation receiving unit, and a rotating unit, and comprises a display step, a selection instruction receiving step, a direction designation receiving step, and a rotation step.

The recording unit records the positions and orientations of the objects arranged in the virtual space, the position of the viewpoint, and the position and orientation of a surface of predetermined shape that has a transparent region and an opaque region.

In the display step, the display unit displays each object for which the line segment connecting the position of the object and the position of the viewpoint passes through the transparent region (hereinafter, a visible object), as an image projected at the intersection of that line segment and the transparent region.

In the selection instruction receiving step, the selection instruction receiving unit accepts a selection instruction input indicating that one of the visible objects is to be selected.

In the direction designation receiving step, the direction designation receiving unit accepts a direction designation input that designates the direction in which the viewpoint is to move in the virtual space.

In the rotation step, the rotating unit rotates the positions and orientations of the objects arranged in the virtual space about the selected visible object, in accordance with the designated direction.

A program according to another aspect of the present invention is configured to cause a computer to function as the display device described above.

The program of the present invention can be recorded on a computer-readable information recording medium such as a compact disc, a flexible disk, a hard disk, a magneto-optical disk, a digital video disc, a magnetic tape, or a semiconductor memory. The program can be distributed and sold over a computer communication network independently of the computer on which it is executed, and the information recording medium can likewise be distributed and sold independently of that computer.

[Effects of the Invention]
According to the present invention, it is possible to provide a display device, a display method, an information recording medium, and a program suited to letting a user easily examine, in detail, an object of interest among the objects arranged in a virtual space.

[Embodiments]
Embodiments of the present invention are described below. For ease of understanding, the embodiments are described using a portable game machine to which the present invention is applied, but the present invention is equally applicable to various computers, PDAs, mobile phones, and the like.
That is, the embodiments described below are for explaining the present invention, but are not intended to limit the scope of the invention of the present application. Therefore, the embodiment in which each element or all of these elements are replaced by an equalizer can be employed, and these embodiments are also included in the scope of the present invention. [First Embodiment] Fig. 1 is a schematic view showing a schematic configuration of a typical portable game machine realized by a display device according to an embodiment of the present invention. Hereinafter, the description will be made with reference to this figure. The portable game machine 1 includes a processing control unit 1 , a connector 11 , a cassette 12 , a wireless communication unit 13 , a communication controller 14 , a voice clipper 15 , a speaker 16 , an operation key 17 , and a first display unit 18 . The second display portion 19 and the touch panel 20 are provided. The processing control unit 10 includes a CPU (Central Processing Unit) core 10a, a video processing unit i〇b, a VRAM (Video Random Access Memory) 1〇c, and a WRAM (Working Random Access Memory). (}, LCD (Liquid Crystal Display) controller l〇e, and touch panel controller i〇f. CPU core l〇a, controls the overall operation of the portable game machine i, and is connected with each component to exchange Control signal or data. Specifically, 12 201030673, the program or data recorded in the ROM (read only memory) 12a in the cassette 12 is read out while the cassette 12 is mounted in the connector u. The image processing unit 10b stores the data read by the ROM 12a in the cassette 12 or the data processed in the CPU core i〇a, and then stores it. In the VRAM 10c, the VRAM 1 〇c is a frame for recording the information for display, and records the image information processed by the image processing unit 10b, etc. WRAM 1 〇d is recorded as the core j 〇a to carry out the work materials necessary for the processing of various compliance programs The LCD controller 10e' controls the first display unit 18 and the second display unit 19 to display a predetermined image for display. The LCD controller 1〇e is to view the image information recorded by l〇c. The synchronization timing is changed to the display number and output to the first display unit 18. Further, the LCD controller 10e displays a predetermined finger image (ic〇n) or the like on the third display unit 19. The touch panel controller 10f detects contact (touch) with the touch panel 20 by touching a pen or a user's finger. For example, a predetermined indication image is displayed on the second display portion 19. In the state, (4) contact on the touch panel 20' and the position. The connector 11' is capable of being connected to the card E 12 by means of a terminal that is freely detachably attached to the card 12 At the same time, the predetermined data is transmitted and received between the cassette 12 and the cassette 12. The cassette 12 is provided with R〇M and ram (random access memory) 12b. 13 201030673 and the condition is recorded in the ROM 12a. There are programs for realizing the game, video data and sound data attached to the game. In RAM 12b It is a device that records various types of information that can be used to describe the progress of the game. The wireless communication unit 13 is a unit that performs wireless communication with the wireless communication unit η of another portable game machine ι. 
It transmits and receives predetermined data through an antenna (a built-in antenna or the like) not shown in the drawing.

另外,無線通信冑13,係能夠與預定的存取點之間進 行無線LAN通信。又,在無線通信部13上係以固有的 MAC(媒介存取控制)位址加以編號。 通k控制器14,係控制無線通信部13,並遵從預定的 協定’以#為在處理㈣冑1()肖其他的攜帶型遊戲機ι的 處理控制部10之間進行通信的媒介。 聲音增幅器15,係將在處理控制部1〇中所產生的聲 音信號加以增幅,並供給至揚聲器16。 揚聲器16’例如,係由立體聲揚聲器…“的speaker) 等所構成,並遵從在聲音增幅器15中所增幅的聲音信號, 以輸出預定的樂曲音或效果音等。 操作鍵17,係由適當地被配置在攜帶型遊戲機丨上的 複數個按鍵開關等所構成,並遵從使用者的操作,以接收 預定的指示輸入。 第一顯示部18及第二顯示部19,係由LCD等所構成, 並由LCD控制器加以控制,以適當地顯示遊戲影像等。 另外,第二顯示部19,係顯示有藉由觸控面板2〇的 201030673 接觸,用以輸入來自使用者的操作指示之指示圖像等。 觸控面板20,係在第二顯示部19的前面與其重疊而 被配置’以接收藉由觸碰筆或使用者的手指的接觸而來的 輸入。 觸控面板20,例如,係由感壓式的觸碰感測面板等所 構成’以偵測使用者的手指等的壓力,幻貞測接觸狀態及 由接觸狀態變成非接觸狀態之㈣。另夕卜,觸控面板2〇, 也能夠由其他的靜電容量的變化,以偵測使用者的手指等 的接觸。 第2圖係表示攜帶型遊戲機1之外觀圖。 另外,關於本實施形態之顯示裝置,雖然係在上述典 型的攜帶型遊戲機丨上加以實現,但是也能夠在一般的電 腦或遊戲裝置上加以實現。一般的電腦或遊戲裝置,與上 述的攜帶型遊戲機1相同,係具備CPU核心、或VRAM、 WRAM »又,也能夠利用,例如,以構成lAN(區域網路) 之際所使用的10BASE-T/l〇〇BASE_T等規格作為依據的 NIC(網路介面控制器),來作為通信部;以及利用硬碟、或 其他的DVD-ROM、光磁碟,來作為記錄裝置。又,也利 用鍵盤或滑鼠來取代觸控面板,以作為輸入裝置。然後, 在程式安裝之後,一旦實行該程式,則能夠作為顯示裝置 來發揮機能。 另外,以下只要不註記,關於本實施形態的顯示裝置, 係藉由第1圖所示的攜帶型遊戲機i來加以說明。顯示裝 置,係能夠因應必要而置換成適當一般的電腦、或遊戲裝 15 201030673 置的要素’且這些實施形態也包含在本發明的範圍中。 以下’針對顯示裝置200的構成,係參照第3圖加以 說明。 關於本實施形態之顯示裝置200,係具備記錄部2〇1、 顯示部202、選擇指示接枚部2〇3、方向指定接收部2〇4、 旋轉部205、及領域指定接收部2〇6等。 在這裡,記錄部201 ’例如,係記錄關於成為遊戲舞 •台之三次元虛擬空間的資訊。在這個虛擬空間内,例如, 如第4圖所示,係配置有物件(例如,4〇1A、4〇ib、4〇ic 等)、觀看虛擬空間之視點4 1 〇(亦即照相機)、以及具有透 明領域420A及不透明領域420B之預定形狀的面420等。Further, the wireless communication unit 13 is capable of performing wireless LAN communication with a predetermined access point. Further, the wireless communication unit 13 is numbered by a unique MAC (Media Access Control) address. The controller k controls the wireless communication unit 13 and follows the predetermined protocol '# as the medium for performing communication between the processing control units 10 of the other portable game machines ι (1). The sound amplifier 15 amplifies the sound signal generated in the process control unit 1A and supplies it to the speaker 16. The speaker 16' is constituted, for example, by a stereo speaker ... "speaker" or the like, and follows the sound signal amplified in the sound amplifier 15 to output a predetermined musical piece or effect sound, etc. The operation key 17 is appropriately The plurality of push buttons and the like arranged on the portable game machine are configured to receive a predetermined instruction input in accordance with a user's operation. The first display unit 18 and the second display unit 19 are provided by an LCD or the like. And configured by the LCD controller to properly display the game image, etc. In addition, the second display unit 19 displays the 201030673 contact through the touch panel 2 for inputting an operation instruction from the user. The touch panel 20 is disposed to overlap the front surface of the second display unit 19 to receive an input by contact with a touch pen or a user's finger. The touch panel 20, for example, It is formed by a pressure-sensitive touch sensing panel or the like to detect the pressure of the user's finger or the like, and to detect the contact state and the contact state to the non-contact state (4). In addition, the touch panel 2〇, it is also possible to detect the contact of the user's finger or the like by the change of the other electrostatic capacitance. Fig. 2 is an external view showing the portable game machine 1. Further, the display device of the present embodiment is It is implemented in the above-described typical portable game machine, but can also be implemented in a general computer or game device. 
A general computer or game device is the same as the portable game machine 1 described above, and has a CPU core, or VRAM, WRAM » In addition, it is also possible to use, for example, a NIC (Network Interface Controller) based on specifications such as 10BASE-T/l〇〇BASE_T used to form a lAN (regional network). And use a hard disk, or other DVD-ROM, optical disk as a recording device. Also, use a keyboard or mouse to replace the touch panel as an input device. Then, after the program is installed, once By executing this program, it is possible to function as a display device. Note that the display device of the present embodiment is the portable game machine shown in Fig. 1 as long as it is not noted. In the following description, the display device can be replaced with a suitable general computer or an element of the game device 15 201030673, and these embodiments are also included in the scope of the present invention. The display device 200 of the present embodiment includes a recording unit 2〇1, a display unit 202, a selection instruction unit 2〇3, a direction designation receiving unit 2〇4, and a rotation unit 205. And the field designation receiving unit 2〇6, etc. Here, the recording unit 201' records, for example, information about a three-dimensional virtual space that becomes a game dance station. In this virtual space, for example, as shown in FIG. The object is configured with an object (for example, 4〇1A, 4〇ib, 4〇ic, etc.), a viewpoint 4 1 〇 (ie, a camera) for viewing the virtual space, and a surface having a predetermined shape of the transparent field 420A and the opaque field 420B. 420 and so on.

預定形狀的面420,係相當於虛擬空間要被投影的投 影面。雖然在本實施形態中,係由平面所構成,但是其形 狀不受限於此。典型的預定形狀的面42〇,係被配置在視 _ 點410與虛擬空間内的物件(例如,4〇1A 、401B、401C 等) 之間。再者,視線方向415的朝向,係通過預定形狀的面 420的中心,且與預定形狀的面420相垂直的向量的朝向 成為一致。 另外’在預定形狀的面420上所配置的透明領域 420A、以及不透明領域42〇b的邊界係一致,如後述可 由使用者加以指定。 要被配置在虛擬空間内的物件的形狀、位置和朝向、 視點的位置、以及預定形狀的面42〇的位置和朝向等資 料’係預先記錄在卡匣12内的R〇M 12a中。處理控制部 16 201030673 ίο’係由適當的卡g 12讀出這些資料,然後記錄在wram 剛等中。旋轉部205’係對應於來自使用者的指示以更 新物件的形狀、位置和朝向等的資料。處理控制部1〇、連 接器11、及卡H 12,係協同動作以作為記錄部2〇1而發揮 機能。 • 顯示部202 ’係在藉由後述的領域指定接收部206而 接收到用以#定透明領域之輸入之場合,則產生與以下的 籲預定形狀的面420具有相同尺寸之影像。亦即,所產生的 影像當中,在預定形狀的面42G的透明領域所對應的位置 上,係描繪被投影在預定形狀的面420的透明領域上的虛 擬空間的影像的透明領域的部分。又,所產生的影像當中, 在透明領域以外的領域(不透明領域)所對應的位置上係 產生描繪了用以表現黑暗之預定顏色(例如,黑色)之影 像。然後,顯示部202,係將產生後的影像,在第二顯示 籲部19(或者’第-顯示部18)上加以顯示。亦即,雖然預定 形狀的面420係被描述為相當於投影面,但是在本實施例 的形態中,實際作為投影面來發揮機能者,係預定形狀的 面420每中的透明領域。另外,也能夠將整個的預定形狀 .的面42〇設為透明領域。處理控制部1〇及第二顯示部19(或 者,第一顯示部U)’係協同動作以作為顯示部2〇2而發揮 機能。 選擇指不接收部203,係接收選擇指.示輸入,該選擇 指示輸入,係指示由顯示部2〇2所顯示的物件(以下稱為可 梘物件)當中,選擇任一個之意思。使用者,例如,係藉由 17 201030673 接觸(觸碰)所想要選擇的物件被顯示在觸控面& 2 置,而能夠選擇出由顯示部202所顯示的物件的—個 者,也能夠進行與這個同等的操作(將操作鍵口壓 s 選擇物件。被接收的選擇物件,係暫時記錄在,例如,败心 刚等中。處理控制部10、操作鍵17、及觸控面板μ,係 協同動作以作為選擇指示接收部2〇3而發揮機能。 方向指定接收部204,係接收來自使用者的方向指定 輸入,該方向指定輸入,係指定在虛擬空間中欲使該虛擬 空間中被配置的視點進行移動的方向。使用者,例如,將 對應於預定的方向之操作鍵17壓下,或者,一邊將觸控面 板2〇壓下’ -邊使其移動,而能夠指定方向。處理控制部 10、操作鍵17、及觸控面板2〇,係協同動作以作為方向指 定接收部204而發揮機能。 旋轉部205,係以藉由選擇指示接收部2〇3所接收的 物件為中心,以能夠觀看到該視點係在藉由方向指定接收 部崩所接收的方向上進行移動之方式,使虛擬空間内的 物件進行旋轉。亦即,以與被指定的方向係逆向之方式, 使被配置在該虛擬空間中的物件的位置及朝向進行旋轉, 並將在記錄201中所記錄的物件的位置及朝向加以更 新。處理控制部10,係作為旋轉部2〇5而發揮機能。 領域指定接收部206,係接收領域指定輸入’該領域 接收輸入,係在預定形狀的面上,指定該透明領域及 該不透明領域。使用者,例如,係能夠以取得封閉領域之 方式’在觸控面板2G上—邊屋下觸碰筆、-邊使其移動以 18 201030673 描繪作為領域的邊界之線條(line),來指定透明領域的位置 及形狀。或者,使用者,也能夠進行與這個同等的操作(將 操作鍵17壓下),以指定透明領域。被接收的指定領域, 係暫時記錄在,例如,WRAM 1 〇d等中。處理控制部1 〇、 操作鍵17及觸控面板20,係協同動作以作為領域指定接 ' 收部206而發揮機能。 ® 以下,針對關於本實施形態的顯示裝置200的動作處 理加以說明。 在本實施形態中,例如,以實行遊戲之場合為例加以 說明,該遊戲係具有在黑暗中必須尋找什麼之場面 (scene)。在該遊戲中,使用者,係一旦指定透明領域後, 則係以僅在該被指定的透明領域中有手電筒的照射之方 式,來進行顯示。 進而,使用者’係一旦選擇在透明領域中被表示的物 > 件當中的一個,並進行輸入以指定欲使視點進行移動的方 向,則顯示裝置200,係以該物件為中心,基於被指定的 方向’將虛擬空間内的物件進行旋轉《然後,將用以顯示 旋轉後的虛擬空間的狀況之影像,顯示在該領域中。使用 者,係藉由觀看在由各種角度進行指定後的透明領域中所 顯示的虛擬空間,以進行尋找東西。 另外’在本實施形態中,來自使用者的被接收的輸入 狀態的遷移,係統整在第5圖中。在什麼都沒有接收的狀 態(狀態A)下’能夠接收透明領域的指定(狀態b) <·接著, 19 201030673 接收成為旋轉十心之物件的選擇指示(狀態c),然後,接收 視點的移動方向的指定(狀態D)。這樣,在沒有選擇成為旋 轉中心的物件之下,則不接收移動方向的指示;又在沒 有指定透明領域之下,則不接收成為旋拜令心的物件之$ . 擇。 . 
另外,在狀態B、C、D令,能夠輸入預定的按鍵,以 解除所指定的透明領域(狀態E)(亦即,由WRAM i〇d刪 • 除)。在接收到物件的選擇指示之狀態下,於透明領域的解 除已被指定的場合(狀態C—E的遷移),則因為在被解除的 領域内具有被選擇成為旋轉_心之物件,所以將該物件的 選擇也由WRAM 10d刪除而加以解除。 又,被指定的透明領域,係除了能夠以拖曳(drag)領域 的邊界部分之方式等來變更形狀(狀態B—B的遷移)以 外,在接收到物件的選擇指示之狀態下,係能夠接收指示, .以選擇被指定的透明領域内的其他物件(狀態C—C的遷 移)或者,在進行試點的移動方向的指定之後,也能夠更 加指定移動方向(狀態D— D的遷移)。 第6圖所不的顯示處理,係本實施形態的顯示裝置2〇〇 的各部進行協同動作,以預定的時機定期地重複實行的處 理如上述,針對顯示裝置200對應於遷移的接收狀態所 進行的顯示處理,係參照第6圖接著說明。首先,針對狀 態A,亦即,使用者並未進行透明領域的指定、及物件的 選擇等之初期狀態之場合,加以顯示。 首先’處理控制部10 ’係判斷是否有用以指定視點的 20 201030673 移動方向之輸入(步驟S101)。亦即,參照wraM 10d等, 以判斷是否記錄有用以指定視點的移動方向之輸入。在初 期狀態中’判斷沒有用以指定方向之輸入(步驟S1 〇 1 ;否), 則處理係前進至步驟S103。 接著,顯示部202,係將被投影在透明領域上的虛擬 • 空間的狀況,記錄在圖框記憶體(VRAM 10c)中(步驟 S103)。亦即,顯示部202,係參照記錄部2〇丨,以取得視 • 點的位置、預定形狀的面420的位置和朝向、物件的形狀、 位置和朝向。然後,基於所取得資訊,將投影虛擬空間内 的物件而成的影像’產生在預定形狀的面42〇上。其中, 係將在透明領域的位置(參照WRAM 10d而取得)上所對應 的影像,記錄在該透明領域所對應的圖框記憶體(VRAM IOC)»但是,在初期狀態中,因為沒有指定透明領域所 以在圖框記憶體中係沒有記錄影像資料。 ,繼續,顯示部202,係在圖框記憶體的對應於不透明 領域的位置上,寫入用以顯示預定顏色(例如,黑色)之資 料(步驟S104)。在初期狀態中,因為僅存在有不透明領域, 所以在這個時點,在圖框記憶體中係記錄有一整面被塗黑 的影像。 顯示部202,係將圖框記憶體的内容,在預定的時機, 傳送至第-或第二顯示部(18、19),來加以表示(步驟 因此,在初期狀態中,係在第一或第二顯示部(1卜 19)上,表示一整面黑的影像。然後結束顯示處理。 接著,針對狀態B,亦即,使用者已指定透明領域之 21 201030673 場合之顯示處理,來加以說明 的操作或觸控面板20的接觸, 輸入°然後,領域指定接收部 指定的透明領域的位置及形狀 等中β 。使用者,係進行操作鍵17 以進行用以指定透明領域之 206,係接收藉由該輸入所 ’並暫時記錄在WRAM 10d ' 使用者,在已指定透明領域之狀態下,若實行步驟 si〇卜則藉由處理控制部10,來判斷尚未輪入用以指定方 •向之輸入(步驟否),並由步驟s103實行至步驟sl〇5。 如上述說明,由步驟S1〇3實行至步驟sl〇5,則在被 顯示的影像當中’係藉由顯示部202,以虛擬空間被投影 至透明領域所對應的位置,而其他的部分則被描繪成黑色 影像之方式,加以顯示。 例如’第7A圖係表示將第4圖中所示的虛擬空間投影 至透明領域420A的狀況。如第7A圖所示,要被投影至透 明領域者,係物件401A及一部分的物件4〇1b ,而其他則 籲被投影至不透明領域。因此,如第7B圖所示,物件4〇1a 的被投影而成的像401A,、及一部分的物件4〇1b的被投影 而成的像401B’,係被顯示在對應於畫面的透明領域的部 ' 分’而對應於畫面的不透明領域的部分,係顯示為專色。 這樣’由使用者所指定的透明領域,係如同被手電筒照射 一般,而顯示一部分的虛擬空間。 如第7B圖所示,在本實施形態中,各個物件係以一點 透視法來進行透視投影,而距離視點較遠的物件被投影得 較小,距離視點較近的物件被投影得較大。但是,也能夠 22 201030673 採用平行投影來取代一點投影法》另外,顯示部202,係 針對每個物件,例如,使用z緩衝法(在三次元繪圖中,摘 測視線被前方物體擋住而被隱藏的物體或面,且不進行描 綠的處理,亦即’ 一種隱藏面消除方法。在構成畫面的各 個晝素的顏色資訊上添加相關於縱深的資訊,並在畫面進 . 
行搶晝之際’與相同座標的晝素的縱深資訊相比較,而僅 將最前方所存在的物體寫入晝面的手法)等來進行隱藏面 ® 處理(hldden surface eiimination),以描繪虛擬空間的影像。 這樣’因為顯示部202,係在每次的畫面產生之時, 針對在晝面上對應於透明領域之領域的影像、及對應於不 透明領域之領域的影像,加以描繪,所以即便在使用者指 疋要變更領域的形狀之場合,也能夠反映該變更。 繼續,在已選擇透明領域之狀態下,使用者係輸入指 不,以由被表示的物件當中選擇一個,進而,考慮進行輸 .入,以指定視點的移動方向(狀態B— C— d的遷移)。 亦即’使用者’係接觸被表示在該領域内的任一個物 •件的位置。選擇指示接收部203,係針對被投影至含有該 接觸被進行的座標值之領域之物件而加以特定,將該被特 定的物件作為被選擇者而加以接收(狀態c)。 例如,選擇指示接收部203,也能夠對應於物件被投 影的位置的各個像素,將用以記錄為了特定被投影在該位 置上的物件之資訊之排列,記錄在WRAM 10d巾。另外, 也能夠將晝面叹定為48〇><64〇點(d〇t),則該排列,係彻 行4〇歹】的—次元之排列,且在物件被表示之場合,將用 23 201030673 以特定物件之情報(例如,物件①)儲存在對應的像素上; 在沒有被表不之場合,則將g儲存對應的像素上。然後, _ d{用w特&所相到的座標值所對應的被記錄的 物件之資訊,並將葬出兮, 精由δ亥抽出的資訊所特定的物件, 被選擇的物件加以特定 ·" 灯心選擇指不接收部203,係為了技 用以特定被選擇的物件之“ & 為了將 叼物件之資訊,在以後的處理中加以利 用,而將其暫時記錄在WRAM 10d中。 a另外,當然,在偵測座標不在被指定的透明領域内之 場合’則選擇指示接收冑2〇3’也能夠不接收該選擇指示 輸入。 又’視點的彡動方肖,例#,係使用者一邊接觸觸控 面板20, -邊在指定的方向上移動,來加以指定。一邊接 觸的移動,係藉由方向指定接收冑2〇4’連續地偵測接觸 座標。方向指定接收部2G4,係在連續地㈣的座標值當 中將上人的座標值與這次的座標值加以比較,以算出觸 碰所移動的距離及方向。在移動的距離係預定的距離以上 之場合,則方向指定接收部2〇4,係將使用者的操作作為 方向指定輸入而加以接收,並將移動方向及距離,暫時記 錄在WRAM 10d中(狀態〇)。 或者,也能夠操作(壓下)用以表示上下左右的方向的 操作鍵17,來進行方向指定。這個場合,方向指定接收部 204,係也能夠藉由被壓下的操作鍵17的種類、及該被壓 下的時間’來算出視點的移動方向及移動距離。 這樣,使用者,係輸入指示,該指示,係由被顯示的 24 201030673 物件當中選擇一個,進而,I已進行用以指定視點的移動 方向之輸入之狀態下’實行步驟SUM,則處理控制部10, 係判斷有用以指定視點的移動方向之輸入(步冑S1〇l ; 疋)。然後’處理前進至步驟S102。 、續旋轉部205,係由WRAM l〇d中取得被選擇的 物^並以該物件為中心(例如,該物件的重心等),基於 被才曰疋的方向’使被配置在虛擬空間中的物件進行旋轉(步 驟S1〇2)料’使虛擬空間内的物體,在與被接受的指定 方向相反的方向上,僅以對應於移動距離之角《,進行旋 轉。 站兹^ &步驟S103實行至步驟Sl05,以與上述同樣 顯示被投影在預定形狀的面420的透明領域上 的虛擬二間的影像。 例如,第8A圖係表示透 係被選擇,且Μ 71n “ 域綠内的物件 8B圖所- 定為視點的移動方向。如第 圖所不’旋轉部2G5,係求得轴73G,該轴7 ==與物件的中心之向量72。、及方向HO, 二=的:心。旋轉部I係相對於這個轴 行左旋轉 的方向上(亦即,在第沾圖中係進 物件進行_ 雖之角度’使虛擬空間内的 困=?轉。旋轉後在畫面上被顯示的影像,係… 例如’在記錄部2〇 j中, 對應於移動距離的絲’、、備表格(Uble),以定義 距離的㈣角度。旋轉部⑽,係也能夠參照 25 201030673 :表格以決定旋轉角度。藉由這樣,旋轉部205,係使 虛擬空間内的物件進行旋轉,則所顯示的用以表示虛擬空 内的狀/兄之影像’係在對應於使用者所指定透明領域内 —面上的領域’使視點在使用者所指定的方向上進行移 ' 動(參照第9圖)。 使用者’係也能夠更指定視點的移動方向,而使物件 力進行旋轉(狀態D的遷移)。或者,也能夠解除透明 •領域的扎疋(由狀態B、C、D的任-個’向狀態E進行遷 )在本實施形態中,因為係以使一部分的虛擬空間進行 旋轉並加以觀察之方式來讓人觀看,所以在透明領域的指 疋被解除之場合,則使虛擬空間回復到藉由旋轉部205加 以進行旋轉之前的狀態。因此,旋轉部205,係將進行旋 轉之前的那個時點的虛擬空間内的狀態(物件的位置及朝 向),暫存在WRAMlGd中。然後,在進行透明領域的解除 • 之際,係將被暫存的虛擬空間的狀態,作為現在的虛擬空 間的狀態來加以下載。 接著’透明領域被指定’物件的選擇指示、以及視點 的移動方向被使用者加以輸出之場合,則旋轉部2〇5,係 由被下載的狀態開始,使物件進行旋轉。 如以上說明,在本實施形態中,顯示裝置2〇〇,係將 物件的旋轉被進行後的虛擬空間的狀況,所投影在預定形 狀的面420上的使用者所注意的透明領域部分之影像,顯 示在對應於畫面上的透明領域之位置上。 因此,若根據本實施形態,在被配置於虛擬空間内的 26 201030673 物件當中,使用者能夠容易地觀察所注意的物件的詳細。 以上’雖然針對本實施形態加以說明,但是本發明並 不受限於上述實施形態,而能夠具有各種變化及應用。又, 也能夠自由地組合上述實施例的各個構成要素。 例如,在上述實施形態中,係以被指定的物件為中心, * 使被配置在虛擬空間中的全部物件進行旋轉。也可以不這 樣做’而是在透明領域指定後,僅使在連一次也沒有旋轉 # 以則便被投影在透明領域上的物件(以下稱為原可視物 件)’在步驟S 102中進行旋轉以描繪在透明領域上。藉此, 使進行旋轉的物件變少,而能夠減輕描繪處理。 又,本發明,係除了在黑暗中找東西的遊戲之外,在 詳細調查被顯示在遊戲畫面上的一部分之場合也能夠適 用0 例如’為了調查被顯示在遊戲晝面上的一部分,使用 者係將該部分圈起來以指定為透明領域,繼續,一旦指定 作為旋轉中心的物件、及視點的移動方向,則在被顯示在 畫面上的該透明領域所對應部分上,如上述,係顯示對應 於已被投影在所指定的透明領域上的被指定的方向而進行 '旋轉的物件。另一方面,在對應於被顯示在晝面上的不透 明領域之部分上,係也能夠顯示領域被指定的時點之被投 影在該不透明領域上的虛擬空間的狀況。 為了實現這個,例如,顯示部,係在領域被指定的時 點(以及進行旋轉之前),產生將虛擬空間的狀態投影在預 疋形狀的面上的影像。然後,顯示部,係也能夠在上述步 27 201030673 :s:4中’將該產生的影像當中,與不透明領域一致的部 分’寫入圖框記憶體的與不透明領域—致的位置。 10A圖所-針對在投影第M圖所示的虛擬空間、且顯示第 π的影像之狀態下,使用者進行如第剛圖所示 的輸之場合加以說明。亦即,使用者係指定透明領域 =、見選擇物件4°1A作為旋轉中心的物件、指定方向Μ :=Γ移動方向。這個結果™圖所示,在透 係顯不以物件41〇Α作為令心進行旋轉後的狀 祕。=一方面’在不透明領域中,係顯示與在指定透明領 之’點之被顯示在不透明領域中者相同的影像。 ""樣’利用也同時顯示現在注意的部分以外的部分, =能夠對應使用者的要求’亦即該使用者想要觀看並非 在庄意的部分。X ’因為係產生僅使正在注意的部分進 轉而其他部分沒有變化之影像而能夠得到使注意部 4現出來的效果’而使該注意部分更容易加以判別。 推理:本發明’例如能夠在想要解明事件的謎題之 加以使用。將在不透明領域中所顯示的影像, 2為表示現在狀況的現場照片的影像。再者,在透明領域 一’係投影有表示過去的狀況的現場照月的影像。藉此, "'旦使用者針對現場照片的任一場所(透明領域)進行指 定’則顯不部係顯示將現場附近的過去影像加以部分地合 成在現在影像上之影像,以演出宛如僅針對那個部分 =:」之效果。使用者’係一邊考慮想要注意哪:、應 q意哪裡,且必須使過去的現場附近的狀況以進行旋轉 28 201030673 之方式加以顯示’以尋找現在與過去現場的不同,來測試 推理能力。這樣’能夠創造出讓使用者更加思考的遊戲、 更加使用頭腦的遊戲。 
又’在上述實施形態中’顯示部並未針對在虛擬空間 中被配置的物件所能夠旋轉的角度給予限制。但是,例如, • 在顯示部基於指定的方向,使虛擬空間内的物件進行旋轉 之際,可能有在透明領域的指定之後不久(以及進行旋轉以 鳙前)並未顯示在該透明領域上的物件(以下稱為原不可視物 件)的任一個,係被配置在原可視物件與預定形狀的面之間 之結果。這個結果,係造成原可視物件被原不可視物件隱 藏住《•因為原可視物件,係使用者正在注意的物件,所以 不想要其被原不可視物件隱藏住。 因此,旋轉部,首先,係針對全部的物件進行試驗旋 轉。然後,在S式驗旋轉後的視點與原不可視物件所連結的 ,線段,係通過透明領域之場合(亦即,原不可視物件被投影 至透明領域之場合),則進行下個判定。亦即,將在試驗旋 轉後係通過透明領域之視點與原不可視物件所連結的線段 之各個的長度,與視點與原可視物件之各個的距離加以 比較以判定是否較短。 在較短之場合,則原不可視物件的任一個係比原可 視物件的任一個,位於更靠近視點的位置,而有原不可視 物件將原可視物件隱藏住的可能性。因此,旋轉部係不 確認全部的物件的試驗旋轉,而將物件的位置及朝向回復 至試驗旋轉開始前的狀態。或者,不是回復至試驗旋轉開 29 201030673 始前的狀態,而是以該視點與原不可視物件之距離的任一 個’係即將變成該視點與原可視物件之距離更短之前的位 置’來確定試驗旋轉。另一方面,如果沒有更短者之場合, 則旋轉部’係確定試驗旋轉。 例如,針對如第11A圖所示加以配置的物件,係在基 於使用者所指定的視點的移動方向,藉由旋轉部,在抽刪 的周圍向右轉之場合,加以說明。The face 420 of the predetermined shape corresponds to the projection surface on which the virtual space is to be projected. Although in the present embodiment, it is constituted by a flat surface, the shape thereof is not limited thereto. A typical predetermined shaped face 42 is disposed between the view point 410 and an object (e.g., 4〇1A, 401B, 401C, etc.) within the virtual space. Further, the direction of the line of sight direction 415 passes through the center of the face 420 of a predetermined shape, and the orientation of the vector perpendicular to the face 420 of the predetermined shape becomes uniform. Further, the boundaries of the transparent region 420A and the opaque region 42〇b disposed on the surface 420 of the predetermined shape are identical, and can be designated by the user as will be described later. The shape, position and orientation of the object to be disposed in the virtual space, the position of the viewpoint, and the position and orientation of the face 42' of the predetermined shape are previously recorded in the R 〇 M 12a in the cassette 12. The processing control unit 16 201030673 ίο' reads these materials from the appropriate card g 12 and then records them in wram just wait. The rotating portion 205' corresponds to an instruction from the user to update the shape, position, orientation, and the like of the object. The processing control unit 1A, the connector 11, and the card H 12 operate in cooperation to function as the recording unit 2〇1. When the display unit 202' receives an input for the #defined transparent area by the field specifying receiving unit 206, which will be described later, an image having the same size as the surface 420 of the predetermined predetermined shape is generated. That is, among the generated images, a portion of the transparent region of the image of the virtual space projected on the transparent field of the surface 420 of the predetermined shape is drawn at a position corresponding to the transparent region of the surface 42G of the predetermined shape. Further, among the generated images, an image depicting a predetermined color (for example, black) for expressing darkness is generated at a position corresponding to a field other than the transparent field (opaque area). Then, the display unit 202 displays the generated image on the second display portion 19 (or the 'display portion 18'). That is, although the surface 420 of the predetermined shape is described as being equivalent to the projection surface, in the embodiment of the present embodiment, the actual function as the projection surface is the transparent field of each of the surfaces 420 of the predetermined shape. Further, it is also possible to set the entire surface 42 of the predetermined shape to be a transparent field. 
The processing control unit 1A and the second display unit 19 (or the first display unit U)' cooperate to function as the display unit 2〇2. The selection finger receiving unit 203 receives a selection instruction input indicating that any one of the objects (hereinafter referred to as an object to be displayed) displayed by the display unit 2〇2 is selected. The user, for example, is contacted (touched) by 17 201030673, and the object to be selected is displayed on the touch surface & 2, and the object displayed by the display unit 202 can be selected. It is possible to perform the same operation as this (the operation key s is selected to select the object. The selected object to be received is temporarily recorded, for example, in the smashing, etc. The processing control unit 10, the operation key 17, and the touch panel μ The cooperative operation functions as the selection instruction receiving unit 2〇3. The direction designation receiving unit 204 receives a direction designation input from the user, and the direction designation input is specified in the virtual space to be in the virtual space. The direction in which the arranged viewpoint is moved. The user, for example, presses the operation key 17 corresponding to the predetermined direction, or can move the touch panel 2 while pressing it to specify the direction. The processing control unit 10, the operation keys 17, and the touch panel 2 are cooperatively operated to function as the direction designation receiving unit 204. The rotation unit 205 is connected by the selection instruction receiving unit 2〇3. The object is centered, and the object in the virtual space is rotated in such a manner as to be able to see that the viewpoint is moved in the direction received by the direction-designated receiving portion collapse, that is, reversed from the specified direction. In this manner, the position and orientation of the object placed in the virtual space are rotated, and the position and orientation of the object recorded in the record 201 are updated. The processing control unit 10 functions as the rotating unit 2〇5. The field designation receiving unit 206 is a receiving area specifying input 'the field receives input, and is on a surface of a predetermined shape, and specifies the transparent field and the opaque field. The user, for example, can acquire the closed field. 'On the touch panel 2G—to touch the pen under the house, move it to the line of the boundary of the field as 18 201030673 to specify the position and shape of the transparent field. Or, the user can also Perform the same operation as this (press the operation key 17) to specify the transparent field. The specified field to be received is temporarily recorded, for example The WRAM 1 〇d, etc. The processing control unit 1 〇, the operation keys 17 and the touch panel 20 cooperate to operate as the field designation receiving unit 206. The following is a description of the display device 200 according to the present embodiment. In the present embodiment, for example, a case where a game is executed is described as an example, and the game has a scene that must be searched for in the dark. In the game, the user is once When the transparent area is specified, the display is performed by the illumination of the flashlight only in the designated transparent area. Further, the user's selection of one of the items that are displayed in the transparent field is selected. 
And inputting to specify the direction in which the viewpoint is to be moved, the display device 200, based on the object, rotates the object in the virtual space based on the specified direction 'When, it will be used to display the rotated An image of the state of the virtual space is displayed in the field. The user seeks to find things by viewing the virtual space displayed in the transparent field specified by various angles. Further, in the present embodiment, the transition from the received input state of the user is shown in Fig. 5. In a state where nothing is received (state A) 'can receive the specification of the transparent field (state b) <·Next, 19 201030673 receives a selection instruction (state c) of the object that is rotated ten hearts, and then receives the viewpoint The specification of the movement direction (state D). Thus, under the object that is not selected to be the center of rotation, the indication of the direction of movement is not received; and if the area of the transparent area is not specified, the item that becomes the whirlwind is not received. In addition, in the states B, C, and D, a predetermined button can be input to release the designated transparent field (state E) (i.e., deleted by WRAM i〇d). In the state where the selection of the object is received, when the release of the transparent field has been designated (the transition of state C-E), since there is an object selected as the rotation_heart in the released domain, The selection of the object is also removed by the WRAM 10d deletion. In addition, the designated transparent area can be received in addition to the boundary portion of the drag field (the transition of the state B-B), and can receive the selection instruction of the object. Instructing, by selecting other objects in the specified transparent field (migration of state C-C) or after specifying the moving direction of the pilot, it is also possible to specify the moving direction (migration of state D-D). The display processing of FIG. 6 is a process in which the respective units of the display device 2A of the present embodiment operate in cooperation, and the processing is repeatedly performed at a predetermined timing as described above, and the display device 200 performs the response state corresponding to the transition. The display processing is described with reference to Fig. 6 . First, the state A is displayed, that is, when the user does not perform the initial state of designation of the transparent field and selection of the object. First, the 'processing control unit 10' determines whether or not it is useful to specify the input of the 20 201030673 moving direction of the viewpoint (step S101). That is, the wraM 10d or the like is referred to to judge whether or not the input is useful to specify the moving direction of the viewpoint. In the initial state, it is judged that there is no input for specifying the direction (step S1 〇 1; NO), and the processing proceeds to step S103. Next, the display unit 202 records the state of the virtual space projected on the transparent area in the frame memory (VRAM 10c) (step S103). That is, the display unit 202 refers to the recording unit 2 to obtain the position of the viewpoint, the position and orientation of the surface 420 of the predetermined shape, and the shape, position, and orientation of the object. Then, based on the acquired information, an image 'projected by projecting objects in the virtual space is generated on the face 42 of the predetermined shape. 
Of the generated image, the part corresponding to the positions inside the transparent field (obtained by referring to the WRAM 10d) is recorded at the corresponding positions of the frame memory (VRAM 10c). In the initial state, however, no transparent field has been specified, so no image data is recorded in the frame memory.

Continuing, the display unit 202 writes data for displaying a predetermined color (for example, black) at the positions of the frame memory corresponding to the opaque field (step S104). In the initial state only the opaque field exists, so at this point an entirely black image is recorded in the frame memory. The display unit 202 transfers the content of the frame memory to the first or second display unit (18, 19) at a predetermined timing (step S105). Therefore, in the initial state, the first or second display unit (18, 19) displays an entirely black image. The display processing then ends.

Next, the display processing in state B, that is, in the state where the user has designated a transparent field, is described. The user performs the operation for designating the transparent field by means of the operation keys 17 or by contact with the touch panel 20. The field designation receiving unit 206 receives the position and shape of the designated transparent field as input, and they are temporarily recorded in the WRAM 10d. When step S101 is performed in the state where the transparent field has been designated, the processing control unit 10 determines that there is no input specifying the movement direction of the viewpoint (step S101; NO), and steps S103 to S105 are carried out.

When steps S103 to S105 are carried out as described above, the virtual space projected by the display unit 202 is shown in the portion of the displayed image corresponding to the transparent field, while the other portions are displayed, for example, as a black image. For example, Fig. 7A shows the state in which the virtual space shown in Fig. 4 is projected onto the transparent field 420A. As shown in Fig. 7A, the objects projected onto the transparent field are the object 401A and a part of the object 401B; the rest is projected onto the opaque field. Therefore, as shown in Fig. 7B, the projected image 401A' of the object 401A and the projected image 401B' of the part of the object 401B are displayed in the portion of the screen corresponding to the transparent field, and the portion corresponding to the opaque field is displayed in a single color. In this way, the transparent field specified by the user shows a part of the virtual space as if it were illuminated by a flashlight.

As shown in Fig. 7B, in the present embodiment each object is perspective-projected using one-point perspective, so that objects farther from the viewpoint are projected smaller and objects closer to the viewpoint are projected larger; however, parallel projection may also be used. In addition, for each object the display unit 202 uses, for example, the z-buffer method, a hidden-surface elimination technique in which, in three-dimensional drawing, objects or faces whose line of sight is blocked by objects in front of them are hidden and are not drawn.
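Steps S103 to S105 can be pictured as composing the frame memory from two sources: the rendered projection for the pixels inside the transparent field, and a fixed color for the opaque field. The following is a minimal sketch of that composition using NumPy; the image size, the mask representation, and the function names are assumptions for illustration and are not the actual VRAM layout of the device.

```python
import numpy as np

H, W = 192, 256   # assumed screen size for the sketch, not the device's actual resolution

def compose_frame(rendered, transparent_mask, opaque_color=(0, 0, 0)):
    """rendered: H x W x 3 image of the virtual space projected onto the surface 420.
    transparent_mask: H x W booleans, True where a pixel lies inside the transparent field.
    Pixels belonging to the opaque field are overwritten with a predetermined color."""
    frame = np.empty((H, W, 3), dtype=np.uint8)
    frame[transparent_mask] = rendered[transparent_mask]            # step S103
    frame[~transparent_mask] = np.array(opaque_color, np.uint8)     # step S104
    return frame                                                    # step S105: hand to the LCD

# Initial state: no transparent field is designated, so the whole frame comes out black.
rendered = np.full((H, W, 3), 128, np.uint8)
no_field = np.zeros((H, W), dtype=bool)
print(compose_frame(rendered, no_field)[0, 0])    # -> [0 0 0]
```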
In the z-buffer method, depth information is held together with the color information of each pixel constituting the picture; when a pixel with the same coordinates is to be drawn again, the depth values are compared and only the object nearest to the viewpoint is written into the picture. Hidden surface elimination is performed in this way to draw the image of the virtual space. Because the display unit 202 draws, each time a screen is generated, both the image corresponding to the transparent field and the image corresponding to the opaque field on the surface, even when the user changes the shape of the designated field, the change is reflected.

Continuing, in the state where the transparent field has been designated, the user inputs an instruction selecting one of the displayed objects and then an input specifying the movement direction of the viewpoint (transitions B→C→D). That is, the user touches the position of one of the objects displayed in the field. The selection instruction receiving unit 203 identifies the object projected at the position containing the coordinate values of the contact, and receives the identified object as the selected one (state C). For example, the selection instruction receiving unit 203 may record in the WRAM 10d an array holding, for each pixel, information identifying the object projected at the position corresponding to that pixel. For instance, if the screen is set to 480 x 640 dots, the array is a two-dimensional array of that size; for pixels at which an object is displayed, information identifying that object (for example, "object 1") is stored in the corresponding element, and for pixels at which no object is displayed, a null value is stored. Then, by extracting from this array the object information recorded for the detected coordinate values, the object identified by the extracted information is taken as the selected object. The selection instruction receiving unit 203 temporarily records the information identifying the selected object in the WRAM 10d. Naturally, when the detected coordinates are not inside the designated transparent field, the selection instruction receiving unit 203 may decline to receive the selection instruction input.

The movement direction of the viewpoint is specified, for example, by the user moving the point of contact on the touch panel 20 in the desired direction while keeping the panel pressed. The direction designation receiving unit 204 continuously detects the movement of the contact; among the continuously detected coordinate values, it compares the previous coordinate value with the current coordinate value to calculate the distance and direction of the movement of the contact. When the movement distance exceeds a predetermined distance, the direction designation receiving unit 204 receives the user's operation as a direction designation input and temporarily records the movement direction and distance in the WRAM 10d (state D). Alternatively, the direction may be specified by operating (depressing) the operation keys 17 corresponding to the up, down, left, and right directions; in this case the direction designation receiving unit 204 can calculate the movement direction and the movement distance of the viewpoint from the type of the depressed operation key 17 and the duration of the depression.
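Two of the mechanisms just described lend themselves to short sketches: the per-pixel object table used to decide which object was touched, and the comparison of successive touch coordinates used to turn a drag into a movement direction and distance. The code below is an illustrative approximation; the 480 x 640 size follows the example in the text, while the threshold value, the null convention, and the function names are assumptions.

```python
import math

W, H = 640, 480
# Pick table: for every pixel, the identifier of the object projected there, or None.
pick_table = [[None] * W for _ in range(H)]
pick_table[100][200] = "object 1"     # filled in while the screen is drawn

def object_at(x, y, transparent_field):
    """Selection instruction (unit 203): return the object under the touched pixel,
    but only when the touched point lies inside the designated transparent field."""
    if (x, y) not in transparent_field:
        return None
    return pick_table[y][x]

def drag_to_direction(prev, cur, threshold=8.0):
    """Direction designation (unit 204): compare the previous and current touch
    coordinates and accept the drag only once it exceeds a predetermined distance."""
    dx, dy = cur[0] - prev[0], cur[1] - prev[1]
    dist = math.hypot(dx, dy)
    if dist < threshold:
        return None
    return (dx / dist, dy / dist), dist      # unit direction and moved distance

field = {(x, y) for x in range(150, 260) for y in range(60, 140)}   # toy transparent field
print(object_at(200, 100, field))                  # -> object 1
print(drag_to_direction((200, 100), (230, 100)))   # -> ((1.0, 0.0), 30.0)
```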
In this way, the state is reached in which the user has input an instruction selecting one of the displayed objects and, in addition, an input specifying the movement direction of the viewpoint. When step S101 is then performed, the processing control unit 10 determines that there is an input specifying the movement direction of the viewpoint (step S101; YES), and the processing proceeds to step S102.

The rotation unit 205 obtains the selected object from the WRAM 10d and, centering on that object (for example, on the object's center of gravity), rotates the objects arranged in the virtual space based on the designated direction (step S102). Specifically, it rotates the objects in the virtual space in the direction opposite to the received designated direction, by an angle corresponding to the movement distance. Steps S103 to S105 are then carried out in the same manner as described above, and an image of the virtual space projected onto the transparent field of the surface 420 of predetermined shape is displayed.

For example, Fig. 8A shows the state in which an object has been selected in the transparent field and the movement direction 710 of the viewpoint has been designated. As shown in Fig. 8B, the rotation unit 205 obtains the vector 720 corresponding to the designated direction 710 and determines the axis 730, which passes through the center of the selected object and is perpendicular to the vector 720. The rotation unit 205 then rotates the objects in the virtual space about the axis 730 by the rotation angle, in the direction opposite to the designated direction, and the rotated state is displayed on the screen. For example, a table defining the rotation angles corresponding to the movement distances may be prepared in advance in the recording unit 201, and the rotation unit 205 may determine the rotation angle by referring to this table. When the rotation unit 205 rotates the objects in the virtual space in this way, the image displayed in the field on the surface corresponding to the transparent field specified by the user appears as if the viewpoint had moved in the direction specified by the user (see Fig. 9).

The user may go on to specify another movement direction of the viewpoint and rotate the objects again (transition D→D), or may release the designation of the transparent field (transition from states B, C, and D to state E). In the present embodiment, rotating a part of the virtual space is provided as an additional way of observing it, and when the designation of the transparent field is released, the virtual space is returned to the state it was in before being rotated by the rotation unit 205. For this purpose, the rotation unit 205 temporarily stores in the WRAM 10d the state of the virtual space (the positions and orientations of the objects) at the time before the rotation. When the transparent field is released, the temporarily stored state of the virtual space is restored as the current state of the virtual space. When a transparent field is then designated again and the user inputs an object selection instruction and a movement direction of the viewpoint, the rotation unit 205 rotates the objects starting from the restored state.
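The rotation of step S102 can be sketched as follows: the designated direction gives a vector in the view plane, the axis of rotation passes through the center of the selected object perpendicular to that vector, a prepared distance-to-angle table gives the rotation angle, and every object is rotated about the axis in the direction opposite to the drag. The implementation below is only a schematic under simplifying assumptions (a fixed camera frame, an invented angle table, Rodrigues' formula for the rotation, and an arbitrary sign convention for "opposite").

```python
import numpy as np

# Assumed distance-to-angle table; the text only says such a table can be prepared.
ANGLE_TABLE = [(20.0, 5.0), (60.0, 15.0), (120.0, 40.0)]   # (drag distance, degrees)

def angle_for_distance(dist):
    for limit, deg in ANGLE_TABLE:
        if dist <= limit:
            return np.radians(deg)
    return np.radians(60.0)

def rotate_point(p, pivot, axis, theta):
    """Rodrigues' rotation of point p about an axis passing through pivot."""
    k = axis / np.linalg.norm(axis)
    v = np.asarray(p, float) - pivot
    return pivot + v * np.cos(theta) + np.cross(k, v) * np.sin(theta) \
                 + k * np.dot(k, v) * (1.0 - np.cos(theta))

def rotate_scene(points, pivot, drag_dir, drag_dist,
                 view_forward=(0, 0, -1), view_right=(1, 0, 0), view_up=(0, 1, 0)):
    """Rotate every point about the center of the selected object (pivot) so that the
    viewpoint appears to move along drag_dir, a 2D unit vector in screen coordinates."""
    pivot = np.asarray(pivot, float)
    drag_world = (drag_dir[0] * np.asarray(view_right, float)
                  + drag_dir[1] * np.asarray(view_up, float))
    axis = np.cross(np.asarray(view_forward, float), drag_world)  # perpendicular to the drag
    theta = -angle_for_distance(drag_dist)       # minus: objects turn opposite to the drag
    return [rotate_point(p, pivot, axis, theta) for p in points]

moved = rotate_scene([(0, 0, -5), (1, 0, -6)], pivot=(0, 0, -5),
                     drag_dir=(1.0, 0.0), drag_dist=30.0)
print(np.round(moved, 3))
```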
As described above, in the present embodiment the display device 200 rotates the objects in the virtual space and displays the image of the part of the virtual space projected onto the transparent field, which the user specified on the surface 420 of predetermined shape, at the position corresponding to the transparent field on the screen. Therefore, according to the present embodiment, the user can easily observe the details of the object of interest among the objects arranged in the virtual space.

The present embodiment has been described above, but the present invention is not limited to the above embodiment, and various modifications and applications are possible. For example, in the above embodiment all the objects arranged in the virtual space are rotated around the designated object, but this is not essential. After the transparent field has been specified, only the objects projected onto the transparent field (hereinafter referred to as originally visible objects) may be rotated in step S102 and drawn in the transparent field. In this way the number of objects to be rotated is reduced, and the drawing load can be lightened.

Further, the present invention is applicable not only to a game of finding things in the dark but also to cases where a part of the game screen is to be displayed in detail. For example, in order to examine a part of what is displayed on the game screen, the user encircles that part to specify the field. Then, once the object serving as the center of rotation and the movement direction of the viewpoint have been specified, the part of the screen corresponding to the transparent field displays, as described above, the objects rotated in the designated direction as projected onto the specified transparent field. On the other hand, the part of the screen corresponding to the opaque field may continue to display the state of the virtual space on the opaque field at the point in time when the field was designated. To achieve this, the display unit, for example, generates an image in which the state of the virtual space at the time the field was designated (and before the rotation) is projected onto the surface of predetermined shape, and in step S104 writes the part of that generated image which falls within the opaque field to the positions of the opaque field in the frame memory.

Figs. 10A to 10C illustrate the case where, while the image of Fig. 10A is being displayed, the user performs the input shown in Fig. 10B: the user specifies the transparent field, selects the object displayed in it as the center of rotation, and specifies the movement direction of the viewpoint. As shown in the resulting image of Fig. 10C, the objects are displayed rotated around the selected object, whereas in the opaque field the same image as the one displayed there at the time the transparent field was designated is shown. In this way, the part of the screen that is not being examined continues to be displayed as well, which answers the user's wish to keep seeing the portions outside the field of interest. Moreover, because only the portion being examined moves while the other portions remain unchanged, the portion of interest becomes easier to make out.
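The modification of rotating only the originally visible objects amounts to taking a snapshot, at the moment the field is designated, of which objects project into it. The sketch below illustrates the idea with deliberately simplified geometry: the surface 420 is assumed to be the plane z = 0 and the field test is passed in as a callable; a real renderer would use its own projection and would intersect each object's whole projection with the field rather than a single point.

```python
def surface_crossing(viewpoint, obj_pos, surface_z=0.0):
    """Point where the viewpoint-to-object segment crosses the plane z = surface_z
    (standing in for the surface 420 of predetermined shape), or None if it does not."""
    vz, oz = viewpoint[2], obj_pos[2]
    if (vz - surface_z) * (oz - surface_z) >= 0:    # both on the same side: no crossing
        return None
    t = (surface_z - vz) / (oz - vz)
    return (viewpoint[0] + t * (obj_pos[0] - viewpoint[0]),
            viewpoint[1] + t * (obj_pos[1] - viewpoint[1]))

def originally_visible(objects, viewpoint, in_field):
    """Objects whose projection falls inside the transparent field at designation time.
    In the modification, only these objects are rotated and redrawn in step S102."""
    keep = []
    for name, pos in objects.items():
        hit = surface_crossing(viewpoint, pos)
        if hit is not None and in_field(hit):
            keep.append(name)
    return keep

objects = {"object a": (0.2, 0.1, -3.0), "object b": (2.5, 0.0, -4.0)}
in_field = lambda p: abs(p[0]) < 0.5 and abs(p[1]) < 0.5     # toy rectangular field
print(originally_visible(objects, viewpoint=(0.0, 0.0, 5.0), in_field=in_field))
# -> ['object a']
```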
The present invention can also be used, for example, in a deduction game in which the player must solve a case. The image displayed in the opaque field is an image of a photograph of the scene showing its current state, while the transparent field displays a projected image of the scene as it was in the past. In this way, when the user specifies a position on the photograph of the scene (the transparent field), an image of the past state near that position is displayed combined with the current image, producing the effect of seeing into the past at that spot. The user must think about which portion to pay attention to and where that portion should be and, by rotating and viewing the past state of the scene displayed in the transparent field, must find the differences between the present and the past, so that the user's powers of deduction are tested. Games that make users think and use their minds can thus be created.

Further, in the above embodiment the angle through which the objects arranged in the virtual space can be rotated is not limited. However, when the objects in the virtual space are rotated based on the designated direction, an object that was not displayed in the transparent field shortly after its designation (and before the rotation is performed) — hereinafter referred to as an originally invisible object — may come to be located between an originally visible object and the surface of predetermined shape. As a result, the originally visible object would be hidden by the originally invisible object. Because the originally visible object is the object the user is paying attention to, it should not be hidden by an originally invisible object.

Therefore, the rotation unit first performs a trial rotation of all the objects. Then, when a line segment connecting the viewpoint after the trial rotation to an originally invisible object passes through the transparent field (that is, when the originally invisible object would be projected onto the transparent field), the following determination is made: the length of each line segment connecting the viewpoint to such an originally invisible object through the transparent field after the trial rotation is compared with the distance between the viewpoint and the originally visible object, to determine whether it is shorter. If it is shorter, that originally invisible object lies closer to the viewpoint than the originally visible object, and there is a possibility that the originally invisible object will hide the originally visible object. In that case the rotation unit does not commit the trial rotation of all the objects, but returns the positions and orientations of the objects to the state before the trial rotation was started. Alternatively, instead of returning to the state before the trial rotation, the rotation may be committed at the position immediately before the distance between the viewpoint and the originally invisible object becomes shorter than the distance between the viewpoint and the originally visible object. On the other hand, if there is no such shorter case, the rotation unit commits the trial rotation.
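The trial-rotation guard can be pictured as: rotate everything tentatively, then abort if any originally invisible object would both project into the transparent field and end up nearer to the viewpoint than an originally visible object. The sketch below only illustrates that decision under simplifying assumptions; the field test is passed in as a callable, and comparing against the nearest originally visible object is a simplification of the per-object comparison described above.

```python
import math

def commit_trial_rotation(rotated, original, visible, viewpoint, projects_into_field):
    """rotated / original: dicts mapping object name to position after / before the trial.
    visible: names of the originally visible objects.
    Returns the set of positions that should actually be adopted."""
    nearest_visible = min(math.dist(viewpoint, rotated[n]) for n in visible)
    for name, pos in rotated.items():
        if name in visible:
            continue                          # only originally invisible objects matter
        if projects_into_field(pos) and math.dist(viewpoint, pos) < nearest_visible:
            return original                   # would hide the object of interest: abort
    return rotated                            # no occlusion risk: commit the trial rotation

viewpoint = (0.0, 0.0, 5.0)
before = {"a": (0.0, 0.0, -3.0), "c": (4.0, 0.0, -3.0)}
after  = {"a": (0.0, 0.0, -3.0), "c": (0.3, 0.0, 1.0)}    # "c" swings in front of "a"
in_field = lambda p: abs(p[0]) < 1.0                      # toy transparent-field test
print(commit_trial_rotation(after, before, {"a"}, viewpoint, in_field) is before)   # True
```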

For example, consider the objects arranged as shown in Fig. 11A, where the rotation unit turns them rightward about the axis 1001 with respect to the movement direction of the viewpoint specified by the user.

As shown in Fig. 11A, the object 401C is not projected onto the transparent field 420A shortly after the designation of the transparent field 420A, and is therefore an originally invisible object. On the other hand, the objects 401A and 401B are projected onto the transparent field 420A immediately after the designation, and are therefore originally visible objects.

When the arrangement of Fig. 11A is changed by the trial rotation into the arrangement shown in Fig. 11B, the originally invisible object 401C moves from the position at which it had been projected onto the opaque field 420B. If, in this state, the state of the virtual space is projected onto the transparent field 420A of the surface 420 of predetermined shape, the image shown in Fig. 11C is obtained; that is, the originally invisible object 401C hides a part of the image projected from the originally visible object 401A.

To prevent such a situation, the rotation unit compares the distance between the originally invisible object and the viewpoint (for example, the distance 1012) with the distance between the originally visible object and the viewpoint (for example, the distance 1011). In Fig. 11B, because the distance 1012 is shorter than the distance 1011, the rotation unit does not commit the trial rotation but aborts it. In this way, the display can easily be made without hiding the originally visible objects, and without having to compute the overlap of the projected objects.

It is also possible to limit the size of the area that can be designated as the transparent field within the surface 420 of predetermined shape. For example, when the transparent field is designated, the field designation receiving unit obtains the maximum width and the maximum length of the transparent field, and can refuse to receive the field designation unless each of the maximum width and the maximum length is at or below a predetermined threshold recorded in the recording unit or the like. In this way, for example, in the modification in which only the originally visible objects are rotated, the number of originally visible objects is prevented from becoming too large, and the amount of computation for the rotation processing can be kept down. Also, even when not only the originally visible objects but all the objects in the virtual space are rotated, making the area that is displayed rotated smaller can prevent so-called 3D motion sickness. Alternatively, in order to obtain the same effect, the field designation receiving unit may refuse to receive the field designation when the number of originally visible objects is equal to or greater than a predetermined threshold.

Further, in the above embodiment an image in which the objects in the virtual space are rotated so that the viewpoint appears to have moved is displayed at the position corresponding to the transparent field of the screen. Alternatively, depending on the computational precision and the like allowed when performing the rotation, the position of the viewpoint itself (and of the surface of predetermined shape) may be moved in the direction specified by the user; or the rotation of the objects and the movement of the viewpoint may both be performed at the same time.

This application claims priority based on Japanese Patent Application No. 2008-260979, the content of which is incorporated into the present application in its entirety.

[Industrial Applicability]
As described above, according to the present invention it is possible to provide a display device, a display method, an information recording medium, and a program suitable for allowing a user to easily observe the details of an object of interest among the objects arranged in a virtual space.

[Brief Description of the Drawings]
Fig. 1 is a diagram showing the schematic configuration of a typical portable game machine on which the display device of the present invention is realized.
Fig. 2 is a diagram showing the appearance of a typical portable game machine on which the display device of the present invention is realized.
Fig. 3 is a diagram for explaining the functional configuration of the display device.
Fig. 4 is a diagram showing an example of the configuration of the virtual space.
Fig. 5 is a state transition diagram of the inputs received by the display device.
Fig. 6 is a flowchart for explaining the display processing.
Fig. 7A is a diagram showing the state of the virtual space projected onto the surface of predetermined shape.
Fig. 7B is a diagram showing an example of the image produced as a result of projecting the state of the virtual space onto the surface of predetermined shape.
Fig. 8A is a diagram showing an example in which, on the image displayed before rotation, an object within the transparent field is selected and the movement direction of the viewpoint has been designated.
Fig. 8B is a diagram showing an example of a method by which the rotation unit obtains the axis and direction for rotating the objects with respect to the designated movement direction of the viewpoint.
Fig. 9 is a diagram showing an example of the image displayed when the objects are rotated based on the movement direction of the viewpoint shown in Fig. 8A.
Fig. 10A is a diagram showing an example of the image displayed before rotation in a modification.
Fig. 10B is a diagram showing an example in which, in the modification, an object within the transparent field is selected and the movement direction of the viewpoint has been designated.
Fig. 10C is a diagram showing an example of the image displayed after the rotation unit rotates the objects in the designated movement direction in the modification.
Fig. 11A is a diagram showing an example of the configuration of the virtual space.
Fig. 11B is a diagram showing the state of the virtual space after Fig. 11A has been rotated rightward about the axis 1001.
Fig. 11C is a diagram showing the result of projecting Fig. 11B.

[Description of Main Reference Numerals]
1 portable game machine; 10 processing control unit; 10a CPU core; 10b image processing unit; 10c VRAM; 10d WRAM; 10e LCD controller; 10f touch panel controller; 11 connector; 12 cartridge; 12a ROM; 12b RAM; 13 wireless communication unit; 14 communication controller; 15 sound amplifier; 16 speaker; 17 operation keys; 18 first display unit; 19 second display unit; 20 touch panel; 200 display device; 201 recording unit; 202 display unit; 203 selection instruction receiving unit; 204 direction designation receiving unit; 205 rotation unit; 206 field designation receiving unit; 401A, 401B, 401C objects; 401A', 401B' projected images; 410 viewpoint (camera); 415 line-of-sight direction; 420 surface of predetermined shape; 420A transparent field; 420B opaque field; 710 direction; 720 vector; 730, 1001 axes; 1011, 1012 distances

Claims (1)

VII. Claims:

1. A display device (200), characterized by comprising:
a recording unit (201) that records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a projection surface of a predetermined size;
a display unit (202) that displays the objects by generating an image in which they are perspective-projected from the viewpoint onto the projection surface;
a selection instruction receiving unit (203) that receives a selection instruction input indicating that any one of the objects contained in the generated image (hereinafter referred to as visible objects) is to be selected;
a direction designation receiving unit (204) that receives a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation unit (205) that, centering on the selected visible object, relatively rotates the viewpoint and the projection surface with respect to the objects arranged in the virtual space in the designated direction.

2. A display device (200), characterized by comprising:
a recording unit (201) that records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a surface of predetermined shape having a transparent field and an opaque field;
a display unit (202) that displays those objects for which the line segment connecting the position of the object and the position of the viewpoint passes through the transparent field (hereinafter referred to as visible objects), by generating an image in which such objects are projected at the intersection of that line segment and the transparent field;
a selection instruction receiving unit (203) that receives a selection instruction input indicating that any one of the visible objects is to be selected;
a direction designation receiving unit (204) that receives a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation unit (205) that, centering on the selected visible object, rotates the positions and orientations of the objects arranged in the virtual space in accordance with the designated direction.

3. The display device (200) according to claim 2, wherein the objects whose positions and orientations are rotated by the rotation unit (205) are the visible objects.

4. The display device (200) according to claim 2, wherein, when the rotation unit (205) has rotated the position and orientation of an object other than the visible objects (hereinafter referred to as an invisible object), and the distance between that invisible object and the viewpoint is shorter than the distance between the visible object and the viewpoint, and the line segment connecting that invisible object and the viewpoint passes through the transparent field, the rotation unit (205) aborts the rotation.

5. The display device (200) according to claim 2, further comprising a field designation receiving unit (206) that receives a field designation input specifying the transparent field and the opaque field on the surface of predetermined shape,
wherein the display unit (202) generates the image by projecting, at the intersection with the transparent field, those objects for which the line segment connecting the position of the object after its position and orientation have been rotated and the position of the viewpoint passes through the transparent field, and by projecting, at the intersection with the opaque field, those objects for which the line segment connecting the position of the object before the rotation is performed and the position of the viewpoint passes through the opaque field.

6. The display device (200) according to claim 5, wherein the field designation receiving unit (206) does not receive the field designation input when the received transparent field has an area equal to or larger than a predetermined area.

7. The display device (200) according to claim 5, wherein, upon receiving the field designation input, the field designation receiving unit (206) does not receive that field designation input when the number of the visible objects exceeds a predetermined number.

8. A display method performed by a display device (200) comprising a recording unit (201), a display unit (202), a selection instruction receiving unit (203), a direction designation receiving unit (204), and a rotation unit (205), characterized in that:
the recording unit (201) records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a projection surface of a predetermined size; and
the display method comprises:
a display step of causing the display unit (202) to display the objects by generating an image in which they are perspective-projected from the viewpoint onto the projection surface;
a selection instruction receiving step of causing the selection instruction receiving unit (203) to receive a selection instruction input indicating that any one of the objects contained in the generated image (hereinafter referred to as visible objects) is to be selected;
a direction designation receiving step of causing the direction designation receiving unit (204) to receive a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation step of causing the rotation unit (205), centering on the selected visible object, to relatively rotate the viewpoint and the projection surface with respect to the objects arranged in the virtual space in the designated direction.

9. A display method performed by a display device (200) comprising a recording unit (201), a display unit (202), a selection instruction receiving unit (203), a direction designation receiving unit (204), and a rotation unit (205), characterized in that:
the recording unit (201) records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a surface of predetermined shape having a transparent field and an opaque field; and
the display method comprises:
a display step of causing the display unit (202) to display those objects for which the line segment connecting the position of the object and the position of the viewpoint passes through the transparent field (hereinafter referred to as visible objects), by generating an image in which such objects are projected at the intersection of that line segment and the transparent field;
a selection instruction receiving step of causing the selection instruction receiving unit (203) to receive a selection instruction input indicating that any one of the visible objects is to be selected;
a direction designation receiving step of causing the direction designation receiving unit (204) to receive a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation step of causing the rotation unit (205), centering on the selected visible object, to rotate the positions and orientations of the objects arranged in the virtual space in accordance with the designated direction.

10. A computer-readable information recording medium on which a program is recorded, the program causing a computer to function as:
a recording unit (201) that records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a projection surface of a predetermined size;
a display unit (202) that displays the objects by generating an image in which they are perspective-projected from the viewpoint onto the projection surface;
a selection instruction receiving unit (203) that receives a selection instruction input indicating that any one of the objects contained in the generated image (hereinafter referred to as visible objects) is to be selected;
a direction designation receiving unit (204) that receives a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation unit (205) that, centering on the selected visible object, relatively rotates the viewpoint and the projection surface with respect to the objects arranged in the virtual space in the designated direction.

11. A computer-readable information recording medium on which a program is recorded, the program causing a computer to function as:
a recording unit (201) that records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a surface of predetermined shape having a transparent field and an opaque field;
a display unit (202) that displays those objects for which the line segment connecting the position of the object and the position of the viewpoint passes through the transparent field (hereinafter referred to as visible objects), by generating an image in which such objects are projected at the intersection of that line segment and the transparent field;
a selection instruction receiving unit (203) that receives a selection instruction input indicating that any one of the visible objects is to be selected;
a direction designation receiving unit (204) that receives a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation unit (205) that, centering on the selected visible object, rotates the positions and orientations of the objects arranged in the virtual space in accordance with the designated direction.

12. A program characterized by causing a computer to function as:
a recording unit (201) that records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a projection surface of a predetermined size;
a display unit (202) that displays the objects by generating an image in which they are perspective-projected from the viewpoint onto the projection surface;
a selection instruction receiving unit (203) that receives a selection instruction input indicating that any one of the objects contained in the generated image (hereinafter referred to as visible objects) is to be selected;
a direction designation receiving unit (204) that receives a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation unit (205) that, centering on the selected visible object, relatively rotates the viewpoint and the projection surface with respect to the objects arranged in the virtual space in the designated direction.

13. A program characterized by causing a computer to function as:
a recording unit (201) that records the position and orientation of objects arranged in a virtual space, the position of a viewpoint, and the position and orientation of a surface of predetermined shape having a transparent field and an opaque field;
a display unit (202) that displays those objects for which the line segment connecting the position of the object and the position of the viewpoint passes through the transparent field (hereinafter referred to as visible objects), by generating an image in which such objects are projected at the intersection of that line segment and the transparent field;
a selection instruction receiving unit (203) that receives a selection instruction input indicating that any one of the visible objects is to be selected;
a direction designation receiving unit (204) that receives a direction designation input specifying a direction in which the viewpoint is to be moved in the virtual space; and
a rotation unit (205) that, centering on the selected visible object, rotates the positions and orientations of the objects arranged in the virtual space in accordance with the designated direction.
TW098133438A 2008-10-07 2009-10-01 Display device, display method, information recording medium, and program TW201030673A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008260979A JP4912377B2 (en) 2008-10-07 2008-10-07 Display device, display method, and program

Publications (1)

Publication Number Publication Date
TW201030673A true TW201030673A (en) 2010-08-16

Family

ID=42100511

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098133438A TW201030673A (en) 2008-10-07 2009-10-01 Display device, display method, information recording medium, and program

Country Status (3)

Country Link
JP (1) JP4912377B2 (en)
TW (1) TW201030673A (en)
WO (1) WO2010041557A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5634765B2 (en) * 2010-06-24 2014-12-03 東芝機械株式会社 Pulse laser machining method and pulse laser machining data creation method
JP2012103746A (en) * 2010-11-05 2012-05-31 Avix Inc Method for display control for fascinating viewer, digital signage system, and computer program
JP2014238621A (en) * 2013-06-06 2014-12-18 カルソニックカンセイ株式会社 Input receiving device
JP6087453B1 (en) * 2016-02-04 2017-03-01 株式会社コロプラ Method and program for providing virtual space
CN106975219B (en) * 2017-03-27 2019-02-12 网易(杭州)网络有限公司 Display control method and device, storage medium, the electronic equipment of game picture
JP6684746B2 (en) * 2017-05-12 2020-04-22 株式会社コロプラ Information processing method, computer and program
JP6393387B1 (en) * 2017-10-17 2018-09-19 株式会社コロプラ Programs, computers, and methods for providing a virtual experience
JP7304701B2 (en) * 2019-01-28 2023-07-07 株式会社コーエーテクモゲームス Game program, recording medium, game processing method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06111001A (en) * 1992-09-25 1994-04-22 Toshiba Corp Three-dimensional graphic image system
JPH07225859A (en) * 1994-02-14 1995-08-22 Toshiba Corp Object display device
JP3491832B2 (en) * 2001-01-30 2004-01-26 株式会社ナムコ GAME DEVICE AND INFORMATION STORAGE MEDIUM
JP4825594B2 (en) * 2005-07-07 2011-11-30 株式会社リコー Parts selection device

Also Published As

Publication number Publication date
JP2010092233A (en) 2010-04-22
WO2010041557A1 (en) 2010-04-15
JP4912377B2 (en) 2012-04-11

Similar Documents

Publication Publication Date Title
TW201030673A (en) Display device, display method, information recording medium, and program
US8988494B2 (en) Storage medium encoded with display control program, display, display system, and display control method
JP6875346B2 (en) Information processing methods and devices, storage media, electronic devices
CN107924113A (en) User interface for camera effect
US9152301B2 (en) Information processing apparatus including plurality of display portions and information processing system
EP3835953A1 (en) Display adaptation method and apparatus for application, device, and storage medium
WO2019024700A1 (en) Emoji display method and device, and computer readable storage medium
US9275608B2 (en) Display device
CN108710462A (en) Equipment, method and graphic user interface for manipulating user interface object using vision and/or touch feedback
US20150067540A1 (en) Display apparatus, portable device and screen display methods thereof
TWI377082B (en) Instruction content determination device, instruction content determination method, and information recording medium
EP2919103A1 (en) Information processing device, information processing method and computer-readable recording medium
TW201214266A (en) Three dimensional user interface effects on a display by using properties of motion
WO2022083241A1 (en) Information guide method and apparatus
US9495064B2 (en) Information processing method and electronic device
CN107977083A (en) Operation based on VR systems performs method and device
CN112691372B (en) Virtual item display method, device, equipment and readable storage medium
US20180268568A1 (en) Color analysis and control using an electronic mobile device transparent display screen
WO2022067344A1 (en) User interfaces associated with remote input devices
CN112044065A (en) Virtual resource display method, device, equipment and storage medium
AU2020101324A4 (en) Multi-participant live communication user interface
CN107656794B (en) Interface display method and device
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
CN112007362A (en) Display control method, device, storage medium and equipment in virtual world
CN110209316A (en) Class label display methods, device, terminal and storage medium