WO2019039416A1 - Display device and program - Google Patents

Display device and program

Info

Publication number
WO2019039416A1
WO2019039416A1 (PCT/JP2018/030604, JP2018030604W)
Authority
WO
WIPO (PCT)
Prior art keywords
input area
image
display
input
virtual space
Prior art date
Application number
PCT/JP2018/030604
Other languages
French (fr)
Japanese (ja)
Inventor
長谷川 洋
Original Assignee
シャープ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2019039416A1

Links

Images

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 - Processing input control signals of video game devices by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • The following invention relates to a display device and a program.
  • Japanese Patent Application Laid-Open No. 2013-167938 discloses an information input device that receives input operations in a three-dimensional space.
  • The information input device sets a plurality of rectangular-parallelepiped regions, obtained by dividing the virtual space in front of the display screen (on the user side), as areas that receive input from the user, and an input option is assigned to each area.
  • The information input device detects the three-dimensional coordinates of the user's finger, determines which rectangular-parallelepiped area in the virtual space contains the detected coordinates, and performs the processing associated with the area that contains the finger's position.
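The prior-art scheme above, which divides the space in front of the screen into fixed cuboid areas and maps the detected fingertip to one of them, can be sketched as follows (all names and the two-area layout are illustrative assumptions, not from the publication):

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Cuboid:
    """Axis-aligned box: minimum corner (x, y, z) and extent (w, h, d)."""
    x: float
    y: float
    z: float
    w: float
    h: float
    d: float

    def contains(self, px: float, py: float, pz: float) -> bool:
        return (self.x <= px <= self.x + self.w
                and self.y <= py <= self.y + self.h
                and self.z <= pz <= self.z + self.d)

def hit_area(areas: Dict[str, Cuboid], finger: Tuple[float, float, float]) -> Optional[str]:
    """Return the input option assigned to the area containing the fingertip, if any."""
    for option, box in areas.items():
        if box.contains(*finger):
            return option
    return None

# Two fixed cuboid areas side by side in the space in front of the screen
# (z is the depth axis toward the user).
areas = {"yes": Cuboid(0, 0, 0, 10, 10, 5), "no": Cuboid(10, 0, 0, 10, 10, 5)}
```

For example, `hit_area(areas, (12, 3, 2))` maps a fingertip in the right-hand region to the "no" option, and a fingertip outside every region maps to no option at all, which is exactly the fixed-area limitation the invention addresses.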
  • The invention disclosed below aims to provide a technology that enables input operations at appropriate positions according to 3D image content.
  • A display device according to one embodiment includes: an acquisition unit that acquires image data for displaying at least one object included in content as a stereoscopic image, and input area information indicating an input area for the object in a virtual space in which the stereoscopic image of the object is displayed; a display unit that displays the stereoscopic image based on the image data; a detection unit that detects the position, in three-dimensional space, of an operating body used to perform an input operation on the displayed stereoscopic image; a determination unit that sets the input area in the virtual space for the stereoscopic image based on the display position of the displayed stereoscopic image and the input area information, and determines, based on the detected position of the operating body, whether the operating body is included in the input area; and an execution unit that executes predetermined processing when the operating body is determined to be included in the input area. The input area is a predetermined range that overlaps at least a part of the stereoscopic image in the virtual space.
  • FIG. 1 is an image diagram for explaining a stereoscopic image and an input area in the embodiment.
  • FIG. 2 is a functional block diagram of the display device in the embodiment.
  • FIG. 3 is an image diagram for explaining the input area of the modification (1).
  • FIG. 4 is an image diagram for explaining an input area different from that in FIG.
  • FIG. 5 is an image diagram for explaining an input area different from those in FIGS. 3 and 4.
  • FIG. 6 is an image diagram for explaining an input area different from those in FIGS. 3 to 5.
  • A display device according to one embodiment of the invention includes: an acquisition unit that acquires image data for displaying at least one object included in content as a stereoscopic image, and input area information indicating an input area for the object in a virtual space in which the stereoscopic image of the object is displayed; a display unit that displays the stereoscopic image based on the image data; a detection unit that detects the position, in three-dimensional space, of an operating body used to perform an input operation on the displayed stereoscopic image; a determination unit that sets the input area in the virtual space for the stereoscopic image based on the display position of the displayed stereoscopic image and the input area information, and determines, based on the detected position of the operating body, whether the operating body is included in the input area; and an execution unit that executes predetermined processing when the operating body is determined to be included in the input area. The input area is a predetermined range that overlaps at least a part of the stereoscopic image in the virtual space (a first configuration).
  • According to the first configuration, the input area is set for the stereoscopic image of the object included in the content.
  • The input area is a predetermined range that overlaps at least a part of the stereoscopic image in the virtual space, and predetermined processing is executed when the position of the operating body in three-dimensional space is included in the input area. Therefore, input operations can be received at positions appropriate to the objects in the content, without restricting the content.
  • In the first configuration, the input area information may include information indicating the range of an input area for each of a plurality of portions of the stereoscopic image in the virtual space (a second configuration). According to the second configuration, a plurality of input areas can be set for one object, which improves the flexibility of input operations.
  • In the first configuration, the acquisition unit may acquire a plurality of image data for displaying each of a plurality of objects as a stereoscopic image, together with the input area information for the stereoscopic image of each object in the virtual space.
  • The display unit may then display a plurality of stereoscopic images on the same screen based on the plurality of image data, and the determination unit may determine whether the detected position of the operating body is included in the input area of any of the stereoscopic images of the plurality of objects (a third configuration). According to the third configuration, an input operation can be received for each of the stereoscopic images of the plurality of objects.
  • In any of the first to third configurations, the determination unit may reset the input area for the stereoscopic image according to a change in the display position of the stereoscopic image (a fourth configuration).
  • According to the fourth configuration, the input area can be set at an appropriate position according to the display position of the object.
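One way to realize the fourth configuration is to store the input area as an offset and size relative to the object, and recompute the absolute box whenever the object's display position changes. A minimal sketch, with all names assumed:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class InputAreaSpec:
    """Input area stored relative to the object, per the input area information."""
    offset: Tuple[float, float, float]  # anchor offset from the object's display position
    size: Tuple[float, float, float]    # extent of the cuboid along x, y, z

def resolve_area(spec: InputAreaSpec, object_pos: Tuple[float, float, float]):
    """Recompute the absolute (min_corner, max_corner) box for the current display position."""
    mn = tuple(p + o for p, o in zip(object_pos, spec.offset))
    mx = tuple(m + s for m, s in zip(mn, spec.size))
    return mn, mx

spec = InputAreaSpec(offset=(-1.0, 0.0, 0.0), size=(2.0, 2.0, 2.0))
box_before = resolve_area(spec, (0.0, 0.0, 5.0))  # object at its initial position
box_after = resolve_area(spec, (3.0, 0.0, 5.0))   # object has moved: the area follows
```

Because only the object position changes between calls, the input area tracks the stereoscopic image without the content author re-specifying the area for every frame.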
  • A program according to one embodiment of the invention causes a computer of a display device having a display unit to execute: an acquisition step of acquiring image data for displaying at least one object included in content as a stereoscopic image on the display unit, and input area information indicating an input area for the object in a virtual space in which the stereoscopic image of the object is displayed; a display control step of causing the display unit to display the stereoscopic image based on the acquired image data; a detection step of detecting the position, in three-dimensional space, of an operating body used to perform an input operation on the displayed stereoscopic image; a determination step of setting the input area in the virtual space for the stereoscopic image based on the display position of the displayed stereoscopic image and the input area information, and determining, based on the detected position of the operating body, whether the operating body is included in the input area; and an execution step of executing predetermined processing when the operating body is determined to be included in the input area. The input area is a predetermined range that overlaps at least a part of the stereoscopic image in the virtual space (a first program).
  • According to the first program, an input area is set for the stereoscopic image of an object included in the content.
  • The input area is a predetermined range that overlaps at least a part of the stereoscopic image in the virtual space, and predetermined processing is executed when the position of the operating body in three-dimensional space is included in the input area. Therefore, input operations can be received at positions appropriate to the objects in the content, without restricting the content.
  • The display device in the present embodiment displays a stereoscopic image (3D image) of an object representing, for example, a game character.
  • The display device receives input operations from the user on the displayed 3D image, and executes processing according to an input operation when that operation is performed within an input area preset for the displayed 3D image.
  • The 3D image is composed of a right-eye display image and a left-eye display image that represent the same object.
  • The object is viewed stereoscopically by exploiting the parallax produced by shifting the positions of the right-eye and left-eye display images.
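The pop-out distance produced by shifting the two display images follows from similar triangles between the eyes and the screen. A hedged sketch (the function name and the 65 mm interpupillary default are assumptions, not values from the publication):

```python
def perceived_protrusion(disparity: float, viewing_distance: float,
                         interpupillary: float = 0.065) -> float:
    """Distance (m) in front of the screen at which a fused point appears,
    given the crossed disparity `disparity` (m) between the left-eye and
    right-eye images on the screen, the `viewing_distance` (m) from the eyes
    to the screen, and the eye separation `interpupillary` (m).
    From similar triangles, z = V * d / (e + d)."""
    return viewing_distance * disparity / (interpupillary + disparity)

# A 1 cm crossed shift viewed from 50 cm pops out by roughly 6.7 cm,
# which is how the 3D image Pa appears in front of the display surface D.
z = perceived_protrusion(0.01, 0.5)
```

A larger shift between the two images yields a larger apparent protrusion, which is why the display positions of the two images define the apparent depth of the 3D image Pa.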
  • FIG. 1 is an image diagram showing a 3D image displayed by the display device in the present embodiment and the input area set for that 3D image.
  • The 3D image Pa of an object displayed on the display surface D is perceived at a position protruding from the display surface D toward the user (the positive X-axis side as viewed from the user).
  • For the 3D image Pa, a rectangular-parallelepiped input area Pb for receiving input operations is set.
  • The input area Pb is not visible to the user.
  • When an input operation is performed on the input area Pb, the display device executes processing according to that operation.
  • In the present embodiment, the 3D image can be viewed stereoscopically by the user with the naked eye.
  • The shape of the input area Pb is not limited to a rectangular parallelepiped; any three-dimensional shape may be used.
  • FIG. 2 is a block diagram showing the functional configuration of the display device 1 in the present embodiment.
  • The display device 1 includes a control unit 10, a position detection unit 11, a display unit 12, and a storage unit 13.
  • The position detection unit 11 continually detects the position (coordinates) in three-dimensional space of the operating body used for input operations, and outputs position information indicating the detection result to the control unit 10.
  • The operating body may be, for example, the user's finger.
  • The position detection unit 11 can use, for example, a three-dimensional position sensor.
  • In that case, the three-dimensional position sensor is attached to the user's hand, and the position information it detects is output to the input determination unit 111 by wire or wirelessly.
  • Position detection of the operating body is not limited to a three-dimensional position sensor; an imaging unit such as a camera may be used instead.
  • When cameras are used, for example, one camera is disposed on each side of the display surface D, each camera photographs the user's finger, and the three-dimensional position of the finger is detected from the captured images using triangulation or the like.
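For the camera-based variant, the finger's depth can be recovered from the disparity between two rectified views using the textbook relation Z = f * B / d. The parameter values below are illustrative only, not from the publication:

```python
def triangulate_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo camera pair: Z = f * B / d,
    where f is the focal length in pixels, B the distance between the two
    cameras in metres, and d the horizontal disparity of the point (in pixels)
    between the two captured images."""
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity (lie in front of the cameras)")
    return focal_px * baseline_m / disparity_px

# A fingertip seen 40 px apart by two cameras 10 cm apart (f = 800 px)
# is about 2 m from the camera pair.
depth = triangulate_depth(800.0, 0.10, 40.0)
```

Combining this depth with the point's image coordinates yields the full three-dimensional finger position that the input determination unit consumes.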
  • The display unit 12 includes, for example, a liquid crystal display panel or an organic EL display panel, and displays each image of an object in the game program under the control of the control unit 10.
  • The storage unit 13 is realized by, for example, a non-volatile memory such as a mask ROM (Read-Only Memory), a flash memory, or an EEPROM (Electrically Erasable Programmable Read-Only Memory).
  • The storage unit 13 stores scenario data, including object data, according to the progress of the game.
  • The scenario data stores, in time series, 3D image data for displaying each object as a 3D image, together with input area information including the position and size of the input area Pb for each object.
  • The scenario data also stores execution-process information that defines the display mode of the object with respect to the input area Pb, such as changing the expression or posture of the object when an input operation is performed on the input area Pb.
  • The 3D image data includes right-eye and left-eye image data for each object in the time series, and information indicating the display positions of the right-eye and left-eye image data.
  • The apparent position of the 3D image Pa in the depth direction relative to the display surface D is defined by the display positions of the right-eye and left-eye image data of the object.
  • The input area Pb is set at a position overlapping a predetermined part of the object (3D image Pa) in the virtual space, such as its head or body. That is, in the present embodiment, a predetermined range based on a predetermined position (coordinates) on the 3D image Pa in the virtual space is stored in advance as the input area information for the input area Pb.
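The scenario data described above pairs, per time step, the right-eye and left-eye image data and display position of each object with the relative range of its input area Pb. One possible in-memory layout (all field names and values are assumptions for illustration):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InputAreaInfo:
    part: str                           # e.g. the "head" or "body" of the object
    offset: Tuple[float, float, float]  # range anchor, relative to the object's position
    size: Tuple[float, float, float]    # extent of the cuboid range

@dataclass
class ScenarioFrame:
    time: float
    right_eye_image: str                # image resource shown to the right eye
    left_eye_image: str                 # image resource shown to the left eye
    display_pos: Tuple[float, float, float]
    input_areas: List[InputAreaInfo]    # one object may carry several areas

# Two time steps of one object; the head area is stored once, relative
# to the object, and stays valid as the display position changes.
scenario = [
    ScenarioFrame(0.0, "chr_r_000.png", "chr_l_000.png", (0.0, 0.0, 0.3),
                  [InputAreaInfo("head", (-0.1, 0.5, -0.1), (0.2, 0.2, 0.2))]),
    ScenarioFrame(0.5, "chr_r_001.png", "chr_l_001.png", (0.2, 0.0, 0.3),
                  [InputAreaInfo("head", (-0.1, 0.5, -0.1), (0.2, 0.2, 0.2))]),
]
```

Storing the area relative to a predetermined position on the 3D image Pa is what lets the input determination unit reset the area's coordinate range as the object moves.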
  • The control unit 10 includes the functional units of an input determination unit 111, a 3D image display control unit 112, and a process execution unit 113.
  • The control unit 10 has a CPU and memory (ROM and RAM); the CPU executes a game program stored in the ROM, and the functions of the above units are realized through that game processing.
  • The game program may be stored in the storage unit 13 together with the scenario data.
  • The 3D image display control unit 112 displays the 3D image Pa by causing the display unit 12 to display the right-eye and left-eye image data of the object based on the scenario data in the storage unit 13. The 3D image display control unit 112 also changes the display mode of the object in accordance with instructions from the process execution unit 113, described later.
  • The 3D image display control unit 112 may be realized by a GPU (Graphics Processing Unit).
  • The input determination unit 111 reads the input area information of the input area Pb for the object displayed by the 3D image display control unit 112 from the storage unit 13, and sets the coordinate range of the input area Pb for the object. The input determination unit 111 also resets the coordinate range of the input area Pb to follow the object, based on the time-series display position of the object, which changes as the game progresses.
  • The input determination unit 111 continually acquires the position information of the operating body from the position detection unit 11 and converts it, using a predetermined coordinate conversion table, into three-dimensional coordinates in the virtual space.
  • The converted three-dimensional coordinates indicate the position of the operating body in the virtual space.
  • When the three-dimensional coordinates indicating the position of the operating body in the virtual space are included in the input area Pb, the input determination unit 111 outputs a determination result indicating that an input operation has been performed on the input area Pb to the process execution unit 113.
  • When the process execution unit 113 acquires, from the input determination unit 111, a determination result indicating that an input operation has been performed on the input area Pb, it refers to the execution-process information of the scenario data in the storage unit 13 and instructs the 3D image display control unit 112 to perform the corresponding processing.
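The determination path just described (convert the sensed position into virtual-space coordinates via a conversion table, test it against the input area Pb, and notify the process execution unit) can be sketched as follows; the linear calibration stand-in and all names are assumptions:

```python
def to_virtual(sensor_pos, scale=(1.0, 1.0, 1.0), origin=(0.0, 0.0, 0.0)):
    """Stand-in for the coordinate conversion table: map sensor coordinates
    into virtual-space coordinates with a per-axis linear calibration."""
    return tuple(o + s * p for p, s, o in zip(sensor_pos, scale, origin))

def in_area(point, area_min, area_max):
    """True if the converted point lies inside the cuboid input area Pb."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, area_min, area_max))

def process_input(sensor_pos, area_min, area_max, on_hit):
    """One cycle of the input determination unit: convert, test, notify."""
    p = to_virtual(sensor_pos)
    if in_area(p, area_min, area_max):
        on_hit(p)  # the process execution unit reacts, e.g. changes the object's pose
        return True
    return False

hits = []
process_input((0.5, 0.5, 0.5), (0, 0, 0), (1, 1, 1), hits.append)  # inside Pb
process_input((2.0, 0.5, 0.5), (0, 0, 0), (1, 1, 1), hits.append)  # outside Pb
```

Only the first position triggers the handler, mirroring how the process execution unit is notified solely when the operating body enters the input area Pb.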
  • As described above, the input area Pb is set for each object at a position overlapping a predetermined portion of the object's 3D image Pa, and is reset according to the display position of the object. Therefore, compared with a scheme in which the range that can receive input operations is fixed, the input area Pb can be set in step with the movement and state of the object, and input operations can be received at positions appropriate to the progress of the game. Furthermore, in the above embodiment, the user can view the 3D image stereoscopically with the naked eye and operate on the 3D image in real space, so a more realistic gameplay experience is obtained than when operating on a 3D image through an HMD (Head-Mounted Display).
  • The display device is not limited to the configuration of the embodiment described above; various modified configurations are possible.
  • Input areas Pb11 and Pb12 may be set for the 3D images Pa1 and Pa2, respectively, of two objects displayed on the same screen.
  • The input area Pb is set for the 3D image Pa of an object.
  • An input area Pb' may then be set at a position different from the input area Pb on the 3D image Pa shown in FIG. 5(a).
  • Input areas Pb31 and Pb41 are set for the 3D image Pa of an object.
  • Input areas Pb32 and Pb42 of sizes different from the input areas Pb31 and Pb41 on the 3D image Pa shown in FIG. 6(a) may then be set. That is, at least one of the size and the position of the input area Pb for an object may be changed according to a scene change or the like.
  • In the above embodiment, the 3D image data and the input area information are stored in the display device 1.
  • However, the 3D image data and the input area information may instead be stored in a storage device external to the display device 1.
  • The external storage device may be connected to the display device 1 via a wired or wireless communication line, or it may be a semiconductor memory or the like that can be attached to and detached from the display device 1.
  • In that case, the display device 1 includes an acquisition unit that acquires the 3D image data and the input area information from the external storage device, displays a 3D image on the display unit 12 based on the acquired 3D image data, and sets an input area for the 3D image based on the acquired input area information.
  • In the above embodiment, the user views the 3D image displayed on the display device 1 stereoscopically with the naked eye and performs input operations on it; however, the 3D image may instead be displayed using an HMD, and input operations on that 3D image may be received.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This display device obtains image data for displaying one or more objects included in content as three-dimensional images, and also obtains input area information representing an input area Pb for the objects in a virtual space in which the three-dimensional images Pa of the objects are to be displayed. The display device displays the three-dimensional images Pa on the basis of the acquired image data, and detects the location in three-dimensional space of an operation element used to perform an input operation on the displayed three-dimensional images Pa. The display device sets the input area Pb in the virtual space for the three-dimensional images on the basis of the input area information and the display location of the three-dimensional images Pa to be displayed, determines whether or not the operation element is included in the input area Pb on the basis of the detected location of the operation element, and executes prescribed processing when the operation element is included in the input area Pb.

Description

Display device and program

The following invention relates to a display device and a program.

Conventionally, techniques have been proposed that display a stereoscopic image of a game character, receive an input operation from the user in three-dimensional space, and execute predetermined processing. Japanese Patent Application Laid-Open No. 2013-167938 discloses an information input device that receives such input operations in a three-dimensional space. The information input device sets a plurality of rectangular-parallelepiped regions, obtained by dividing the virtual space in front of the display screen (on the user side), as areas that receive input from the user, and an input option is assigned to each area. The information input device detects the three-dimensional coordinates of the user's finger, determines which rectangular-parallelepiped area in the virtual space contains the detected coordinates, and performs the processing associated with the area containing the finger's position.

In the case of Japanese Patent Application Laid-Open No. 2013-167938, the input area in the virtual space where the user can perform input is fixed, so 3D image content such as a game is restricted, and it is difficult to receive input operations at positions appropriate to the progress of the game.

The invention disclosed below aims to provide a technology that enables input operations at appropriate positions according to 3D image content.
A display device according to one embodiment includes: an acquisition unit that acquires image data for displaying at least one object included in content as a stereoscopic image, and input area information indicating an input area for the object in a virtual space in which the stereoscopic image of the object is displayed; a display unit that displays the stereoscopic image based on the image data; a detection unit that detects the position, in three-dimensional space, of an operating body used to perform an input operation on the displayed stereoscopic image; a determination unit that sets the input area in the virtual space for the stereoscopic image based on the display position of the displayed stereoscopic image and the input area information, and determines, based on the detected position of the operating body, whether the operating body is included in the input area; and an execution unit that executes predetermined processing when the operating body is determined to be included in the input area. The input area is a predetermined range that overlaps at least a part of the stereoscopic image in the virtual space.

According to the above configuration, input operations can be received at appropriate positions according to the 3D image content.
FIG. 1 is an image diagram for explaining a stereoscopic image and an input area in the embodiment. FIG. 2 is a functional block diagram of the display device in the embodiment. FIG. 3 is an image diagram for explaining the input area of modification (1). FIG. 4 is an image diagram for explaining an input area different from that in FIG. 3. FIG. 5 is an image diagram for explaining an input area different from those in FIGS. 3 and 4. FIG. 6 is an image diagram for explaining an input area different from those in FIGS. 3 to 5.
 発明の一実施形態における表示装置は、コンテンツに含まれる少なくとも1つのオブジェクトを立体画像として表示するための画像データと、前記オブジェクトの立体画像が表示される仮想空間における当該オブジェクトに対する入力エリアを示す入力エリア情報を取得する取得部と、前記画像データに基づいて前記立体画像を表示する表示部と、表示される前記立体画像に対して入力操作を行うための操作体の3次元空間における位置を検出する検出部と、表示される前記立体画像の表示位置と前記入力エリア情報とに基づいて、当該立体画像に対する前記仮想空間の入力エリアを設定し、検出された前記操作体の位置に基づいて、当該操作体が前記入力エリア内に含まれるか否かを判定する判定部と、前記操作体が前記入力エリア内に含まれると判定された場合、所定の処理を実行する実行部と、を備え、前記入力エリアは、前記仮想空間における前記立体画像の少なくとも一部と重なる所定範囲である(第1の構成)。 A display apparatus according to an embodiment of the present invention is an input indicating image data for displaying at least one object included in content as a stereoscopic image, and an input area for the object in a virtual space where the stereoscopic image of the object is displayed. A position in an three-dimensional space of an operating tool for performing an input operation on the displayed three-dimensional image and a display unit for displaying the three-dimensional image based on the image data and an acquisition unit for acquiring area information The input area of the virtual space for the stereoscopic image is set based on the detecting unit, the display position of the stereoscopic image to be displayed, and the input area information, and the detected position of the operating body is determined. A determination unit that determines whether the operating body is included in the input area; and the operating body is in the input area If it is determined that Murrell, and a execution unit for executing predetermined processing, the input area is a predetermined range overlapping at least a portion of the stereoscopic image in the virtual space (a first configuration).
 第1の構成によれば、コンテンツに含まれるオブジェクトの立体画像に対して入力エリアが設定される。入力エリアは、仮想空間における立体画像の少なくとも一部と重なる所定範囲であり、3次元空間における操作体の位置が入力エリアに含まれる場合、所定の処理が実行される。そのため、コンテンツが制限されることなく、コンテンツにおけるオブジェクトに応じた適切な位置での入力操作を受け付けることができる。 According to the first configuration, the input area is set for the stereoscopic image of the object included in the content. The input area is a predetermined range overlapping with at least a part of the three-dimensional image in the virtual space, and when the position of the operating body in the three-dimensional space is included in the input area, predetermined processing is performed. Therefore, it is possible to receive an input operation at an appropriate position according to the object in the content without the content being restricted.
 第1の構成において、前記入力エリア情報は、前記仮想空間の前記立体画像における複数部分のそれぞれに対する各入力エリアの範囲を示す情報を含むこととしてもよい(第2の構成)。第2の構成によれば、オブジェクトに対して複数の入力エリアを設定することができるので、入力操作の自由度を向上させることができる。 In the first configuration, the input area information may include information indicating a range of each input area for each of a plurality of portions in the three-dimensional image of the virtual space (second configuration). According to the second configuration, since a plurality of input areas can be set for an object, the degree of freedom of input operation can be improved.
 第1の構成において、前記取得部は、複数のオブジェクトのそれぞれを立体画像として表示するための複数の画像データと、前記複数のオブジェクトのそれぞれの前記仮想空間における立体画像に対する前記入力エリア情報を取得し、前記表示部は、前記複数の画像データに基づいて複数の立体画像を同一画面上に表示し、前記判定部は、検出された前記操作体の位置が、前記複数のオブジェクトの前記立体画像のいずれかの前記入力エリア内に含まれるか否かを判定することとしてもよい(第3の構成)。第3の構成によれば、複数のオブジェクトの立体画像に対してそれぞれ入力操作を受け付けることができる。 In the first configuration, the acquisition unit acquires the plurality of image data for displaying each of a plurality of objects as a stereoscopic image, and the input area information for the stereoscopic image in the virtual space of each of the plurality of objects. The display unit displays a plurality of stereoscopic images on the same screen based on the plurality of image data, and the determination unit determines that the detected position of the operating body is the stereoscopic image of the plurality of objects. It may be determined whether it is included in any one of the input areas (third configuration). According to the third configuration, it is possible to receive an input operation for each of stereoscopic images of a plurality of objects.
 第1から第3のいずれかの構成において、前記判定部は、前記立体画像の表示位置の変化に応じて当該立体画像に対する前記入力エリアを設定し直すこととしてもよい(第4の構成)。第4の構成によれば、オブジェクトの表示位置に応じた適切な位置に入力エリアを設定することができる。 In any one of the first to third configurations, the determination unit may reset the input area for the stereoscopic image according to a change in a display position of the stereoscopic image (fourth configuration). According to the fourth configuration, the input area can be set at an appropriate position according to the display position of the object.
 本発明の一実施形態に係るプログラムは、表示部を有する表示装置のコンピュータに、コンテンツに含まれる少なくとも1つのオブジェクトを立体画像として前記表示部に表示するための画像データと、前記オブジェクトの立体画像が表示される仮想空間における当該オブジェクトに対する入力エリアを示す入力エリア情報を取得する取得ステップと、取得された前記画像データに基づいて前記立体画像を前記表示部に表示させる表示制御ステップと、表示された前記立体画像に対して入力操作を行うための操作体の3次元空間における位置を検出する検出ステップと、表示される前記立体画像の表示位置と前記入力エリア情報とに基づいて、当該立体画像に対する前記仮想空間の入力エリアを設定し、検出された前記操作体の位置に基づいて、当該操作体が前記入力エリア内に含まれるか否かを判定する判定ステップと、前記操作体が前記入力エリア内に含まれると判定された場合、所定の処理を実行する実行手段と、を実行させ、前記入力エリアは、前記仮想空間における前記立体画像の少なくとも一部と重なる所定範囲である(第1のプログラム)。 A program according to an embodiment of the present invention is an image data for displaying at least one object included in content as a stereoscopic image on the display unit on a computer of a display device having the display unit, and a stereoscopic image of the object Obtaining an input area information indicating an input area for the object in the virtual space in which the image is displayed, a display control step of causing the display unit to display the stereoscopic image based on the acquired image data, and Detecting the position in the three-dimensional space of the operating body for performing an input operation on the three-dimensional image, and based on the display position of the three-dimensional image to be displayed and the input area information, Set the input area of the virtual space for the A determination step of determining whether the operation body is included in the input area, and execution means for executing a predetermined process when it is determined that the operation body is included in the input area; And the input area is a predetermined range overlapping with at least a part of the three-dimensional image in the virtual space (first program).
 According to the first program, an input area is set for the stereoscopic image of an object included in the content. The input area is a predetermined range overlapping at least a part of the stereoscopic image in the virtual space, and a predetermined process is executed when the position of the operating body in three-dimensional space falls within the input area. An input operation can therefore be accepted at an appropriate position corresponding to the object in the content, without restricting the content.
 Embodiments of the invention are described in detail below with reference to the drawings. Identical or corresponding parts in the drawings are given the same reference characters, and their description is not repeated. To keep the description easy to follow, the drawings referred to below show configurations in simplified or schematic form, and some components are omitted. The dimensional ratios between components shown in the drawings also do not necessarily reflect actual dimensional ratios.
 (Overview of the display device)
 The display device in the present embodiment displays a stereoscopic image (3D image) of an object representing, for example, a game character. The display device accepts input operations from the user on the displayed 3D image, and executes processing corresponding to an input operation when that operation is performed within an input area set in advance for the displayed 3D image.
 A 3D image consists of a right-eye display image and a left-eye display image that represent the same object. The object is perceived stereoscopically by exploiting the parallax produced by shifting the positions of the right-eye and left-eye display images.
 FIG. 1 is a conceptual diagram showing a 3D image displayed by the display device of the present embodiment and the input area set for that 3D image. In this example, the 3D image Pa of the object displayed on the display surface D is perceived, as seen from the user (the positive X-axis side), at a position protruding from the display surface D toward the user.
 In the present embodiment, a rectangular-parallelepiped input area Pb for accepting input operations is set in the virtual space in which the 3D image Pa is displayed on the user side of the display surface D. The input area Pb is invisible to the user. When the user performs an input operation within the input area Pb, the display device executes processing corresponding to that operation. In the present embodiment, the 3D image is preferably one the user can view stereoscopically with the naked eye. Although the input area Pb is described here as a rectangular parallelepiped by way of example, its shape is not limited to this and may be any three-dimensional shape.
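The containment check implied above — the operating body's coordinates tested against a rectangular-parallelepiped region — can be sketched as an axis-aligned box test. This is an illustrative sketch, not the patent's implementation; the class name, field names, and coordinate values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InputArea:
    """Axis-aligned rectangular parallelepiped in virtual-space coordinates."""
    min_corner: tuple  # (x, y, z) lower bound of the box
    max_corner: tuple  # (x, y, z) upper bound of the box

    def contains(self, point):
        # The operating body is inside Pb when every coordinate lies
        # between the corresponding bounds.
        return all(lo <= c <= hi
                   for lo, c, hi in zip(self.min_corner, point, self.max_corner))

# Hypothetical box floating in front of the display surface D
pb = InputArea(min_corner=(10, -5, 20), max_corner=(25, 5, 35))
print(pb.contains((15, 0, 30)))  # fingertip inside the area -> True
print(pb.contains((15, 0, 50)))  # too far in front of the box -> False
```

Any convex three-dimensional region would serve equally well; the axis-aligned box merely makes the membership test a per-axis comparison.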
 (Configuration of the display device)
 The configuration of the display device of the present embodiment is described in detail below. FIG. 2 is a block diagram showing the functional configuration of the display device 1 in the present embodiment. The display device 1 includes a control unit 10, a position detection unit 11, a display unit 12, and a storage unit 13.
 The position detection unit 11 continuously detects the position (coordinates) in three-dimensional space of the operating body used for input operations, and outputs position information indicating the detection result to the control unit 10. The operating body may be, for example, the user's finger.
 The position detection unit 11 may be, for example, a three-dimensional position sensor. The sensor is worn on the user's hand, and the position information it detects is output to the input determination unit 111 by wire or wirelessly. The position of the operating body need not be detected with a three-dimensional position sensor; imaging means such as cameras may be used instead. When cameras are used, for example, one camera may be placed on each side of the display surface D to capture the user's finger, and the three-dimensional position of the finger may be detected from the captured images using triangulation or a similar method.
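As a rough sketch of the camera-based alternative: with two parallel, rectified cameras a known baseline apart, the depth of the fingertip follows from its horizontal disparity between the two images. The formula below is generic stereo geometry under idealized assumptions (rectified cameras, equal focal length); the numbers are hypothetical, not figures from the patent.

```python
def triangulate_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point seen by two rectified cameras: z = f * B / d,
    where d = x_left - x_right is the disparity in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity

# 800 px focal length, cameras 0.10 m apart, fingertip disparity of 40 px
print(triangulate_depth(800, 0.10, 420, 380))  # -> 2.0 (metres)
```

In practice the x and y coordinates are recovered from the same geometry, and camera calibration supplies the focal length and baseline.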
 The display unit 12 includes, for example, a liquid crystal display panel or an organic EL display panel, and displays each image of an object in the game program under the control of the control unit 10.
 The storage unit 13 is implemented by non-volatile memory such as a mask ROM (Read-Only Memory), flash memory, or EEPROM (Electrically Erasable Programmable Read-Only Memory). The storage unit 13 stores scenario data, which includes data such as objects corresponding to the progress of the game. Specifically, the scenario data stores, in time series, 3D image data for displaying each object as a 3D image, together with input area information including the position, size, and other attributes of the input area Pb for each object in the time series. The scenario data also stores execution processing information defining the display mode of the object in response to the input area Pb, for example changing the object's expression or posture when an input operation is performed on the input area Pb.
 The 3D image data includes right-eye and left-eye image data for each object in the time series, together with information indicating the display positions of the right-eye and left-eye image data.
 The apparent position of the 3D image Pa in the depth direction relative to the display surface D, that is, its position in the virtual space, is determined by the display positions of the object's right-eye and left-eye image data. The input area Pb is set at a position overlapping a predetermined part of the object (the 3D image Pa) in the virtual space, such as its head or torso. In other words, in the present embodiment, a predetermined range referenced to a predetermined position (coordinates) on the 3D image Pa in the virtual space is stored in advance as the input area information for the input area Pb.
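Storing the input area as "a predetermined range referenced to a predetermined position on the 3D image" can be sketched as an offset box attached to a body-part anchor. All names and numbers here are hypothetical illustrations; the units are arbitrary (e.g. centimetres).

```python
def input_area_from_anchor(anchor, offset_min, offset_max):
    """Build an input-area box from a reference coordinate on the 3D image
    (e.g. the object's head) plus stored per-part offsets."""
    lo = tuple(a + o for a, o in zip(anchor, offset_min))
    hi = tuple(a + o for a, o in zip(anchor, offset_max))
    return lo, hi

head_anchor = (0, 0, 30)  # hypothetical head position in virtual space
lo, hi = input_area_from_anchor(head_anchor, (-5, -5, -5), (5, 5, 5))
print(lo, hi)  # -> (-5, -5, 25) (5, 5, 35)
```

Because the box is expressed relative to the anchor, recomputing it whenever the object's display position changes gives the resetting behaviour described for the input determination unit.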
 The control unit 10 has the functional units of an input determination unit 111, a 3D image display control unit 112, and a process execution unit 113. The control unit 10 has a CPU and memory (ROM and RAM); the CPU executes a game program stored in the ROM to carry out game processing and realize the functions of the units above. The game program may also be stored in the storage unit 13 together with the scenario data.
 The 3D image display control unit 112 displays the 3D image Pa by causing the display unit 12 to display the object's right-eye and left-eye image data based on the scenario data in the storage unit 13. The 3D image display control unit 112 also changes the display mode of the object in accordance with instructions from the process execution unit 113 described below. The 3D image display control unit 112 may be implemented by a GPU (Graphics Processing Unit).
 The input determination unit 111 reads the input area information of the input area Pb for the object displayed by the 3D image display control unit 112 from the storage unit 13, and sets the coordinate range of the input area Pb for that object. The input determination unit 111 resets the coordinate range of the input area Pb so that it follows the object, based on the time-series display positions of the object, which change as the game processing progresses.
 The input determination unit 111 also continuously acquires the position information of the operating body from the position detection unit 11, converts the acquired position information into virtual-space coordinates using a predetermined coordinate conversion table, and treats the converted coordinates as the three-dimensional coordinates of the operating body's position in the virtual space. When the three-dimensional coordinates indicating the operating body's position in the virtual space fall within the input area Pb, the input determination unit 111 outputs to the process execution unit 113 a determination result indicating that an input operation has been performed on the input area Pb.
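The convert-then-judge flow of the input determination unit can be sketched as follows. The affine scale-and-offset mapping stands in for the patent's coordinate conversion table, whose actual form is not specified; all values are hypothetical.

```python
def sensor_to_virtual(p_sensor, scale, offset):
    """Stand-in for the coordinate conversion table: map 3D-position-sensor
    coordinates into virtual-space coordinates."""
    return tuple(c * s + o for c, s, o in zip(p_sensor, scale, offset))

def operation_in_area(p_sensor, area_lo, area_hi,
                      scale=(1, 1, 1), offset=(0, 0, 0)):
    """Return True when the converted operating-body position lies
    inside the input area Pb."""
    p = sensor_to_virtual(p_sensor, scale, offset)
    return all(lo <= c <= hi for lo, c, hi in zip(area_lo, p, area_hi))

# Hypothetical sensor reading; the sensor's x axis is shifted by +10
# relative to the virtual space in this example
print(operation_in_area((5, 0, 30), (10, -5, 25), (25, 5, 35),
                        offset=(10, 0, 0)))  # -> True
```

A True result corresponds to the determination result forwarded to the process execution unit 113.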
 When the process execution unit 113 receives from the input determination unit 111 a determination result indicating that an input operation has been performed on the input area Pb, it refers to the execution processing information in the scenario data of the storage unit 13 and instructs the 3D image display control unit 112 to perform the processing corresponding to that input area Pb.
 In the above embodiment, an input area Pb is set for each object at a position overlapping a predetermined part of the object's 3D image Pa, and the input area Pb is reset according to the object's display position. Compared with a fixed range for accepting input operations, the input area Pb can therefore be set in step with the object's motion and state, making it possible to accept input operations at positions appropriate to the progress of the game. Furthermore, in the above embodiment the user can view the 3D image stereoscopically with the naked eye and operate on it in real space, which provides a more realistic game experience than operating on a 3D image through an HMD (Head Mounted Display).
 An example of the display device has been described above, but the display device is not limited to the configuration of the embodiment described above and can take various modified forms.
<Modifications>
 (1) The embodiment above describes an example in which one input area Pb is set for an object, but as shown in FIG. 3, two input areas Pb1 and Pb2 may be set for the 3D image Pa of one object. In short, at least one input area Pb need only be set per object.
 As shown in FIG. 4, input areas Pb11 and Pb12 may also be set for the 3D images Pa1 and Pa2, respectively, of two objects displayed on the same screen.
 Further, for example, in one scene an input area Pb may be set for the 3D image Pa of an object as shown in FIG. 5(a), and in the next scene an input area Pb' may be set at a position on the 3D image Pa different from the input area Pb of FIG. 5(a), as shown in FIG. 5(b). Likewise, in one scene input areas Pb31 and Pb41 may be set for the 3D image Pa of an object as shown in FIG. 6(a), and in the next scene input areas Pb32 and Pb42 of sizes different from those of Pb31 and Pb41 may be set, as shown in FIG. 6(b). In other words, at least one of the size and the position of the input area Pb for an object may be changed in response to scene changes and the like.
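One way to realize such scene-dependent position and size changes is to key the input-area definitions by scene in the scenario data. The table below is purely illustrative; the patent does not specify a data layout, and the scene names, parts, and coordinates are hypothetical.

```python
# Hypothetical scenario table: each scene carries its own input-area list,
# so both the number of areas and their position/size may change per scene.
scenario_areas = {
    "scene_1": [{"part": "head",  "lo": (-5, -5, 25),  "hi": (5, 5, 35)}],
    "scene_2": [{"part": "head",  "lo": (-8, -8, 22),  "hi": (8, 8, 38)},
                {"part": "torso", "lo": (-6, -20, 10), "hi": (6, -6, 30)}],
}

def active_input_areas(scene):
    """Input areas to judge the operating body against in the current scene."""
    return scenario_areas[scene]

print(len(active_input_areas("scene_1")))  # -> 1
print(len(active_input_areas("scene_2")))  # -> 2
```

On each scene change, the input determination unit would simply reload the active list before judging the operating body's position.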
 (2) The embodiment above describes an example in which the 3D image data and the input area information are stored inside the display device 1, but they may instead be stored in a storage device external to the display device 1. The external storage device may, for example, be connected to the display device 1 via a wired or wireless communication line, or may be a semiconductor memory or the like attachable to and detachable from the display device 1. In this case, the display device 1 includes acquisition means for acquiring the 3D image data and the input area information from the external storage device, displays a 3D image on the display unit 12 based on the acquired 3D image data, and sets an input area for the 3D image based on the acquired input area information.
 (3) In the embodiment above, it is preferable that the user view the 3D image displayed by the display device 1 stereoscopically with the naked eye and perform input operations on it; however, the 3D image may instead be displayed using an HMD, with input operations accepted on that 3D image.

Claims (5)

  1.  A display device comprising:
     an acquisition unit that acquires image data for displaying at least one object included in content as a stereoscopic image, and input area information indicating an input area for the object in a virtual space in which the stereoscopic image of the object is displayed;
     a display unit that displays the stereoscopic image based on the image data;
     a detection unit that detects the position, in three-dimensional space, of an operating body used to perform an input operation on the displayed stereoscopic image;
     a determination unit that sets the input area of the virtual space for the stereoscopic image based on the display position of the displayed stereoscopic image and the input area information, and determines, based on the detected position of the operating body, whether the operating body is contained in the input area; and
     an execution unit that executes a predetermined process when the operating body is determined to be contained in the input area,
     wherein the input area is a predetermined range overlapping at least a part of the stereoscopic image in the virtual space.
  2.  The display device according to claim 1, wherein the input area information includes information indicating the range of an input area for each of a plurality of parts of the stereoscopic image in the virtual space.
  3.  The display device according to claim 1 or 2, wherein:
     the acquisition unit acquires a plurality of image data for displaying each of a plurality of objects as a stereoscopic image, and the input area information for the stereoscopic image of each of the plurality of objects in the virtual space;
     the display unit displays a plurality of stereoscopic images on the same screen based on the plurality of image data; and
     the determination unit determines whether the detected position of the operating body is contained in the input area of any of the stereoscopic images of the plurality of objects.
  4.  The display device according to any one of claims 1 to 3, wherein the determination unit resets the input area for the stereoscopic image in response to a change in the display position of the stereoscopic image.
  5.  A program for causing a computer of a display device having a display unit to execute:
     an acquisition step of acquiring image data for displaying at least one object included in content as a stereoscopic image on the display unit, and input area information indicating an input area for the object in a virtual space in which the stereoscopic image of the object is displayed;
     a display control step of causing the display unit to display the stereoscopic image based on the acquired image data;
     a detection step of detecting the position, in three-dimensional space, of an operating body used to perform an input operation on the displayed stereoscopic image;
     a determination step of setting the input area of the virtual space for the stereoscopic image based on the display position of the displayed stereoscopic image and the input area information, and determining, based on the detected position of the operating body, whether the operating body is contained in the input area; and
     an execution step of executing a predetermined process when the operating body is determined to be contained in the input area,
     wherein the input area is a predetermined range overlapping at least a part of the stereoscopic image in the virtual space.
PCT/JP2018/030604 2017-08-24 2018-08-20 Display device and program WO2019039416A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-160815 2017-08-24
JP2017160815 2017-08-24

Publications (1)

Publication Number Publication Date
WO2019039416A1 true WO2019039416A1 (en) 2019-02-28

Family

ID=65438768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/030604 WO2019039416A1 (en) 2017-08-24 2018-08-20 Display device and program

Country Status (1)

Country Link
WO (1) WO2019039416A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10207620A (en) * 1997-01-28 1998-08-07 Atr Chinou Eizo Tsushin Kenkyusho:Kk Stereoscopic interaction device and method therefor
US20140200080A1 (en) * 2011-06-15 2014-07-17 VTouch Co., Ltd. 3d device and 3d game device using a virtual touch
US20150363070A1 (en) * 2011-08-04 2015-12-17 Itay Katz System and method for interfacing with a device via a 3d display
JP2017004457A (en) * 2015-06-16 2017-01-05 株式会社ファイン Virtual reality display system, virtual reality display method, and computer program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18847705; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: JP)
122 Ep: pct application non-entry in european phase (Ref document number: 18847705; Country of ref document: EP; Kind code of ref document: A1)