WO2000039662A1 - Program selective execution device, data selective execution device, image display device, and channel selection device - Google Patents

Program selective execution device, data selective execution device, image display device, and channel selection device

Info

Publication number
WO2000039662A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
selection
input
dimensional
display
Prior art date
Application number
PCT/JP1999/007307
Other languages
French (fr)
Japanese (ja)
Inventor
Kenjiro Tsuda
Yoshihisa Nishigori
Hideaki Kobayashi
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from JP10368894A external-priority patent/JP2000196971A/en
Priority claimed from JP11009899A external-priority patent/JP3673425B2/en
Application filed by Matsushita Electric Industrial Co., Ltd. filed Critical Matsushita Electric Industrial Co., Ltd.
Publication of WO2000039662A1 publication Critical patent/WO2000039662A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/048023D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Definitions

  • Program selection execution device, data selection execution device, video display device, and channel selection device
  • The present invention relates to a program selection and execution device for selecting and executing a program on a personal computer or the like, a data selection and execution device for selecting and executing data, and a video display device and a channel selection device that receive a television broadcast or the like and select a channel through a displayed program guide. In particular, it relates to a program selection execution device, a data selection execution device, a video display device, and a channel selection device capable of realizing an intuitive and familiar operating environment when the user performs a selection operation.
  • Background art
  • In a conventional two-dimensional interface typified by Windows (registered trademark of Microsoft Corporation), programs and data to be selected and executed are displayed side by side on a two-dimensional screen using menus and the like, and the desired item is selected with a pointing device such as a mouse.
  • With this method, when the number of selectable items increases, some items are no longer shown in the display area. The user must therefore perform an operation such as scrolling the display area to bring the desired item into view before selecting it with a pointing device such as a mouse.
  • Today, with the spread of digital multi-channel broadcasting, a plurality of programs received via a broadcast network are presented in a multi-screen display using a promotion channel broadcast or the like.
  • The conventional multi-screen display uses a method in which the display screen is divided into rectangles and an image or channel is assigned to each divided area. To select an image or channel from the multi-screen display, a cursor or selection frame is first displayed to indicate to the user that an image or channel can be selected. The user then moves the cursor or selection frame with an input device such as a cross-key or a mouse and presses the selection button when the cursor or selection frame is on the image or channel to be selected. The selected video or channel is switched from the multi-screen display to a full-screen display on the display device.
  • The conventional program selection execution device and data selection execution device that use a menu display in a two-dimensional interface can be operated easily by users accustomed to personal computers, but are difficult to understand intuitively for users unfamiliar with them.
  • The conventional multi-screen display divides the display screen into rectangles, so as the number of divisions increases, the display size of each image becomes smaller and the images become harder to see, which makes it difficult to select a channel. Furthermore, because channel or video selection requires an operation procedure such as cursor movement followed by pressing the selection decision button, the operation becomes more complicated as the number of display screens increases.
  • The present invention has been made to solve the above problems, and its object is to provide a program selection and execution device, a data selection and execution device, a video display device, and a channel selection device that can realize an intuitive operating environment in which the user can easily handle programs and data on a personal computer and multi-screen video in broadcasting.
  • Disclosure of the invention
  • The program selection and execution device of the present invention comprises: selection object display means for displaying, on a display screen, an image in which a selection object is arranged in a three-dimensional virtual space, the selection object being a three-dimensional rotating object having a plurality of surfaces arranged at regular intervals with respect to a central axis, with a texture indicating the contents of a program pasted on each surface; rotation display control means for providing the selection object display means with a rotation display control signal for displaying an image in which the selection object rotates about the central axis in the three-dimensional virtual space; selection input means for inputting a selection input for selecting a program; and selection surface determination means for determining, when a selection input is input from the selection input means, which of the plurality of surfaces constituting the three-dimensional rotating object faces the front on the display screen.
  • The rotation display control means provides the rotation display control signal to the selection object display means in response to an externally input rotation instruction.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, this configuration can remind the user of the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The rotation display control means is provided with storage means for storing information for rotating the selection object in a predetermined pattern, and provides the rotation display control signal to the selection object display means based on the information stored in the storage means.
  • Alternatively, the rotation display control means is provided with holding means for holding information for rotating the selection object in a predetermined pattern; when a rotation instruction input is input from outside, the rotation display control signal is supplied to the selection object display means in response to the rotation instruction input, and when no rotation instruction input is input from outside, the rotation display control signal is provided to the selection object display means based on the information held in the holding means.
  • The program selection and execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to operate even for users unaccustomed to a personal computer can be realized. In addition, since the three-dimensional rotating object rotates automatically, the user only has to pay attention to selecting a program, and the operation is further simplified.
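  • As a rough illustration of how the two rotation modes described above might be combined, the following sketch (in Python, with hypothetical names) emits a per-frame rotation step that follows a stored auto-rotation pattern unless an external rotation instruction overrides it:

```python
# Minimal sketch of the rotation display control described above (hypothetical names).
# The controller emits a rotation-angle increment per frame: it follows a stored
# auto-rotation pattern unless an external rotation instruction overrides it.

class RotationDisplayController:
    def __init__(self, auto_pattern_deg_per_frame=0.5):
        # "holding means": information for rotating the object in a predetermined pattern
        self.auto_pattern = auto_pattern_deg_per_frame

    def rotation_control_signal(self, rotation_instruction=None):
        """Return the angle increment (degrees) for this frame.

        rotation_instruction: +1 / -1 for an external rotation instruction,
        or None when no instruction is input.
        """
        if rotation_instruction is not None:
            # Rotate in response to the externally input rotation instruction.
            return 15.0 * rotation_instruction
        # Otherwise rotate automatically according to the stored pattern.
        return self.auto_pattern


controller = RotationDisplayController()
print(controller.rotation_control_signal())      # auto-rotation step
print(controller.rotation_control_signal(+1))    # user-driven rotation step
```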
  • This invention (Claim 5) is the program selection execution device according to any one of Claims 1 to 4, further comprising counter means for counting, as the selection object rotates on the display screen, the number of times the front-facing surface among the plurality of surfaces constituting the three-dimensional rotating object is switched, and outputting count information, wherein the selection surface determination means determines the front-facing surface on the display screen based on the count information output by the counter means.
  • Alternatively, the selection surface determination means determines the front-facing surface on the display screen based on depth information obtained when the selection object display means displays the selection object on the screen.
  • The present invention is also the program selection execution device according to any one of Claims 1 to 4, wherein the selection surface determination means determines the front-facing surface on the display screen based on rotation angle information indicating the angle by which the selection object has rotated from its initial state.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the program selection and execution device having such a configuration can remind the user of the image of rolling a cylindrical rotating object in the real world, and an intuitive operating environment that is easy to use even for users who are not used to personal computers can be realized.
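  • For illustration only, the surface determination just described (count-based or rotation-angle-based) can be sketched as follows; the helper names and numbers are assumptions, not part of the disclosure:

```python
# Sketch of the selection surface determination (hypothetical helper functions).
# Both variants return the index (0..n_faces-1) of the surface facing the front.

def front_face_from_count(switch_count: int, n_faces: int) -> int:
    # Counter-based determination: every switch advances the front face by one.
    return switch_count % n_faces

def front_face_from_angle(rotation_angle_deg: float, n_faces: int) -> int:
    # Rotation-angle-based determination: surfaces are spaced 360/n degrees apart,
    # so the face nearest to the current angle is the one facing the front.
    step = 360.0 / n_faces
    return round(rotation_angle_deg / step) % n_faces

# For a six-sided object like that of FIG. 2: after 8 switches, or at 170 degrees
print(front_face_from_count(8, 6))        # -> 2
print(front_face_from_angle(170.0, 6))    # -> 3
```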
  • This invention (Claim 8) is based on the program selection execution device according to any one of Claims 1 to 4.
  • The program selection and execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, and since the execution screen of the selected program is displayed, the selection can be easily confirmed, so an intuitive operating environment that is easy to use even for users unfamiliar with computers can be realized.
  • The data selection execution device of the present invention comprises: selection object display means for displaying, on a display screen, an image in which a selection object is arranged in a three-dimensional virtual space, the selection object being a three-dimensional rotating object having a plurality of surfaces arranged at regular intervals with respect to a central axis, with a texture indicating the contents of data pasted on each surface; rotation display control means for providing the selection object display means with a rotation display control signal for displaying an image that rotates about the central axis in the three-dimensional virtual space; selection input means for inputting a selection input for selecting data; selection surface determination means for determining, when a selection input is input from the selection input means, which of the plurality of surfaces constituting the three-dimensional rotating object faces the front on the display screen; data determination means for determining the data to be opened based on the information held in first correspondence table holding means; second correspondence table holding means for holding information indicating the correspondence between data and the programs that open the data; and program execution means for determining, based on the information held in the second correspondence table holding means, the program to be executed to open the data determined by the data determination means, and executing it.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
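  • A minimal sketch of how the two correspondence tables might be used together, with purely illustrative table contents and program names:

```python
# Sketch of the two correspondence tables described above (illustrative contents).
# The first table maps each surface of the rotating object to a data item;
# the second maps each data item to the program used to open it.

SURFACE_TO_DATA = {            # first correspondence table (hypothetical)
    0: "letter.txt",
    1: "photo.jpg",
    2: "song.wav",
}
DATA_TO_PROGRAM = {            # second correspondence table (hypothetical)
    "letter.txt": "text_editor",
    "photo.jpg": "image_viewer",
    "song.wav": "audio_player",
}

def open_selected_data(front_surface: int) -> tuple[str, str]:
    """Data determination + program determination for the front-facing surface."""
    data = SURFACE_TO_DATA[front_surface]      # data determination means
    program = DATA_TO_PROGRAM[data]            # program lookup
    # Program execution means would launch `program` on `data` here.
    return program, data

print(open_selected_data(1))   # -> ('image_viewer', 'photo.jpg')
```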
  • The rotation display control means provides the rotation display control signal to the selection object display means in response to an externally input rotation instruction.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The present invention (Claim 11) is the data selection execution device according to Claim 9, wherein the rotation display control means includes holding means for holding information for rotating the selection object in a predetermined pattern, and provides the rotation display control signal to the selection object display means based on the information held in the holding means.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The rotation display control means includes holding means for holding information for rotating the selection object in a predetermined pattern; when a rotation instruction input is input from outside, the rotation display control signal is provided to the selection object display means in response to the rotation instruction input, and when no rotation instruction input is input from outside, the rotation display control signal is provided to the selection object display means based on the information held in the holding means.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The data selection execution device further comprises counter means for counting, as the selection object rotates on the display screen, the number of times the front-facing surface among the plurality of surfaces constituting the three-dimensional rotating object is switched, and outputting count information, wherein the selection surface determination means determines the front-facing surface on the display screen based on the count information output by the counter means.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The invention is also the data selection execution device according to any one of Claims 9 to 12, wherein the selection surface determination means determines the front-facing surface on the display screen based on depth information obtained when the selection object display means displays the selection object on the screen.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The present invention is the data selection execution device according to any one of Claims 9 to 12, wherein the selection surface determination means determines the front-facing surface on the display screen based on rotation angle information indicating the angle by which the selection object has rotated from its initial state.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • This invention provides the data selection execution device according to any one of Claims 9 to 15, further comprising screen display switching means for switching the screen display so that, when the program to be executed has an execution display screen, the execution display screen is displayed when the program is executed.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy to use even for users unfamiliar with personal computers can be realized.
  • The present invention (Claim 17) is the data selection execution device according to any one of Claims 9 to 16, wherein the selection object display means pastes, on the surfaces of the three-dimensional rotating object, textures based on the moving image data associated with those surfaces.
  • The present invention (Claim 18) is the data selection execution device according to Claim 17, wherein the selection object display means pastes, on the surface facing the front on the display screen among the plurality of surfaces constituting the three-dimensional rotating object, a moving image obtained by reproducing the moving image data associated with that surface as a texture, and pastes, on the surfaces that do not face the front on the display screen, a still image extracted from the moving image obtained by reproducing the moving image data associated with each of those surfaces as a texture.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection and execution device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world. Moreover, to determine which surface can currently be selected, the user only has to see whether the image pasted on a surface is moving, so an easy-to-see, intuitive operating environment can be realized even for users unfamiliar with personal computers.
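  • As an illustrative sketch of the texture assignment of Claim 18 (live frames on the front-facing surface, extracted stills elsewhere), assuming hypothetical Movie helpers:

```python
# Sketch of the texture assignment in Claim 18 (assumed helper types/names).
# The front-facing surface receives live frames from its movie; the other
# surfaces receive a still frame extracted from their movies.

from dataclasses import dataclass

@dataclass
class Movie:
    name: str
    def current_frame(self) -> str:        # stand-in for a decoded video frame
        return f"{self.name}: live frame"
    def still_frame(self) -> str:          # stand-in for an extracted still image
        return f"{self.name}: still frame"

def textures_for_faces(movies: list[Movie], front_face: int) -> list[str]:
    """Return one texture per surface of the rotating object."""
    return [
        m.current_frame() if i == front_face else m.still_frame()
        for i, m in enumerate(movies)
    ]

movies = [Movie(f"channel{i}") for i in range(6)]
for tex in textures_for_faces(movies, front_face=2):
    print(tex)
```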
  • The present invention provides the data selection execution device according to any one of Claims 9 to 18, wherein, when the data associated with each surface of the three-dimensional rotating object is audio data, moving image data, or moving image data accompanied by audio data, data reproduction display means reproduces and displays the associated data in addition to the display of the selection object, and when, owing to the rotation of the selection object, the surface facing the front on the display screen switches from a first surface to a second surface adjacent to the first surface, the data reproduction display means fades out the reproduction of the data associated with the first surface and fades in the reproduction of the data associated with the second surface.
  • The present invention is the data selection execution device according to any one of Claims 9 to 18, wherein, when the data associated with each surface of the three-dimensional rotating object includes audio data, data reproduction display means reproduces and displays the associated data in addition to the display of the selection object, and when the surface facing the front on the display screen switches from a first surface to a second surface adjacent to the first surface, the data reproduction display means moves the reproduction sound source position of the data associated with the first surface and the reproduction sound source position of the data associated with the second surface in accordance with the movement of the first and second surfaces on the display screen.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the data selection execution device having such a configuration can recall the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is easy to use even for users unaccustomed to personal computers can be realized; in addition, the music data and moving image data reproduced together with the selection object are not interrupted, so the user can select data comfortably.
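  • A small sketch of the fade-out / fade-in behaviour when the front-facing surface switches, using a simple linear crossfade as an assumed implementation:

```python
# Sketch of the fade-out / fade-in behaviour when the front face switches
# (Claim 19). Gains are linear over a short transition; names are illustrative.

def crossfade_gains(t: float, duration: float = 1.0) -> tuple[float, float]:
    """Return (gain of outgoing face's audio, gain of incoming face's audio)
    t seconds after the front-facing surface has switched."""
    x = min(max(t / duration, 0.0), 1.0)   # clamp progress to [0, 1]
    return 1.0 - x, x                      # old data fades out, new data fades in

for t in (0.0, 0.25, 0.5, 1.0):
    out_gain, in_gain = crossfade_gains(t)
    print(f"t={t:.2f}s  outgoing={out_gain:.2f}  incoming={in_gain:.2f}")
```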
  • The video display device of the present invention comprises: video receiving means for receiving an input signal transmitted via a broadcast or a network and outputting an input video signal; memory means for holding the input video signal; memory input / output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with area cutout information indicating the position at which an area to be used as a texture is cut out from the input video signal, and reading a partial video signal from the memory means; parameter separating means for separating the area cutout information and the three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information and area cutout information, outputting the area cutout information to the memory input / output control means, and outputting the three-dimensional coordinate information to the object position determining means; object position determining means for arranging a three-dimensional object in a three-dimensional virtual space based on the three-dimensional coordinate information and outputting the object coordinate information of the three-dimensional object in the three-dimensional virtual space; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for generating a three-dimensional video signal by texture-mapping the partial video signal onto a predetermined surface of the three-dimensional object based on the projection plane coordinate information; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
  • With a video display device having such a configuration, a predetermined area is cut out from the transmitted input video signal and pasted onto a surface of an object in a three-dimensional virtual space, so that a three-dimensional display of video can be realized and an easy-to-see video display becomes possible.
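  • Two steps of the pipeline above, area cutout and perspective projection onto the display projection plane, can be sketched as follows; the frame layout, focal length, and coordinates are illustrative assumptions:

```python
# Sketch of two steps of the pipeline above: cutting a texture region out of an
# input frame and perspectively projecting an object's corner points onto the
# display projection plane. All names and numbers are illustrative.

def cut_out_area(frame, x, y, w, h):
    """Area cutout: 'frame' is a list of rows (each row a list of pixels)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def perspective_project(point, focal_length=2.0):
    """Project a 3D point (x, y, z) in viewing space onto the projection plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# Example: a 4x4 dummy frame and a square face pushed back along +z
frame = [[(r, c) for c in range(4)] for r in range(4)]
partial = cut_out_area(frame, x=1, y=1, w=2, h=2)

face_corners = [(-1, -1, 4), (1, -1, 4), (1, 1, 4), (-1, 1, 4)]
projected = [perspective_project(p) for p in face_corners]
print(partial)
print(projected)   # the projected quad defines where the partial video is texture-mapped
```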
  • The present invention (Claim 22) is the video display device according to Claim 21, wherein the parameter information input to the parameter separating means changes in time series.
  • With a video display device having such a configuration, the three-dimensional rotating object displayed in the three-dimensional virtual space can be given an animation effect, and an easy-to-see video display becomes possible.
  • The video display device may also be provided with affine transformation means instead of the perspective projection conversion means.
  • The video display device of the present invention comprises: video receiving means for receiving an input signal composed of a predetermined number of partial videos transmitted via a broadcast or a network and outputting an input video signal; memory means for holding the input video signal; memory input / output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with area cutout information which indicates the positions at which areas to be used as textures are cut out from the input video signal and which corresponds to the predetermined number of partial videos, and reading partial video signals from the memory means; parameter separating means for separating the area cutout information and the three-dimensional coordinate information, outputting the area cutout information to the memory input / output control means, and outputting the three-dimensional coordinate information to the object position determining means; object position determining means for arranging a three-dimensional object in a three-dimensional virtual space based on the three-dimensional coordinate information and outputting the object coordinate information of the three-dimensional object in the three-dimensional virtual space; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for generating a three-dimensional video signal by texture-mapping the partial video signals onto predetermined surfaces of the three-dimensional object based on the projection plane coordinate information; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
  • With a video display device having such a configuration, areas are cut out from a video signal transmitted as a multi-screen along the division boundaries of the multi-screen and pasted onto surfaces of an object in a three-dimensional virtual space, so that a three-dimensional display of a plurality of videos can be realized and an easy-to-see video display becomes possible.
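  • As a sketch of area cutout information that follows the division boundaries of a multi-screen, assuming a 2 x 2 layout of a 720 x 480 frame:

```python
# Sketch of area cutout information for a multi-screen input, cut along the
# division boundaries (illustrative sizes; a 2x2 multi-screen of a 720x480 frame).

def multiscreen_cutouts(frame_w, frame_h, cols, rows):
    """Return one (x, y, width, height) rectangle per partial video."""
    w, h = frame_w // cols, frame_h // rows
    return [(c * w, r * h, w, h) for r in range(rows) for c in range(cols)]

for rect in multiscreen_cutouts(720, 480, cols=2, rows=2):
    print(rect)
# (0, 0, 360, 240), (360, 0, 360, 240), (0, 240, 360, 240), (360, 240, 360, 240)
```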
  • The present invention (Claim 25) is the video display device according to Claim 24, wherein the parameter information input to the parameter separating means changes in time series.
  • With a video display device having such a configuration, the three-dimensional rotating object displayed in the three-dimensional virtual space can be given an animation effect, and an easy-to-see video display can be realized.
  • The present invention (Claim 26) is characterized in that, in the video display device according to Claim 24, affine transformation means is provided instead of the perspective projection conversion means.
  • The video display device of the present invention comprises: video receiving means for receiving an input signal composed of a predetermined number of partial videos transmitted via a broadcast or a network and outputting an input video signal; memory input / output control means for cutting, from the input video signal, the regions to be used as textures in accordance with area cutout information corresponding to the predetermined number of partial videos, outputting a video signal for memory storage, and reading partial video signals from the memory means; parameter separating means for separating, based on parameter output control information, the area cutout information and the three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos and area cutout information, outputting the area cutout information to the memory input / output control means, and outputting the three-dimensional coordinate information to the object position determining means; object position determining means for arranging a three-dimensional object in a three-dimensional virtual space based on the three-dimensional coordinate information and outputting the object coordinate information of the three-dimensional object in the three-dimensional virtual space; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for texture-mapping the partial video signals onto predetermined surfaces of the three-dimensional object based on the projection plane coordinate information, outputting the parameter output control information to the parameter separating means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
  • With a video display device having such a configuration, when an area is cut out from an image and pasted onto a surface of an object in a three-dimensional virtual space, only the cut-out area is stored in memory instead of the entire image, so the amount of memory required can be reduced.
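  • The memory saving can be illustrated with rough, assumed numbers: a 720 x 480 input frame at 2 bytes per pixel versus three 180 x 120 cut-out regions:

```python
# Rough arithmetic for the memory saving described above (illustrative numbers:
# a 720x480 multi-screen at 2 bytes per pixel, of which only three 180x120
# partial regions are actually used as textures).

frame_bytes = 720 * 480 * 2                 # storing the whole input frame
cutout_bytes = 3 * (180 * 120 * 2)          # storing only the cut-out regions
print(frame_bytes, cutout_bytes, cutout_bytes / frame_bytes)
# 691200 bytes vs 129600 bytes -> roughly 19% of the full-frame memory
```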
  • The video display device of the present invention comprises: video receiving means for receiving an input signal composed of a predetermined number of partial videos transmitted via a broadcast or a network and outputting an input video signal; memory means for holding the input video signal; memory input / output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with area cutout information indicating the positions at which areas to be used as textures are cut out from the input video signal, and reading partial video signals from the memory means; video analysis means for determining the predetermined number from the input video signal and outputting area number information; parameter generating means for generating, based on the area number information, parameter information composed of three-dimensional coordinate information and area cutout information, outputting the area cutout information to the memory input / output control means in accordance with parameter output control, and outputting the three-dimensional coordinate information to the object position determining means; object position determining means for arranging a three-dimensional object in a three-dimensional virtual space based on the three-dimensional coordinate information and outputting the object coordinate information of the three-dimensional object in the three-dimensional virtual space; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; and rasterizing means for texture-mapping the partial video signals onto predetermined surfaces of the three-dimensional object based on the projection plane coordinate information, outputting the parameter output control information to the parameter generating means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal.
  • The video display device having such a configuration recognizes, after reception, the number of divisions of an image transmitted as a multi-screen and automatically generates the shape information of the three-dimensional object according to that number of divisions, so it can support multiple types of multi-screen video.
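  • A sketch of how parameter information (area cutout rectangles and per-face three-dimensional coordinates) might be generated automatically from the detected number of divisions; the grid layout and prism placement are assumptions for illustration:

```python
# Sketch of parameter generation from the detected number of divisions
# (Claim 28 style behaviour; layout and sizes are illustrative assumptions).

import math

def generate_parameters(n_areas, frame_w=720, frame_h=480, radius=1.5):
    """Return (area_cutout_info, face_3d_coordinates) for n_areas partial videos.

    Cutouts tile the frame in a near-square grid; 3D coordinates place one
    rectangular face per partial video around a rotation axis (a prism)."""
    cols = math.ceil(math.sqrt(n_areas))
    rows = math.ceil(n_areas / cols)
    w, h = frame_w // cols, frame_h // rows
    cutouts = [((i % cols) * w, (i // cols) * h, w, h) for i in range(n_areas)]

    faces = []
    for i in range(n_areas):
        a = 2 * math.pi * i / n_areas          # angular position of face i
        cx, cz = radius * math.sin(a), radius * math.cos(a)
        faces.append({"center": (cx, 0.0, cz), "yaw_deg": math.degrees(a)})
    return cutouts, faces

cutouts, faces = generate_parameters(4)
print(cutouts[0], faces[0])
```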
  • The video display device of the present invention comprises: video receiving means for selectively receiving, based on channel information, an input signal composed of a predetermined number of partial videos transmitted via a broadcast or a network, and outputting an input video signal; memory means for holding the input video signal; memory input / output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with area cutout information which indicates the positions at which areas to be used as textures are cut out from the input video signal and which corresponds to the predetermined number of partial videos, and reading partial video signals from the memory means; parameter separating means for separating, based on parameter output control information, the area cutout information and the three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos, area cutout information, and channel correspondence information indicating the correspondence between objects and channels, outputting the area cutout information to the memory input / output control means, outputting the three-dimensional coordinate information to the object position determining means, and outputting the channel correspondence information to the channel determining means; object position determining means for arranging the three-dimensional object in the three-dimensional virtual space based on the three-dimensional coordinate information, outputting the object coordinate information of the three-dimensional object in the three-dimensional virtual space, and outputting, in response to a user input, object arrangement order information derived from the object coordinate information at the time of the user input; object position comparing means for comparing the position of each object using the object arrangement order information and outputting, to the channel determining means, selected object information for the object selected under a predetermined condition; channel determining means for determining the channel corresponding to the selected object from the channel correspondence information and outputting the channel information; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for texture-mapping the partial video signals onto predetermined surfaces of the three-dimensional object based on the projection plane coordinate information, outputting the parameter output control information to the parameter separating means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
  • With a video display device having such a configuration, partial videos of an input video composed of a multi-screen are cut out and pasted as textures onto the respective surfaces of an object in a three-dimensional virtual space, and an animation display can be produced by moving this three-dimensional object.
  • Channel selection can then be realized by switching to a full-screen display of the channel associated with the surface displayed at the position closest to the viewpoint in the three-dimensional virtual space.
  • The present invention (Claim 30) is the video display device according to Claim 29, wherein the object position determining means selects the surface whose position is closest to the viewpoint.
  • With a video display device having such a configuration, which uses a three-dimensional rotating object in a three-dimensional virtual space, when the user presses the selection button the display is switched to a full-screen display of the channel associated with the surface displayed at the position closest to the viewpoint in the three-dimensional virtual space, so channel selection can be performed.
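  • The "closest to the viewpoint" criterion can be sketched as a nearest-distance search over the face centres in viewing space; the coordinates below are illustrative:

```python
# Sketch of the "closest to the viewpoint" selection criterion (Claim 30),
# using squared distance in viewing space. Names and data are illustrative.

def closest_face(face_centers, viewpoint=(0.0, 0.0, 0.0)):
    """Return the index of the face whose centre is nearest to the viewpoint."""
    def sq_dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, viewpoint))
    return min(range(len(face_centers)), key=lambda i: sq_dist(face_centers[i]))

# Face centres of a rotating object placed in front of the viewpoint (+z is away)
centers = [(0.0, 0.0, 3.0), (1.3, 0.0, 4.0), (-1.3, 0.0, 4.0), (0.0, 0.0, 5.0)]
selected = closest_face(centers)
print(selected)   # -> 0: this surface's channel would be shown full screen
```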
  • The video display device of the present invention comprises: first video receiving means for receiving a first input signal composed of a predetermined number of partial videos transmitted via a broadcast or a network and outputting a first input video signal; second video receiving means for selectively receiving, based on channel information, a second input signal transmitted via a broadcast or a network and outputting a second input video signal; memory means for holding the first input video signal; memory input / output control means for writing the first input video signal into the memory means, outputting a memory control signal to the memory means in accordance with area cutout information indicating the positions at which areas to be used as textures are cut out from the first input video signal, and reading partial video signals from the memory means; parameter separating means for separating, based on parameter output control information, the area cutout information and the three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos, area cutout information, and channel correspondence information indicating the correspondence between objects and channels, outputting the area cutout information to the memory input / output control means, outputting the three-dimensional coordinate information to the object position determining means, and outputting the channel correspondence information to the channel determining means; object position determining means for arranging the three-dimensional object in the three-dimensional virtual space based on the three-dimensional coordinate information, outputting the object coordinate information, and outputting object arrangement order information; object position comparing means for comparing the position of each object using the object arrangement order information and outputting selected object information for the object selected under a predetermined condition to the channel determining means; channel determining means for determining the channel corresponding to the selected object from the channel correspondence information and outputting the channel information; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for texture-mapping the partial video signals onto predetermined surfaces of the three-dimensional object based on the projection plane coordinate information, outputting the parameter output control information to the parameter separating means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting a three-dimensional output video signal at a predetermined timing; video enlargement and deformation means for enlarging and deforming the partial video signal and outputting a partial video enlarged and deformed signal; video switching means for switching between the three-dimensional output video signal and the partial video enlarged and deformed signal at a predetermined timing and outputting an output video signal; and video display means for switching between and displaying the output video signal and the second input video signal.
  • With a video display device having such a configuration, the partial video used as the texture in the three-dimensional display is enlarged and deformed before being displayed full screen, so smooth video switching can be realized.
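  • A sketch of the enlargement and deformation used for smooth switching, here reduced to interpolating the on-screen rectangle of the selected partial video toward the full screen (geometry is illustrative):

```python
# Sketch of the enlargement / deformation used for smooth switching: the on-screen
# rectangle of the selected partial video is interpolated toward the full screen
# over the transition. All geometry is illustrative.

def lerp_rect(src, dst, t):
    """Linearly interpolate two rectangles (x, y, w, h) for t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(src, dst))

partial_rect = (200, 150, 240, 160)    # where the selected face appears on screen
full_screen = (0, 0, 720, 480)

for step in range(5):
    t = step / 4
    print(lerp_rect(partial_rect, full_screen, t))
# The final rectangle equals the full screen, at which point the display is
# switched to the second input video signal for the selected channel.
```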
  • The channel selection device of the present invention comprises: video receiving means for receiving an input signal transmitted via a broadcast or a network, selecting a channel in accordance with the selected channel information output from the channel determining means, and outputting an input video signal; memory means for holding the input video signal; memory input / output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with the area cutout information input from the correspondence table holding means, and reading partial video signals from the memory means; selection object display means for displaying, on a display screen, an image in which a selection object is arranged in a three-dimensional virtual space, the selection object being a three-dimensional rotating object having a plurality of surfaces arranged at regular intervals with respect to a central axis, with a partial video showing the contents of a channel pasted as a texture on each surface; rotation display control means for providing the selection object display means with a rotation display control signal for displaying an image that rotates about the central axis as the rotation center; selection input means for inputting a selection input for selecting a channel; selection surface determination means for determining, when a selection input is input from the selection input means, which of the plurality of surfaces constituting the three-dimensional rotating object faces the front on the display screen; correspondence table holding means for holding information indicating the correspondence between the plurality of surfaces constituting the three-dimensional rotating object, the texture information of the partial video corresponding to each channel, and the area cutout information for generating the partial video corresponding to each channel based on an area information parameter input from outside; and channel determining means for determining, based on the information held in the correspondence table holding means, the channel corresponding to the surface determined by the selection surface determination means, deciding the channel to be switched to, and updating the selected channel information.
  • By using a three-dimensional rotating object in a three-dimensional virtual space, the channel selection device having such a configuration can recall the image of rolling a cylindrical rotating object in the real world, so an intuitive operating environment that is easy for users to use can be realized.
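  • An illustrative sketch of the correspondence table lookup and selected channel update, with assumed channel names and cutout rectangles:

```python
# Sketch of the correspondence table and channel update of the channel selection
# device described above (illustrative table contents and names).

CORRESPONDENCE_TABLE = {
    # surface index: (channel, area cutout (x, y, w, h) within the input video)
    0: ("BS-1", (0,   0,   360, 240)),
    1: ("BS-3", (360, 0,   360, 240)),
    2: ("BS-5", (0,   240, 360, 240)),
    3: ("BS-7", (360, 240, 360, 240)),
}

selected_channel_info = {"channel": None}

def on_selection_input(front_surface: int) -> None:
    """Channel determining means: look up the surface and update the selection."""
    channel, cutout = CORRESPONDENCE_TABLE[front_surface]
    selected_channel_info["channel"] = channel
    print(f"switching to {channel} (texture region {cutout})")

on_selection_input(2)   # -> switching to BS-5 ...
```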
  • The channel selection device may further be provided with parameter separating means for separating the area information parameter from the input.
  • With a channel selection device having such a configuration, an input signal such as a broadcast and the area information parameter can be received and separated at a single point.
  • FIG. 1 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 1 of the present invention.
  • FIG. 2 is a diagram showing an example of a three-dimensional rotating object arranged in a three-dimensional virtual space in a program selection execution device, a data selection execution device, a video display device, and a channel selection device according to the present invention.
  • FIG. 3 is a diagram showing an example of a correspondence table held by a correspondence table holding means of the program selection and execution device according to the first embodiment.
  • FIG. 4 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 2 of the present invention.
  • FIG. 5 is a block diagram showing a configuration of a program selection and execution device according to a third embodiment of the present invention.
  • FIG. 6 is a block diagram showing a configuration of a program selection and execution device according to a fourth embodiment of the present invention.
  • FIG. 7 is a diagram for explaining the front determination in the program selection and execution device according to the fourth embodiment.
  • FIG. 8 is a block diagram showing a configuration of a program selection and execution device according to a fifth embodiment of the present invention.
  • FIG. 9 is a block diagram showing a configuration of a data selection execution device according to Embodiment 6 of the present invention.
  • FIG. 10 is a diagram showing an example of the correspondence table held by the correspondence table holding means of the data selection execution device according to the sixth embodiment.
  • FIG. 11 is a diagram showing a screen display example of the data selection execution device according to the sixth embodiment.
  • FIG. 12 is a block diagram showing a configuration of a data selection execution device according to Embodiment 7 of the present invention.
  • FIG. 13 is a block diagram showing a configuration of a data selection execution device according to Embodiment 8 of the present invention.
  • FIG. 14 is a diagram for explaining the operation of the data selection execution device according to the eighth embodiment.
  • FIG. 15 is a diagram for explaining the operation of the data selection execution device according to the eighth embodiment.
  • FIG. 16 is a diagram for explaining the operation of the data selection execution device according to the eighth embodiment.
  • FIG. 17 is a diagram for explaining the operation of the data selection execution device according to the eighth embodiment.
  • FIG. 18 is a block diagram showing a configuration of a video display device according to Embodiment 9 of the present invention.
  • FIG. 19 is a conceptual diagram relating to three-dimensional display according to the ninth embodiment.
  • FIG. 20 is an explanatory diagram of information necessary for three-dimensional display according to the ninth embodiment.
  • FIG. 21 is an explanatory diagram of the channel selection method according to the ninth embodiment.
  • FIG. 22 is an explanatory diagram of a criterion for channel selection according to the ninth embodiment.
  • FIG. 23 is an explanatory diagram regarding the difference between the perspective projection conversion and the affine transformation according to the ninth embodiment.
  • FIG. 24 is a block diagram showing a configuration of a video display device according to Embodiment 10 of the present invention.
  • FIG. 25 is an explanatory diagram relating to the memory retention of the partial video according to the tenth embodiment.
  • FIG. 26 is a block diagram showing a configuration of a video display device according to Embodiment 11 of the present invention.
  • FIG. 27 is an explanatory diagram relating to the generation of three-dimensional information according to Embodiment 11 described above.
  • FIG. 28 is a block diagram showing a configuration of a video display device according to Embodiment 12 of the present invention.
  • FIG. 29 is an explanatory diagram of a video switching method according to the ninth to eleventh embodiments.
  • FIG. 30 is an explanatory diagram relating to the video switching method according to Embodiment 12 above.
  • FIG. 31 is a block diagram showing a configuration of a channel selection device according to Embodiment 13 of the present invention.
  • FIG. 32 is a diagram showing an example of a correspondence table held by the correspondence table holding means of the channel selection device according to Embodiment 13.
  • FIG. 33 is an explanatory diagram of information necessary for three-dimensional display according to Embodiment 13.
  • FIG. 1 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 1 of the present invention.
  • In FIG. 1, reference numeral 101 denotes rotation instruction input means for inputting an instruction to rotate a three-dimensional rotating object in a three-dimensional virtual space, and 102 denotes parameter holding means for holding parameters for rotating the three-dimensional rotating object.
  • Reference numeral 103 denotes parameter changing means that, based on the rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameter from the parameter holding means 102, changes it, records it in the parameter holding means 102 as the changed parameter, and outputs a counter control signal.
  • the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103 function as rotation display control means.
  • Reference numeral 104 denotes three-dimensional model coordinate holding means for holding the coordinate information of the objects constituting the three-dimensional virtual space including the three-dimensional rotating object; 105 denotes coordinate conversion means that reads parameter information from the parameter holding means 102 and the three-dimensional model coordinates from the three-dimensional model coordinate holding means 104, performs coordinate conversion, and outputs the changed model coordinates; and 106 denotes perspective transformation means that, using the changed model coordinates output by the coordinate conversion means 105 and the viewpoint coordinates, performs perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen and outputs projection plane coordinates.
  • Reference numeral 107 denotes hidden surface processing means that reads the projection plane coordinates from the perspective transformation means 106, excludes the regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and raster information after hidden surface processing; 108 denotes depth information holding means that holds the depth information extracted by the hidden surface processing means 107; and 109 denotes texture holding means that holds the textures to be attached to each surface.
  • The texture attached to the three-dimensional rotating object is an image for identifying the corresponding program, such as the program name or an icon image corresponding to the program.
  • Reference numeral 110 denotes texture mapping means that pastes the texture read from the texture holding means 109 onto the raster information after hidden surface processing produced by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108.
  • Reference numeral 111 denotes rendering means that draws all pixel information, such as the color and brightness of each pixel, based on the frame information after texture mapping output by the texture mapping means 110 and the depth information held by the depth information holding means 108; 112 denotes a frame buffer that holds the frame information drawn by the rendering means 111; and 113 denotes screen display means that reads the frame information held in the frame buffer 112 at a predetermined timing and displays it on the screen.
  • The three-dimensional model coordinate holding means 104 through the screen display means 113 function as selection object display means for displaying the three-dimensional rotating object in which a plurality of surfaces are arranged at regular intervals with respect to the central axis.
  • Reference numeral 114 denotes counter means that increments its count in response to the counter control signal from the parameter changing means 103; 115 denotes selection input means with which the user decides on and inputs the program to be selected; 116 denotes selected surface judging means that judges the selected surface based on the count information from the counter means 114 and the selection control signal from the selection input means 115; and 117 denotes correspondence table holding means that holds a correspondence table indicating the correspondence between each surface constituting the three-dimensional rotating object and a program (surface-to-program correspondence information) and the correspondence between each surface and a texture (surface-to-texture correspondence information).
  • FIG. 3 is a diagram showing an example of the correspondence table held by the correspondence table holding means 117.
  • Reference numeral 118 denotes program determining means that determines the program to be executed by referring to the correspondence information (surface-to-program correspondence information) read from the correspondence table holding means 117, and 119 denotes program execution means that executes a program based on the selected program information determined by the program determining means 118.
  • The program selection and execution device of this embodiment assigns a program to each surface of a three-dimensional rotating object placed in a three-dimensional virtual space, rotates the object, and, when the user performs a predetermined operation, activates the program associated with the surface that most directly faces the front as seen from the user's viewpoint.
  • At the time of the initial display operation in the program selection operation mode, the initial coordinates in the three-dimensional virtual space of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 are read.
  • The perspective transformation means 106 performs perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen using the initial coordinates and the viewpoint coordinates, and outputs the projection plane coordinates. That is, at the time of the initial display operation in the program selection operation mode, the coordinate conversion means 105 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 without converting them.
  • The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden, invisible regions, extracts only the displayed regions, and outputs the depth information and the raster information after hidden surface processing. Based on the depth information held by the depth information holding means 108, the texture mapping means 110 pastes the texture read from the texture holding means 109 onto the raster information after hidden surface processing produced by the hidden surface processing means 107. Here, the correspondence between each surface of the three-dimensional rotating object and its texture is obtained by reading the correspondence information (surface-to-texture correspondence information) from the correspondence table holding means 117.
  • The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, based on the frame information after texture mapping output by the texture mapping means 110 and the depth information held by the depth information holding means 108.
  • The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads the frame information held in the frame buffer 112 at a predetermined timing and displays it on the screen. As a result, the screen in the initial state of the program selection operation mode is displayed.
  • FIG. 2 is a diagram showing an example of a three-dimensional rotating object arranged in a three-dimensional virtual space in the program selection and execution device according to the first embodiment.
  • The three-dimensional rotating object placed in the three-dimensional virtual space is composed of a plurality of surfaces, each arranged at regular intervals around the central axis.
  • In FIG. 2 there are six surfaces constituting the three-dimensional rotating object; FIG. 2(a) shows an example in which the central axis of rotation is arranged horizontally in the three-dimensional virtual space, and FIG. 2(b) shows an example in which the central axis of rotation is arranged vertically in the three-dimensional virtual space.
  • When a rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 reads the pre-change parameter (here, the parameter in the initial state) from the parameter holding means 102, changes the parameter, records the changed parameter in the parameter holding means 102, and outputs a counter control signal to the counter means 114.
  • The coordinate conversion means 105 reads the changed parameter recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 according to the changed parameter, and outputs the resulting changed model coordinates to the perspective transformation means 106.
  • The perspective transformation means 106 performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as at the time of the initial display operation in the program selection operation mode, and the screen after input of the rotation instruction control signal is displayed. For example, if the three-dimensional rotating object has the shape shown in FIG. 2, an object displayed with surface 1 facing the front in the initial state rotates in the direction of the arrow in FIG. 2 when a positive-direction rotation instruction control signal is input, so that an image with surface 2 facing the front is displayed; when a negative-direction rotation instruction control signal is input, the object rotates in the direction opposite to the arrow and an image with surface 6 facing the front is displayed.
  • The rotation instruction input means 101 may be configured so that the operation of the cursor keys on a remote control or keyboard corresponds to the rotation of the three-dimensional rotating object, or so that the movement of a mouse corresponds to the rotation of the three-dimensional rotating object. For example, if the three-dimensional rotating object is as shown in FIG. 2(a), the up and down cursor keys on the remote control or keyboard may be made to correspond to the upward rotation (the direction opposite to the arrow in FIG. 2(a)) and the downward rotation (the direction of the arrow in FIG. 2(a)) of the object, or the back-and-forth movement of the mouse may be made to correspond to the upward and downward rotation of the three-dimensional rotating object.
  • When operation is performed with a mouse equipped with a rotary button called a wheel, such as the IntelliMouse of Microsoft Corporation, the forward and backward rotation of the wheel may be made to correspond to the upward and downward rotation of the three-dimensional rotating object.
  • When the user operates with a trackball, the forward and backward rotation of the trackball may likewise be associated with the upward and downward rotation of the three-dimensional rotating object.
  • When input is performed by means of voice recognition, voice commands corresponding to the upward and downward directions may be used to rotate the three-dimensional rotating object upward and downward.
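  • Any of these input devices ultimately yields a positive- or negative-direction rotation instruction control signal; a minimal sketch of such a mapping, with invented event names, might look as follows.

```python
# Hypothetical mapping from input events to rotation instruction control
# signals (+1 = positive-direction rotation, -1 = negative-direction rotation).
# The event names are illustrative; the patent does not prescribe an event API.
ROTATION_INSTRUCTIONS = {
    "cursor_down": +1, "cursor_up": -1,
    "wheel_forward": +1, "wheel_backward": -1,
    "trackball_forward": +1, "trackball_backward": -1,
    "voice_down": +1, "voice_up": -1,
}

def rotation_instruction(event: str) -> int:
    """Return the rotation instruction control signal for an input event (0 = ignore)."""
    return ROTATION_INSTRUCTIONS.get(event, 0)
```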
  • The counter means 114 performs a counting operation according to the counter control signal output from the parameter changing means 103. Specifically, when a positive-direction rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one, and when a negative-direction rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that decrements the count value of the counter means 114 by one; the counter means 114 receives the counter control signal and changes the count value it holds.
  • When the user inputs a selection control signal from the selection input means 115 with the surface on which the desired program is displayed facing the front, the selected surface determination means 116 obtains the count value at that point from the counter means 114 as count information, determines from this count information the face that was facing the front when the selection control signal was input, and outputs this face as selected surface information. For example, if the three-dimensional rotating object has the shape shown in FIG. 2, the selected surface determination means 116 divides the count value by 6: in the initial state (count value "0") or when the remainder is "0", the face facing the front is determined to be surface 1; when the remainder is "1", "2", "3", "4", or "5", the faces facing the front are determined to be surface 2, surface 3, surface 4, surface 5, and surface 6, respectively; and when the remainder is "-1", "-2", "-3", "-4", or "-5", the faces facing the front are determined to be surface 6, surface 5, surface 4, surface 3, and surface 2, respectively.
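  • A minimal sketch of this count-based determination is shown below; it uses Python's non-negative remainder rather than the signed remainders listed above, which yields the same face assignment.

```python
def face_facing_front(count: int, num_faces: int = 6) -> int:
    """Determine which face of the rotating object faces the front from the
    rotation count, as described for the selected surface determination means 116.

    Faces are numbered 1..num_faces; count may be negative for reverse rotation.
    """
    return (count % num_faces) + 1  # Python's % already returns a non-negative remainder

# e.g. count 0 -> face 1, count 1 -> face 2, count -1 -> face 6
assert face_facing_front(0) == 1
assert face_facing_front(1) == 2
assert face_facing_front(-1) == 6
```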
  • The program determining means 118 acquires the selected surface information from the selected surface determination means 116, refers to the plane-to-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the surface indicated by the selected surface information as selected program information.
  • The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
  • As described above, the program selection and execution device according to the first embodiment displays on the screen a three-dimensional rotating object (selection object) placed in a three-dimensional virtual space with a texture indicating the program contents pasted on each surface; the user rotates the three-dimensional rotating object by giving instructions through predetermined operations, and the device counts how many times the rotation instruction operation has been repeated. When the user performs a predetermined selection operation, the surface facing the front of the user's viewpoint is determined from the count value, the program associated with that surface is selected by referring to the correspondence table, and that program is started.
  • With this configuration, the use of a three-dimensional rotating object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so that an intuitive operation environment that is easy to use even for users who are not accustomed to personal computers can be realized.
  • In the above example, the three-dimensional rotating object has six surfaces and the central axis of rotation is shown arranged horizontally or vertically in the three-dimensional virtual space; however, the number of surfaces constituting the three-dimensional rotating object is not limited to six and may be two to five, or seven or more, and the displayed rotating body may be changed according to the number of programs to be handled. When the number of programs is larger than the number of faces of the rotating body, all programs can be made selectable by switching the program information pasted on the faces at a predetermined timing, or only specific programs such as frequently used programs may be selected and displayed. Further, the central axis of rotation may be arranged obliquely in the three-dimensional virtual space.
  • FIG. 4 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 2 of the present invention.
  • 120 is a rotation angle change pattern holding means that holds a rotation angle change pattern for sequentially changing the parameters so as to rotate the three-dimensional rotating object in the three-dimensional virtual space, and sequentially outputs the changed parameters in response to requests from the coordinate conversion means 121.
  • In the second embodiment, the rotation angle change pattern holding means 120 functions as rotation display control means.
  • Upon receiving the display end signal output from the screen display means 113, the coordinate conversion means 121 requests the rotation angle change pattern holding means 120 to output the changed parameter information.
  • Using the parameter information output from the rotation angle change pattern holding means 120, the coordinate conversion means 121 performs coordinate conversion of the three-dimensional model coordinates, outputs the converted model coordinates, and outputs a counter control signal to the counter means every time the coordinate conversion is performed.
  • That is, the program selection and execution device according to the second embodiment is configured so that the three-dimensional rotating object rotates automatically at a predetermined rotational angular velocity instead of the user inputting rotation instructions.
  • When the program selection operation mode starts, the initial coordinates of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 in the three-dimensional virtual space are read.
  • The perspective transformation means 106 performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the initial coordinates and the viewpoint coordinates, and outputs projection plane coordinates. That is, at the time of the initial display operation in the program selection operation mode, the coordinate conversion means 121 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 without transforming them.
  • The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden, non-displayed areas, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing.
  • The texture mapping means 110 pastes the texture read from the texture holding means 109 onto the raster information after hidden surface processing output by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108. Here, the correspondence between each surface of the three-dimensional rotating object and the texture is obtained by reading the correspondence information (plane-to-texture correspondence information) from the correspondence table holding means 117.
  • The rendering means 111 draws all pixel information such as the color and brightness of each pixel based on the frame information after texture mapping output from the texture mapping means 110 and the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads out the frame information stored in the frame buffer 112 at a predetermined timing and displays it on the screen (display of the image in the initial state of the program selection operation mode). When this display operation is completed, a display end signal is sent to the coordinate conversion means 121.
  • Upon receiving the display end signal from the screen display means 113, the coordinate conversion means 121 requests the rotation angle change pattern holding means 120 to output a parameter. In response to the request from the coordinate conversion means 121, the rotation angle change pattern holding means 120 outputs, based on the held rotation angle change pattern, a parameter changed so that the three-dimensional rotating object rotates from the state in which one surface faces the front until the adjacent surface faces the front.
  • The coordinate conversion means 121 receives the changed parameter output from the rotation angle change pattern holding means 120, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 according to the changed parameter, outputs the resulting changed model coordinates to the perspective transformation means 106, and outputs a counter control signal to the counter means 114.
  • The counter means 114 performs a count operation in accordance with the counter control signal output from the coordinate conversion means 121.
  • the perspective transformation means 106 performs perspective transformation to a display screen of a three-dimensional virtual space including the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates.
  • The hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as at the time of the initial display in the program selection operation mode, and a screen is displayed in which the three-dimensional rotating object has rotated by a predetermined angle from the initial state.
  • For example, if the three-dimensional rotating object has the shape shown in FIG. 2, the object displayed with surface 1 facing the front in the initial state rotates in the direction of the arrow in FIG. 2, and an image with surface 2 facing the front is displayed.
  • When this display is completed, the screen display means 113 sends a display end signal to the coordinate conversion means 121.
  • Thereafter, the above coordinate transformation, perspective transformation, hidden surface processing, texture mapping, rendering, and screen display processing are repeated, so that an image is displayed in which the three-dimensional rotating object bearing the textures indicating the program contents rotates automatically.
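  • A minimal sketch of this automatic rotation loop, assuming a fixed step angle and invented callback names, is shown below.

```python
import time

def auto_rotate(render_frame, counter, num_faces=6, steps=None, frame_delay=0.5):
    """Sketch of the automatic rotation of Embodiment 2.

    Each pass rotates the object from one face-front state to the next adjacent
    face-front state, re-renders the scene and increments the counter, mirroring
    the counter control signal issued per coordinate conversion.  The callback
    signature, step angle and delay are assumptions for illustration only.
    """
    step_angle = 360.0 / num_faces      # 60 degrees for the six-surface object of FIG. 2
    angle, done = 0.0, 0
    while steps is None or done < steps:
        angle = (angle + step_angle) % 360.0
        render_frame(angle)                             # coordinate transform ... screen display
        counter["count"] = counter.get("count", 0) + 1  # counter means 114 counts the step
        time.sleep(frame_delay)                         # wait for the next display end signal
        done += 1

# e.g. auto_rotate(lambda angle: None, {}, steps=6) performs one full revolution.
```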
  • The operations of the selected surface determination means 116, the program determining means 118, and the program executing means 119 are the same as those of the program selection and execution device according to the first embodiment.
  • That is, the selected surface determination means 116 obtains the current count value from the counter means 114 as count information, determines from this count information the face that was facing the front when the selection control signal was input, and outputs this face as selected surface information.
  • The program determining means 118 acquires the selected surface information from the selected surface determination means 116, refers to the plane-to-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the surface indicated by the selected surface information as selected program information.
  • The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
  • As described above, the program selection and execution device according to the second embodiment displays on the screen a three-dimensional rotating object (selection object) placed in a three-dimensional virtual space with a texture indicating the program contents pasted on each surface, automatically rotates the object on the screen by repeatedly changing the parameters so that the object rotates from a state in which one surface faces the front to a state in which the adjacent surface faces the front, and counts how many times the parameter change has been repeated. When the user performs a predetermined selection operation, the surface facing the user's viewpoint is determined from the count value, and the program associated with that surface is selected with reference to the correspondence table and started.
  • As in the program selection and execution device according to the first embodiment, rotation instruction input means 101 for manually instructing rotation may also be provided.
  • In that case, a timer may be started to measure a predetermined time, and the automatic rotation may be started when that time is exceeded. The rotation may also be stopped according to a user operation so that the program can then be selected.
  • FIG. 5 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 3 of the present invention.
  • 122 is a depth information holding means for holding the depth information extracted by the hidden surface processing means 107, and 123 is a selected surface determination means for determining the selected surface based on the depth information from the depth information holding means 122 and the selection control signal from the selection input means 115.
  • In the program selection and execution device according to the first embodiment, the surface to be selected (the surface facing the front) is determined by counting the number of rotation instructions; the program selection and execution device according to the third embodiment instead determines the surface facing most toward the front from the user's viewpoint based on the depth information obtained during hidden surface processing, rather than on the count value of the rotation instructions.
  • The display of the screen in the initial state of the program selection operation mode and the operation upon input of a rotation instruction control signal are exactly the same as those of the program selection and execution device according to the first embodiment, and their explanations are therefore omitted.
  • When the selection control signal is input, the selected surface determination means 123 obtains the current depth information from the depth information holding means 122, determines from this depth information the face that is facing the front, and outputs this face as selected surface information. For example, when the three-dimensional rotating object has the shape shown in FIG. 2, the selected surface determination means 123 determines that the surface located closest to the viewpoint in the depth information is the surface facing most toward the front.
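  • A minimal sketch of this depth-based determination, assuming each face is summarized by a single representative depth value, is shown below.

```python
def face_facing_front_by_depth(face_depths: dict) -> int:
    """Pick the face whose depth is smallest (closest to the viewpoint), in the
    spirit of the selected surface determination means 123.  Representing each
    face by one depth value is an assumption made for illustration.
    """
    return min(face_depths, key=face_depths.get)

# Example: face 3 is nearest to the viewpoint, so it is judged to face the front.
assert face_facing_front_by_depth({1: 4.2, 2: 3.1, 3: 1.0, 4: 3.1, 5: 4.2, 6: 5.0}) == 3
```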
  • The program determining means 118 acquires the selected surface information from the selected surface determination means 123, refers to the plane-to-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the surface indicated by the selected surface information as selected program information.
  • The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
  • As described above, the program selection and execution device according to the third embodiment displays on the screen a three-dimensional rotating object (selection object) placed in a three-dimensional virtual space with a texture indicating the program contents pasted on each surface; the user rotates the three-dimensional rotating object by giving instructions through predetermined operations, and when the user performs a predetermined selection operation, the surface facing most toward the front from the user's viewpoint is determined based on the depth information obtained during hidden surface processing, the program associated with that surface is selected by referring to the correspondence table, and that program is started.
  • Thus, by using a three-dimensional rotating object in a three-dimensional virtual space, it is possible to remind the user of the image of rolling a cylindrical rotating body in the real world, so that an intuitive operation environment that is easy to use even for users who are not accustomed to operating a personal computer can be realized.
  • In the third embodiment, as the rotation display control means that supplies a rotation display control signal for displaying an image in which the selection object rotates about its central axis in the three-dimensional virtual space, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103, that is, means for manually inputting rotation instructions, have been described; however, the rotation angle change pattern holding means 120 may be provided as in the program selection and execution device according to the second embodiment, and the rotation display control may be performed automatically.
  • FIG. 6 is a block diagram showing a configuration of a program selection and execution device according to a fourth embodiment of the present invention.
  • 124 is a parameter changing means that, based on the rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameter from the parameter holding means 102, changes the parameter, records it in the parameter holding means 102 as the changed parameter, and outputs rotation angle information.
  • 125 is a selected surface determination means that determines the selected surface based on the rotation angle information from the parameter changing means 124, the selection control signal from the selection input means 115, and the rotation angle-to-plane correspondence information from the rotation angle-to-plane correspondence holding means 126.
  • In the program selection and execution device according to the first embodiment, the surface to be selected (the surface facing the front) is determined by counting the number of rotation instructions; in the program selection and execution device according to the fourth embodiment, the surface facing the front of the user is instead determined from the correspondence between the rotation angle and the surfaces.
  • The display operation of the screen in the initial state of the program selection operation mode is exactly the same as that of the program selection and execution device according to the first embodiment, and the description is therefore omitted.
  • When a rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 124 reads the pre-change parameter (here, the parameter in the initial state) from the parameter holding means 102, changes the parameter, and records it in the parameter holding means 102 as the changed parameter.
  • The parameter changing means 124 also outputs a counter control signal to the counter means 114, and outputs rotation angle information indicating by what angle the three-dimensional rotating object has rotated from the initial state to the selected surface determination means 125.
  • The coordinate conversion means 105, the perspective transformation means 106, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as in the program selection and execution device according to the first embodiment, and the screen after input of the rotation instruction control signal is displayed.
  • When the selection control signal is input, the selected surface determination means 125 obtains the rotation angle information at that time from the parameter changing means 124, refers to the rotation angle-to-plane correspondence information held by the rotation angle-to-plane correspondence holding means 126, determines the surface facing the front, and outputs this surface as selected surface information.
  • FIG. 7 is a diagram for explaining an example of a method of determining the face facing the front in the program selection and execution device according to the fourth embodiment.
  • FIG. 7 shows an example of determination when the three-dimensional rotating object has the shape shown in FIG. 2, and shows a cross section of the three-dimensional rotating object.
  • In the program selection and execution device according to the fourth embodiment, for example as shown in FIG. 7, the perpendicular from the axis of rotation to surface 1 in the initial state is taken as the reference line for the angle, and the angle formed between this reference line and the perpendicular from the axis of rotation to surface 1 is detected as the rotation angle. The parameter changing means 124 detects this rotation angle and outputs it to the selected surface determination means 125 as rotation angle information.
  • The three-dimensional rotating object shown in FIG. 2 has six surfaces, so when it rotates 60 degrees from the state in which one surface faces the front, the next surface faces the front, and when it rotates 360 degrees from the initial state it makes one full turn and returns to the initial state (rotation angle 0 degrees).
  • The rotation angle-to-plane correspondence information held in the rotation angle-to-plane correspondence holding means 126 may be any information that divides the rotation angle range of 0 to 360 degrees into six equal 60-degree ranges and associates surface 1 to surface 6 with those ranges. Specifically, as shown in FIG. 7, surface 1 may be associated with rotation angles of 0 degrees or more and less than 30 degrees and of 330 degrees or more and less than 360 degrees (0 degrees); surface 2 with rotation angles of 30 degrees or more and less than 90 degrees; surface 3 with 90 degrees or more and less than 150 degrees; surface 4 with 150 degrees or more and less than 210 degrees; surface 5 with 210 degrees or more and less than 270 degrees; and surface 6 with 270 degrees or more and less than 330 degrees.
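  • A minimal sketch of this angle-to-face mapping for the six-surface object of FIG. 2 (face 1 spanning 330 to 360 degrees and 0 to 30 degrees) is shown below.

```python
def face_from_rotation_angle(angle_deg: float, num_faces: int = 6) -> int:
    """Map a rotation angle to the face judged to be facing the front, following
    the 60-degree ranges described for the rotation angle-to-plane correspondence
    information (face 1 spans 330-360 and 0-30 degrees).
    """
    angle = angle_deg % 360.0
    step = 360.0 / num_faces                       # 60 degrees for six faces
    return int(((angle + step / 2) % 360.0) // step) + 1

assert face_from_rotation_angle(0) == 1       # initial state
assert face_from_rotation_angle(345) == 1     # 330..360 also maps to face 1
assert face_from_rotation_angle(60) == 2
assert face_from_rotation_angle(200) == 4     # 150..210 -> face 4
```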
  • The program determining means 118 acquires the selected surface information from the selected surface determination means 125, refers to the plane-to-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the surface indicated by the selected surface information as selected program information.
  • The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
  • As described above, the program selection and execution device according to the fourth embodiment displays on the screen a three-dimensional rotating object (selection object) placed in a three-dimensional virtual space with a texture indicating the program contents pasted on each surface, rotates the three-dimensional rotating object when the user gives an instruction by a predetermined operation, and, when the user performs a predetermined selection operation, determines the surface facing the user's viewpoint based on the rotation angle information indicating by what angle the three-dimensional rotating object has rotated from the initial state, selects the program associated with that surface by referring to the correspondence table, and starts that program.
  • With this configuration, the use of a three-dimensional rotating object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, which makes it possible to realize an intuitive operation environment that is easy to use even for users who are not accustomed to operating a personal computer.
  • In the fourth embodiment, as the rotation display control means that supplies a rotation display control signal for displaying an image in which the selection object rotates about its central axis in the three-dimensional virtual space, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 124, that is, means for manually inputting rotation instructions, have been described; however, the rotation angle change pattern holding means 120 may be provided as in the program selection and execution device according to the second embodiment, and the rotation display control may be performed automatically.
  • FIG. 8 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 5 of the present invention.
  • Reference numeral 127 denotes a program executing means that executes a program based on the selected program information selected by the program determining means 118 and outputs the program execution screen information to the screen display switching means 128.
  • The screen display switching means 128 receives the program execution screen information output from the program executing means 127, switches to it from, or combines it with, the frame information from the frame buffer 112, and outputs the result to the screen display means 113.
  • That is, when the selected program has a display screen at the time of execution, the program selection and execution device according to the fifth embodiment is configured to switch from the display of the three-dimensional virtual space to the display of the program execution screen when the program is selected.
  • The display of the screen in the initial state of the program selection operation mode and the operation upon input of a rotation instruction control signal are exactly the same as those of the program selection and execution device according to the first embodiment, and their explanations are therefore omitted.
  • The selected surface determination means 116 obtains the current count value from the counter means 114 as count information, determines from this count information the face that was facing the front when the selection control signal was input, and outputs this face as selected surface information.
  • The program determining means 118 acquires the selected surface information from the selected surface determination means 116, refers to the plane-to-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the surface indicated by the selected surface information as selected program information.
  • the program executing means 127 executes the program specified by the selected program information input from the program determining means 118.
  • the program execution means 127 outputs the execution screen information of the program to the screen display switching means 128.
  • The screen display switching means 128 receives the program execution screen information output from the program executing means 127 and outputs it to the screen display means 113 in place of the frame information from the frame buffer 112.
  • As described above, the program selection and execution device according to the fifth embodiment displays on the screen a three-dimensional rotating object placed in a three-dimensional virtual space with textures indicating the program contents pasted on its surfaces; the user rotates the three-dimensional rotating object by giving instructions through predetermined operations, and when the user performs a predetermined selection operation, the surface facing most toward the front from the user's viewpoint is determined, the program associated with that surface is selected with reference to the correspondence table and started, and when the program starts, the display is switched from the three-dimensional rotating object in the three-dimensional virtual space to the program execution screen.
  • When the program execution screen is displayed, it may be displayed in full screen in place of the display of the three-dimensional virtual space, or a two-dimensional rectangular area may be created separately on the screen on which the three-dimensional virtual space is displayed and shown together with the three-dimensional virtual space.
  • Alternatively, a rectangular object with the program execution screen pasted on it as a texture may be generated, and the screen display may be switched by animating this rectangle, interpolating its path from the position of the selected surface of the three-dimensional rotating object to the position corresponding to full-screen display.
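  • A minimal sketch of such an interpolated switch, with an invented rectangle format and frame count, is shown below.

```python
def lerp_rect(start, end, t):
    """Linearly interpolate two rectangles given as (x, y, width, height)."""
    return tuple(s + (e - s) * t for s, e in zip(start, end))

def zoom_to_fullscreen(face_rect, screen_rect, frames=30):
    """Sketch of the animated switch described above: the rectangle carrying the
    program execution screen grows from the selected face's on-screen position to
    full screen.  The frame count and rectangle format are assumptions.
    """
    return [lerp_rect(face_rect, screen_rect, i / (frames - 1)) for i in range(frames)]

# e.g. animate from the face at (200, 150, 240, 180) to a 640x480 full screen
path = zoom_to_fullscreen((200, 150, 240, 180), (0, 0, 640, 480))
```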
  • In the fifth embodiment, as the rotation display control means that supplies a rotation display control signal for displaying an image in which the selection object rotates about its central axis in the three-dimensional virtual space, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103, that is, means for manually inputting rotation instructions, have been described; however, the rotation angle change pattern holding means 120 may be provided as in the program selection and execution device according to the second embodiment, and the rotation display control may be performed automatically.
  • Also, in the fifth embodiment, the selected surface determination means 116 determines the front-facing surface on the display screen based on the count information output from the counter means 114.
  • FIG. 9 is a block diagram showing a configuration of a data selection execution device according to Embodiment 6 of the present invention.
  • 101 is a rotation instruction input means for inputting an instruction for rotating a three-dimensional rotating object in the three-dimensional virtual space
  • 102 is a parameter holding means that holds the parameters for rotating the three-dimensional rotating object, and 103 is a parameter changing means that, based on the rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameter from the parameter holding means 102, changes the parameter, records it in the parameter holding means 102 as the changed parameter, and outputs a counter control signal.
  • 104 is a three-dimensional model coordinate holding means for holding the coordinate information of the objects constituting the three-dimensional virtual space including the three-dimensional rotating object
  • 105 is a coordinate conversion means that reads the parameter information from the parameter holding means 102, reads the three-dimensional model coordinates from the three-dimensional model coordinate holding means 104, converts the coordinates, and outputs the changed model coordinates; 106 is a perspective transformation means that performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the changed model coordinates output from the coordinate conversion means 105 and the viewpoint coordinates, and outputs projection plane coordinates.
  • Reference numeral 107 denotes a hidden surface processing means that reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden, non-displayed regions, extracts only the displayed regions, and outputs depth information and raster information after hidden surface processing; 108 is a depth information holding means for holding the depth information extracted by the hidden surface processing means 107.
  • 109 is a texture holding means for holding a texture to be attached to each surface.
  • the texture to be attached to the three-dimensional rotating object in the sixth embodiment is an image for identifying the corresponding data.
  • Reference numeral 110 denotes a texture mapping means that pastes the texture read from the texture holding means 109 onto the raster information after hidden surface processing output by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108.
  • Reference numeral 111 denotes a rendering means that draws all pixel information such as the color and brightness of each pixel based on the frame information after texture mapping output by the texture mapping means 110 and the depth information held by the depth information holding means 108; 112 is a frame buffer that holds the frame information drawn by the rendering means 111; and 113 is a screen display means that reads out the frame information stored in the frame buffer 112 at a predetermined timing and displays it.
  • 114 is a counter means that performs a counting operation in accordance with the counter control signal from the parameter changing means 103, and 115 is a selection input means with which the user determines and inputs the selection target.
  • 116 is a selected surface determination means for determining the selected surface based on the count information from the counter means 114 and the selection control signal from the selection input means 115.
  • 129 is a correspondence table holding means that holds a correspondence table indicating the correspondence between each surface composing the three-dimensional rotating object and the data (plane-to-data correspondence information), the correspondence between the data and the programs (data-to-program correspondence information), and the correspondence between each surface and the texture (plane-to-texture correspondence information).
  • FIG. 10 is a diagram showing an example of the correspondence table held by the correspondence table holding means 129.
  • 130 is a data determining means that determines the selected data by referring to the correspondence information (plane-to-data correspondence information) read from the correspondence table holding means 129 on the basis of the selected surface information output by the selected surface determination means 116; 131 is a program determining means that determines the program to be executed by referring to the correspondence information (data-to-program correspondence information) read from the correspondence table holding means 129 on the basis of the selected data information output by the data determining means 130; and 132 is a program executing means that executes a program based on the selected program information selected by the program determining means 131.
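  • A minimal sketch of the plane-to-data and data-to-program lookup chain, with invented file and program names, is shown below.

```python
# Hypothetical contents of the correspondence table of FIG. 10 and the lookup
# chain performed by the data determining means 130 and the program determining
# means 131.  The concrete data and program names are invented for illustration.
PLANE_TO_DATA = {1: "report.doc", 2: "budget.xls", 3: "holiday.mpg"}
DATA_TO_PROGRAM = {"report.doc": "word_processor",
                   "budget.xls": "spreadsheet",
                   "holiday.mpg": "movie_player"}

def select_and_resolve(selected_face: int):
    """Return (selected data, program that processes it) for the front-facing surface."""
    data = PLANE_TO_DATA[selected_face]       # plane-to-data correspondence
    program = DATA_TO_PROGRAM[data]           # data-to-program correspondence
    return data, program

# e.g. face 2 facing the front selects "budget.xls", to be opened by "spreadsheet"
assert select_and_resolve(2) == ("budget.xls", "spreadsheet")
```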
  • The data selection and execution device according to the sixth embodiment assigns data, such as application data of word processors and spreadsheets and multimedia data of video and music, to each surface of a three-dimensional rotating object placed in a three-dimensional virtual space, and opens the data associated with the surface facing the front when the user performs a predetermined selection operation.
  • When the data selection operation mode starts, the initial coordinates of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 in the three-dimensional virtual space are read, and the perspective transformation means 106 performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the initial coordinates and the viewpoint coordinates, and outputs projection plane coordinates. That is, at the time of the initial display operation, the coordinate conversion means 105 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 without transforming them.
  • The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden, non-displayed areas, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing.
  • The texture mapping means 110 pastes the texture read from the texture holding means 109 onto the raster information after hidden surface processing output by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108. Here, the correspondence between each surface of the three-dimensional rotating object and the texture is obtained by reading the correspondence information (plane-to-texture correspondence information) from the correspondence table holding means 129.
  • The rendering means 111 draws all pixel information such as the color and brightness of each pixel based on the frame information after texture mapping output from the texture mapping means 110 and the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads out the frame information held in the frame buffer 112 at a predetermined timing and displays it on the screen. As a result, the screen in the initial state of the data selection operation mode is displayed.
  • When a rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 reads the pre-change parameter (here, the parameter in the initial state) from the parameter holding means 102, changes the parameter, records the changed parameter in the parameter holding means 102, and outputs a counter control signal to the counter means 114.
  • The coordinate conversion means 105 reads the changed parameter recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 according to the changed parameter, and outputs the resulting changed model coordinates to the perspective transformation means 106.
  • The perspective transformation means 106 performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as at the time of the initial display operation in the data selection operation mode, and the screen after input of the rotation instruction control signal is displayed. For example, if the three-dimensional rotating object has the shape shown in FIG. 2, an object displayed with surface 1 facing the front in the initial state rotates in the direction of the arrow in FIG. 2 when a positive-direction rotation instruction control signal is input, and an image with surface 2 facing the front is displayed; when a negative-direction rotation instruction control signal is input, the object rotates in the opposite direction and an image with surface 6 facing the front is displayed.
  • The rotation instruction input means 101 may be configured so that the operation of the cursor keys on a remote control or keyboard, the movement of a mouse, and the like correspond to the rotation of the three-dimensional rotating object.
  • The counter means 114 performs the counting operation according to the counter control signal output from the parameter changing means 103. When a positive-direction rotation instruction control signal is input, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one; when a negative-direction rotation instruction control signal is input, it outputs a counter control signal that decrements the count value by one. The counter means 114 receives this counter control signal and changes the count value it holds.
  • When the user inputs a selection control signal from the selection input means 115 with the surface on which the data to be processed is displayed facing the front, the selected surface determination means 116 acquires the count value at that time from the counter means 114 as count information, determines from this count information the face that was facing the front when the selection control signal was input, and outputs this face as selected surface information.
  • The data determining means 130 obtains the selected surface information from the selected surface determination means 116, refers to the plane-to-data correspondence information held in the correspondence table holding means 129, and outputs the data corresponding to the surface indicated by the selected surface information as selected data information.
  • The program determining means 131 acquires the selected data information from the data determining means 130, refers to the data-to-program correspondence information held in the correspondence table holding means 129, and outputs the program that processes the data indicated by the selected data information as selected program information.
  • The program executing means 132 executes the program specified by the selected program information input from the program determining means 131.
  • As described above, the data selection and execution device according to the sixth embodiment displays on the screen a three-dimensional rotating object placed in a three-dimensional virtual space with textures representing the data contents pasted on its surfaces; the user rotates the three-dimensional rotating object by giving instructions through predetermined operations, and the device counts how many times the rotation instruction operation has been repeated.
  • When the user performs a predetermined selection operation, the surface facing the front of the user's viewpoint is determined from the count value, the data associated with that surface is selected by referring to the correspondence table, and a program for processing the selected data is started to open that data. Therefore, by using a three-dimensional rotating object in a three-dimensional virtual space, it is possible to remind the user of the image of rolling a cylindrical rotating body in the real world, thereby realizing an intuitive operating environment that is easy to use even for a user who is not accustomed to a personal computer.
  • In the sixth embodiment, the image (texture) for identifying the corresponding data is simply pasted on the surfaces of the three-dimensional rotating object and displayed; however, a texture displaying textual information such as the name of the data may be pasted on the surfaces of the three-dimensional object, or a texture created from an icon image or a still image extracted from a movie may be used and displayed together with the three-dimensional rotating object on the display screen 200 as shown in the figure.
  • In the sixth embodiment, as the rotation display control means that supplies a rotation display control signal for displaying an image in which the selection object rotates about its central axis in the three-dimensional virtual space, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103, that is, means for manually inputting rotation instructions, have been described; however, the rotation angle change pattern holding means 120 may be provided so that the rotation display control is performed automatically.
  • Also, in the sixth embodiment, the selected surface determination means 116 determines the front-facing surface on the display screen based on the count information output from the counter means 114.
  • FIG. 12 is a block diagram showing a configuration of a data selection execution device according to Embodiment 7 of the present invention.
  • In FIG. 12, 134 is a moving image reproducing means that starts the program indicated by the selected program information output by the program determining means 131, reproduces the moving image data indicated by the selected data information output by the data determining means 130, and outputs the result to the texture holding means 135.
  • In the data selection execution device according to the seventh embodiment, when the candidate data to be selected is a moving image, the moving image data is pasted on the corresponding surface as a texture; the surface facing the front is displayed as a playing moving picture, while the surfaces not facing the front have a still picture taken from the moving picture pasted on them.
  • In the data selection execution device according to the seventh embodiment, when the data selection operation mode starts, the initial coordinates of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 in the three-dimensional virtual space are read, and the perspective transformation means 106 performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the initial coordinates and the viewpoint coordinates, and outputs projection plane coordinates. That is, at the time of the initial display operation, the coordinate conversion means 105 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 without transforming them.
  • The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden, non-displayed areas, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing.
  • The texture mapping means 110 pastes the texture read from the texture holding means 135 onto the raster information after hidden surface processing output by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108.
  • At this time, for all data whose contents are to be displayed on the surfaces of the three-dimensional rotating object, the moving image reproducing means 134 refers to the plane-to-data correspondence information and the data-to-program correspondence information stored in the correspondence table holding means 129 and reproduces the data. For the surfaces not facing the front, one frame of the moving image of each data is output to the texture holding means 135 as a still image, while for the surface facing the front the data is continuously reproduced and the moving image is output to the texture holding means 135.
  • For example, when surface 1 faces the front, the moving image reproducing means 134 outputs one frame of the moving image of each data to the texture holding means 135 as a still image for surfaces 2 to 6, and for surface 1 continuously reproduces the data and outputs the moving image to the texture holding means 135.
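  • A minimal sketch of this per-face texture assignment, assuming callbacks that return a decoded movie frame and a fixed still frame, is shown below.

```python
def build_face_textures(front_face, face_to_data, decode_frame, still_frame):
    """Sketch of the texture assignment of Embodiment 7: the front face gets a
    live (continuously decoded) frame of its movie, the other faces get a fixed
    still frame.  decode_frame and still_frame are assumed callbacks that return
    an image for a given movie file; they are not part of the patent.
    """
    textures = {}
    for face, movie in face_to_data.items():
        if face == front_face:
            textures[face] = decode_frame(movie)   # updated every frame, so it plays as video
        else:
            textures[face] = still_frame(movie)    # one frame used as a still image
    return textures
```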
  • Here, the correspondence between each surface of the three-dimensional rotating object and the texture can be obtained by reading the correspondence information (plane-to-texture correspondence information) from the correspondence table holding means 129.
  • The rendering means 111 draws all pixel information such as the color and brightness of each pixel based on the frame information after texture mapping output by the texture mapping means 110 and the depth information held by the depth information holding means 108.
  • The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads out the frame information held in the frame buffer 112 at a predetermined timing and displays it on the screen. As a result, the screen in the initial state of the data selection operation mode is displayed.
  • When a rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 reads the pre-change parameter (here, the parameter in the initial state) from the parameter holding means 102, changes the parameter, records the changed parameter in the parameter holding means 102, and outputs a counter control signal to the counter means 114.
  • The coordinate conversion means 105 reads the changed parameter recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 according to the changed parameter, and outputs the resulting changed model coordinates to the perspective transformation means 106.
  • The perspective transformation means 106 performs perspective transformation to the display screen of the three-dimensional virtual space including the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as at the time of the initial display operation in the data selection operation mode, and the screen after input of the rotation instruction control signal is displayed. For example, if the three-dimensional rotating object has the shape shown in FIG. 2, when a positive-direction rotation instruction control signal is input, an image is displayed in which the object rotates in the direction of the arrow in FIG. 2 and surface 2 faces the front; when a negative-direction rotation instruction control signal is input, the object rotates in the direction opposite to the arrow in FIG. 2 and an image with surface 6 facing the front is displayed.
  • At this time, when surface 2 faces the front, the moving image reproducing means 134 outputs one frame of the moving image of each data to the texture holding means 135 as a still image for surface 1 and surfaces 3 to 6, and for surface 2 continuously reproduces the data and outputs the moving image to the texture holding means 135.
  • Similarly, when surface 6 faces the front, the moving image reproducing means 134 outputs one frame of the moving image of each data to the texture holding means 135 as a still image for surfaces 1 to 5, and for surface 6 continuously reproduces the data and outputs the moving image to the texture holding means 135.
  • The rotation instruction input means 101 may be configured so that the operation of the cursor keys on a remote control or keyboard, the movement of a mouse, and the like correspond to the rotation of the three-dimensional rotating object.
  • The counter means 114 performs the counting operation according to the counter control signal output from the parameter changing means 103. When a positive-direction rotation instruction control signal is input, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one; when a negative-direction rotation instruction control signal is input, it outputs a counter control signal that decrements the count value by one. The counter means 114 receives this counter control signal and changes the count value it holds.
  • When the user inputs a selection control signal from the selection input means 115, the selected surface determination means 116 obtains the count value at that time from the counter means 114 as count information, determines from this count information the face that was facing the front when the selection control signal was input, and outputs this face as selected surface information.
  • The data determining means 130 obtains the selected surface information from the selected surface determination means 116, refers to the plane-to-data correspondence information held in the correspondence table holding means 129, and outputs the data corresponding to the surface indicated by the selected surface information as selected data information.
  • The program determining means 131 obtains the selected data information from the data determining means 130, refers to the data-to-program correspondence information held in the correspondence table holding means 129, and outputs the program that processes the data indicated by the selected data information as selected program information.
  • The moving image reproducing means 134 executes the program specified by the selected program information input from the program determining means 131 and reproduces the selected data.
  • As described above, in the data selection execution device according to the seventh embodiment, the data corresponding to each surface of the three-dimensional rotating object arranged in the three-dimensional virtual space is reproduced as a moving image on the surface facing the front of the display screen, while textures of still images of the corresponding data are pasted on the surfaces other than the one facing the front.
  • The user rotates the three-dimensional rotating object by giving instructions through predetermined operations, the device counts how many times the rotation instruction operation has been repeated, and when the user performs a predetermined selection operation, the face facing the user's viewpoint is determined from the count value, the data associated with that face is selected with reference to the correspondence table, and a program for processing the selected data is started.
In the above embodiment, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103 are provided so that the rotation instruction is input manually; instead, a rotation display control means having an angle change pattern holding unit may be provided to supply a rotation display control signal for displaying an image in which the selection object rotates about the central axis in the three-dimensional virtual space, so that the rotation display control is performed automatically. Also, although the selected surface determination means 116 determines the surface facing the front on the display screen based on the count information output from the counter means 114, other configurations for determining the front-facing surface may be adopted.
FIG. 13 is a block diagram showing the configuration of a data selection execution device according to Embodiment 8 of the present invention. In FIG. 13, 136 is a next selection surface determination means that receives, from the selected surface determination means 116, selected surface information indicating the currently selectable surface (the surface determined to be facing the front), determines which surface becomes selectable next when the three-dimensional rotating object is rotated, and outputs next selection surface information indicating that next surface. 137 is a first data determination means that receives the selected surface information, refers to the correspondence information (surface-to-data correspondence information) read from the correspondence table holding means 129, determines the data corresponding to the currently selectable surface, and outputs selected data information. 138 is a first program determination means that refers to the correspondence information (data-to-program correspondence information) read from the correspondence table holding means 129 and, from the selected data information output by the first data determination means 137, decides the program to be executed. 139 is a data reproducing means that starts the program indicated by the selected program information output by the first program determination means 138, reproduces the data indicated by the selected data information output by the first data determination means 137, and outputs reproduction data 1. 140 is a second data determination means that determines the data corresponding to the next selectable surface and outputs next selection data information; 141 is a second program determination means that refers to the correspondence table using the next selection data information output by the second data determination means 140; and 142 is a next data reproducing means that reproduces that data and outputs reproduction data 2. 143 is a mixing means that receives reproduction data 1 and reproduction data 2 and creates and outputs mixed data according to the rotation of the three-dimensional rotating object, and 144 is a data output means that displays the mixed data from the mixing means 143 as images or sounds.

The data selection execution device according to Embodiment 8 handles data that changes over time, such as voice or music data, moving image data, or voice/music data attached to moving image data, and switches from the data corresponding to the surface facing the front at one point to the data of the next surface by fading in and fading out, based on a pattern of mixing ratios of volume and brightness level that depends on the rotation angle.
In the data selection execution device according to Embodiment 8, when the data selection operation mode starts, the initial coordinates in the three-dimensional virtual space of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 are read out, and the perspective transformation means 106 performs a perspective transformation of the three-dimensional virtual space containing the three-dimensional rotating object onto the display screen using the initial coordinates and the viewpoint coordinates, and outputs projection plane coordinates. That is, during the initial display operation in the data selection operation mode, the coordinate conversion means 105 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 without performing any coordinate conversion.

The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden areas that are not displayed, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing. The texture mapping means 110 pastes the texture read from the texture holding means 109 onto the raster information after hidden surface processing, in which the depth has been taken into account by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108. The correspondence between each surface of the three-dimensional rotating object and the texture is obtained by reading the correspondence information (surface-to-texture correspondence information) from the correspondence table holding means 129. The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, based on the frame information after texture mapping output by the texture mapping means 110 and the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads the frame information held in the frame buffer 112 at a predetermined timing and displays the screen. This displays the screen in the initial state of the data selection operation mode. A minimal sketch of the perspective transformation step appears below.
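The following is a minimal sketch of a perspective transformation onto the display screen, assuming a simple pinhole projection; the viewpoint distances and the coordinate convention are illustrative and not taken from the patent.

```python
# Sketch: projecting model coordinates of the rotating object onto the projection plane.

def perspective_transform(vertices, eye_to_screen=2.0, eye_to_origin=6.0):
    """Project 3-D vertices (x, y, z) onto the projection plane.
    The viewpoint is assumed to look toward the virtual-space origin along +z."""
    projected = []
    for x, y, z in vertices:
        depth = eye_to_origin + z        # distance from the viewpoint to the vertex plane
        scale = eye_to_screen / depth    # similar-triangles projection factor
        projected.append((x * scale, y * scale, depth))  # depth kept for hidden-surface use
    return projected

# one face of the rotating object, given as four vertices in model coordinates
face = [(-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]
print(perspective_transform(face))
```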
Of the surfaces constituting the three-dimensional rotating object, the data reproducing means 139 and the next data reproducing means 142 handle the data corresponding to the surface facing the front and the surface that will face the front next, respectively; the corresponding data is reproduced and output to the mixing means 143. Here, the data reproducing means 139 reproduces the data corresponding to surface 1 and the next data reproducing means 142 reproduces the data corresponding to surface 2, and both are output to the mixing means 143. In the initial display state, the mixing means 143 outputs a composite signal with a mixing ratio that maximizes the reproduction signal of the data corresponding to surface 1 and minimizes the reproduction signal of the data corresponding to surface 2. That is, in the initial display state only the reproduction signal of the data corresponding to surface 1 is output to the data output means 144, and the data output means 144 displays that reproduction signal as an image or sound. As a method of displaying the image, for example, a three-dimensional rotating object is displayed on a display screen 200 as shown in FIG. 11.
When a rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 reads the parameter before the change (here, the parameter in the initial state) from the parameter holding means 102, changes the parameter, and records the changed parameter in the parameter holding means 102. As the rotation instruction input, the operation of the cursor keys on a remote controller or keyboard, the movement of a mouse, or the like may be associated with the rotation of the three-dimensional rotating object. The coordinate conversion means 105 reads the changed parameters recorded in the parameter holding means 102, converts the initial coordinates read from the three-dimensional model coordinate holding means 104 using the changed parameters, and outputs the resulting changed model coordinates to the perspective transformation means 106. The perspective transformation means 106 performs a perspective transformation onto the display screen of the three-dimensional virtual space containing the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as during the initial display operation in the data selection operation mode, and the screen after the rotation instruction control signal has been input is displayed.
While the mixing means 143 sets the reproduction signal of the data corresponding to surface 1 to the maximum and the reproduction signal of the data corresponding to surface 2 to the minimum in the initial display state, it gradually decreases the mixing ratio of the reproduction signal of the data corresponding to surface 1 and gradually increases that of the data corresponding to surface 2 so that, at time t1, it outputs a composite signal whose mixing ratio minimizes the reproduction signal of the data corresponding to surface 1 and maximizes the reproduction signal of the data corresponding to surface 2. As a result, as shown in FIG. 14(b), the display of the reproduction signal of the data corresponding to surface 1 and the display of the reproduction signal of the data corresponding to surface 2 are switched by cross-fading. After this switch, the data reproducing means 139 switches the data to be reproduced from the data corresponding to surface 1 to the data corresponding to surface 2, and the next data reproducing means 142 switches from the data corresponding to surface 2 to the data corresponding to surface 3. The mixing means 143 then outputs a composite signal so that the display of the reproduction signal of the data corresponding to surface 2 and that of the data corresponding to surface 3 are likewise switched by cross-fading. A sketch of such a mixing-ratio calculation is shown below.
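The following sketch illustrates one possible cross-fade rule for the mixing means; the linear ramp and the 60-degree face spacing are assumptions, not formulas given in the patent.

```python
# Sketch: cross-fade mixing of the front face's data and the next face's data
# as a function of the rotation angle between the two faces.

def mix(signal_front, signal_next, angle, face_angle=60.0):
    """Linear cross-fade: at angle 0 only the front face's data is output;
    at angle == face_angle (next face fully facing the front) only the next
    face's data is output."""
    ratio = min(max(angle / face_angle, 0.0), 1.0)
    return (1.0 - ratio) * signal_front + ratio * signal_next

# e.g. audio sample values while the object is halfway between two faces
print(mix(signal_front=0.8, signal_next=0.2, angle=30.0))  # -> 0.5
```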
When the user inputs a selection control signal from the selection input means 115 while the surface associated with the data the user wants to process is facing the front, the selected surface determination means 116 outputs a selection display signal indicating that the surface indicated by the selected surface information being output at that time has actually been selected. The first data determination means 137 and the first program determination means 138 pass the selection display signal on to the data reproducing means 139. On receiving the selection display signal, the data reproducing means 139 reproduces the selected data from the beginning using the currently executing program and outputs the reproduced data together with the selection display signal to the mixing means 143. On receiving the selection display signal, the mixing means 143 stops mixing reproduction data 1 and reproduction data 2 and outputs reproduction data 1 and the selection display signal to the data output means 144. When the data output means 144 receives the selection display signal, it switches the screen display from the screen on which the selection object is displayed to the data display screen and displays reproduction data 1.
As described above, the data selection execution device according to Embodiment 8 displays on the screen textures indicating the data content pasted on each surface of the three-dimensional rotating object placed in the three-dimensional virtual space, continuously presents, as an auxiliary display, the music data or moving image data associated with the surface facing the front, and, when the user performs a predetermined selection operation, reproduces the data associated with the surface that most directly faces the user's viewpoint. Because the three-dimensional rotating object in the three-dimensional virtual space is reminiscent of rolling a cylindrical rotating body in the real world, an intuitive operating environment that is easy to use even for users unfamiliar with personal computers is provided, and since music and video are presented as an aid together with the selection object, a data selection execution device that allows the user to select data comfortably can be realized.
In Embodiment 8, the display of the reproduction signal of the data associated with the selected surface and that of the data associated with the next selected surface are switched by cross-fading; instead, as shown in FIG. 14(c), the display of the reproduction signal of the data associated with the selected surface may be faded out and then the display of the reproduction signal of the data associated with the next selected surface faded in. In this case there is no need to reproduce the two pieces of data at the same time, so the data determination means, the program determination means, and the data reproducing means need not be duplicated.
Also, in Embodiment 8 the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103 are provided so that the rotation instruction is input manually; instead, a rotation display control means having an angle change pattern holding unit may be provided to supply a rotation display control signal for displaying an image in which the selection object rotates about the central axis in the three-dimensional virtual space, so that the rotation display control is performed automatically. Further, although the selected surface determination means 116 determines the surface facing the front on the display screen based on the count information output from the counter means 114, other configurations for determining the front-facing surface may be adopted.
FIG. 15 is a diagram for explaining the operation of switching the reproduced sound display when a three-dimensional sound technique is applied in the data selection execution device according to Embodiment 8. Here, the number of surfaces constituting the three-dimensional rotating object is six, the central axis of rotation is arranged vertically in the three-dimensional virtual space, and the three-dimensional rotating object is viewed from the direction of the central axis of rotation. At first, the sound is displayed as if the sound source of the audio data corresponding to surface 1 (corresponding to reproduction data 1 in FIG. 13) were at the center of the screen and the sound source of the audio data corresponding to surface 2 (corresponding to reproduction data 2 in FIG. 13) were in the space on the left side as seen from the screen. As the rotation proceeds, the sound source position in the audio display is controlled so that the sound source of the audio data corresponding to surface 1 moves into the space on the right side toward the screen while the sound source of the audio data corresponding to surface 2 moves toward the center of the screen; at the time shown in the figure, the sound source of the audio data corresponding to surface 2 is displayed at the center of the screen and the sound source of the audio data corresponding to surface 1 is displayed in the space on the right side of the screen. Although one method of arranging the sound sources of the audio data has been described here, other methods may be used; for example, as shown in FIG. 17, the sound source of the audio data corresponding to a surface may be arranged by projecting a point at a predetermined distance on the extension of the straight line connecting the rotation axis and the center of that surface onto a straight line parallel to the display screen. One possible way to compute such source positions is sketched below.
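The following is one possible, loosely related way to place the sound sources; the radius, the left/right sign convention, and the face count are assumptions rather than values from the patent.

```python
import math

# Sketch: screen-parallel sound-source positions for a six-sided rotating object.

NUM_FACES = 6
RADIUS = 3.0  # the "predetermined distance" from the rotation axis (assumed)

def source_positions(rotation_deg):
    """Return the screen-parallel (x) position of each face's sound source,
    assuming face 1 sits at the screen centre (x == 0) when rotation_deg == 0."""
    positions = {}
    for face in range(1, NUM_FACES + 1):
        # angle of the face centre measured from the front direction
        angle = math.radians(rotation_deg + (face - 1) * 360.0 / NUM_FACES)
        positions[face] = RADIUS * math.sin(angle)  # projection onto the screen-parallel axis
    return positions

pos = source_positions(rotation_deg=0.0)
print(round(pos[1], 2), round(pos[2], 2))  # face 1 at the centre, face 2 off to one side
```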
FIG. 18 is a block diagram showing the configuration of a video display device according to Embodiment 9 of the present invention. In FIG. 18, 1101 is a video receiving means that receives an input signal transmitted via broadcast or a network and outputs an input video signal, and 1104 is a memory means for holding the input video signal. 1103 is a memory input/output control means that writes the input video signal to the memory means 1104 and outputs a memory control signal to the memory means 1104 according to area cutout information indicating the positions at which the areas used as textures are cut out from the input video signal. 1102 is a parameter separation means. 1105 is an object position determining means that places a three-dimensional object in the three-dimensional virtual space based on the three-dimensional coordinate information from the parameter separation means 1102 and outputs the object coordinate information of the three-dimensional object in the three-dimensional virtual space, together with object arrangement order information derived from the object coordinate information in accordance with user input. 1110 is an object position comparing means that compares the positions of the objects based on the object arrangement order information output from the object position determining means 1105 in accordance with user input, selects an object under predetermined conditions, and outputs selected object information. 1111 is a channel determining means that determines the channel corresponding to the selected object from the selected object information from the object position comparing means 1110 and the channel correspondence information from the parameter separation means 1102, and outputs channel information. 1106 is a perspective projection conversion means that perspectively projects the object coordinate information of the three-dimensional object from the object position determining means 1105 onto the display projection plane and converts it into projection plane coordinate information; 1107 is a rasterizing means that pastes the partial video signal read out by the memory input/output control means 1103 onto the specified surface of the three-dimensional object based on the projection plane coordinate information from the perspective projection conversion means 1106. 1108 is a frame memory means for outputting an output video signal, and 1109 is a video display means that displays the output video signal from the frame memory means 1108 or the input video signal from the video receiving means 1101.
The video display device according to Embodiment 9 cuts out the areas used as textures from an input video signal transmitted via broadcast or a network, such as a multi-screen video, pastes those textures onto the surfaces of a three-dimensional rotating object placed in the three-dimensional virtual space, and performs channel selection. Here it is assumed that the initial channel of the input signal input to the video receiving means 1101 is a multi-screen video composed of a plurality of partial videos. When an input signal such as a split screen or multi-screen composed of a predetermined number of independent videos is input to the video receiving means 1101, the video receiving means 1101 outputs the input video signal to the memory input/output control means 1103. The memory input/output control means 1103 outputs a memory control signal to the memory means 1104 based on the cutout coordinates in the area cutout information, extracts a partial video signal from the input video signal held in the memory means 1104, and outputs the partial video signal to the rasterizing means 1107. The rasterizing means 1107 pastes the partial video signal as a texture onto the three-dimensional object perspectively projected onto the display, based on the projection plane coordinate information from the perspective projection conversion means 1106; at this time, the rasterizing means 1107 repeats this processing for the number of partial videos constituting the multi-screen. The frame memory means 1108 outputs the output video signal to the video display means 1109 at a predetermined display timing, and the user views the video. The video displayed at this time is one in which the partial video signals separated from the input video signal are pasted as textures onto the surfaces of the three-dimensional rotating object placed in the three-dimensional virtual space; FIG. 2 shows an example of such a three-dimensional rotating object.
The object position determining means 1105 determines the position of the three-dimensional object in the three-dimensional virtual space based on the three-dimensional coordinate information and outputs the object coordinate information to the perspective projection conversion means 1106. The perspective projection conversion means 1106 performs a perspective projection conversion of the object coordinate information onto the display projection plane and outputs it to the rasterizing means 1107 as projection plane coordinate information. The object position determining means 1105 also outputs the object arrangement order information to the object position comparing means 1110 in accordance with user input; the object position comparing means 1110 compares the positional relationship between the objects, determines the selected object under predetermined conditions, and outputs the selected object information to the channel determining means 1111. The channel determining means 1111 refers to the channel correspondence information output from the parameter separation means 1102, determines the channel corresponding to the selected object information output from the object position comparing means 1110, and outputs it to the video receiving means 1101 as channel information. The video receiving means 1101 switches the receiving channel based on the channel information and outputs the input video signal to the video display means 1109. The video display means 1109 then stops displaying the output video signal from the frame memory means 1108 and switches to displaying the input video signal; the displayed video is a full-screen display of the channel selected by the user.
FIG. 19 is a conceptual diagram of the three-dimensional display according to Embodiment 9. In FIG. 19, 201 is the input video signal in the case of a 4-split multi-screen, 202 is the three-dimensional rotating object arranged in the three-dimensional virtual space, and 203 is the display projection plane when the three-dimensional rotating object is projected onto the display. The three-dimensional rotating object arranged in the three-dimensional virtual space is a three-dimensional object composed of a plurality of surfaces, each arranged at regular intervals around a central axis; in this example the three-dimensional rotating object is composed of four surfaces, and the central axis of rotation is arranged vertically in the three-dimensional virtual space.
FIG. 20 is an explanatory diagram of the information necessary for the three-dimensional display according to Embodiment 9. FIG. 20(a) shows the input video of the 4-split multi-screen, and the lower part of the figure shows the vertex coordinates (1) of the cutout areas along the division boundaries of the partial videos. FIG. 20(b) shows the three-dimensional object, and the lower part of the figure shows the vertex coordinates of the three-dimensional object (2), the correspondence between the vertex coordinates of the three-dimensional object and the area cutout coordinates of the partial videos (3), and, as information necessary for the perspective transformation, the distance from the viewpoint to the display projection plane and the distance from the viewpoint to the origin of the three-dimensional virtual space (4). The parameter information input to the parameter separation means 1102 consists of the coordinate information and perspective transformation information shown in (1) to (4) of FIG. 20 and the channel correspondence information for each surface of the three-dimensional object. The parameter separation means 1102 outputs the three-dimensional coordinate information, composed of the vertex coordinates of the three-dimensional object (2), the correspondence between the vertex coordinates of the three-dimensional object and the cutout coordinates (3), and the information for the perspective transformation (4), to the object position determining means 1105; it outputs the cutout coordinates (1) to the memory input/output control means 1103 as area cutout information, and the channel correspondence information to the channel determining means 1111. A three-dimensional animation display is possible by preparing coordinate information in which the values of the parameter information change with time. An illustrative sketch of such cutout information for a 4-split multi-screen is given below.
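The sketch below illustrates cutout vertex coordinates along the division boundaries of a 4-split multi-screen and their correspondence with the object surfaces; the frame size and the face numbering are assumptions.

```python
# Sketch: area cutout coordinates (1) and face-to-region correspondence (3) for a 4-split frame.

FRAME_W, FRAME_H = 720, 480   # size of the multi-screen input video (assumed)

def cutout_rects_4split(w=FRAME_W, h=FRAME_H):
    """Vertex coordinates (left, top, right, bottom) of the four partial videos."""
    return {
        1: (0,      0,      w // 2, h // 2),  # top-left partial video
        2: (w // 2, 0,      w,      h // 2),  # top-right
        3: (0,      h // 2, w // 2, h),       # bottom-left
        4: (w // 2, h // 2, w,      h),       # bottom-right
    }

# correspondence: face n of the rotating object <- partial video n
face_to_region = dict(cutout_rects_4split())
print(face_to_region[1])   # cut-out rectangle whose texture is pasted on face 1
```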
FIG. 21 is an explanatory diagram of the channel selection method according to Embodiment 9. In FIG. 21, 204 is the input video signal in the case of a 4-split multi-screen; the three-dimensional objects corresponding to the four partial videos are arranged in a circle and rotated to display a three-dimensional animation. 205 is a view of the three-dimensional objects from above, with time elapsing from left to right in the figure; 206 is the image on the display projection plane, and 207 is the selected image. In step S1, if the selection button is pressed at the time indicated by the arrow, a channel is selected according to predetermined criteria. In step S2, the object that is closest to the viewpoint and has the largest display area is taken as the judgment criterion; here the corresponding partial video is circle 1, and the image switched to the channel corresponding to circle 1 is displayed (207).
FIG. 22 is an explanatory diagram of the criteria for channel selection according to Embodiment 9. FIG. 22(a) shows a first criterion for channel selection and FIG. 22(b) a second criterion; 208 and 210 are views of the three-dimensional objects from above, and 209 and 211 are the images on the display projection plane. The criterion in FIG. 22(b) is the degree to which an object is inclined with respect to the display surface, that is, the judgment is made based on the absolute value of the angle formed between the reference axis and the straight line (dotted line in the figure) connecting the reference position of the object and the center of rotation. In FIG. 22(b), PQ is the reference axis, O is the center of rotation, A1 is the reference position of circle 1, A2 is the reference position of circle 2, A3 is the reference position of circle 3, and A4 is the reference position of circle 4. A sketch of this second criterion appears below.
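The following sketch illustrates the second criterion: the object whose reference line makes the smallest absolute angle with the reference axis is selected. The top-view coordinates, the orientation of the reference axis, and the reference positions are assumptions used only for illustration.

```python
import math

# Sketch: select the object most squarely facing the viewer (FIG. 22(b) style criterion).

def angle_to_reference_axis(reference_pos, centre):
    """Absolute angle between the line centre->reference_pos and the reference axis,
    taken here as the axis pointing from the rotation centre toward the viewer."""
    dx = reference_pos[0] - centre[0]
    dz = reference_pos[1] - centre[1]
    return abs(math.degrees(math.atan2(dx, dz)))  # 0 deg == facing the viewer squarely

def select_object(reference_positions, centre=(0.0, 0.0)):
    """Return the object (circle 1..4) with the minimum inclination."""
    return min(reference_positions,
               key=lambda k: angle_to_reference_axis(reference_positions[k], centre))

# top-view reference positions A1..A4 of four faces at some instant (assumed values)
refs = {1: (0.2, 1.0), 2: (1.0, 0.2), 3: (-0.2, -1.0), 4: (-1.0, 0.2)}
print(select_object(refs))  # -> 1 : circle 1 faces the viewer most squarely
```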
An affine transformation means may also be used in place of the perspective projection conversion means 1106; whereas the perspective projection conversion means 1106 performs three-dimensional coordinate calculations, the affine transformation means performs two-dimensional coordinate calculations, so the amount of calculation can be reduced. FIG. 23 is an explanatory diagram of the difference between perspective projection transformation and affine transformation. In FIG. 23, 212 is the image serving as the source of the texture mapping, shown with a grid pattern for ease of explanation; 213 is the image in the case of perspective projection transformation, and 214 is the image in the case of affine transformation. In the perspective projection image 213 the grid width becomes wider toward the front, whereas in the affine transformation image 214 the grid width is almost uniform. The perspective projection transformation can therefore express the sense of depth better than the affine transformation, but in either case the depth relationship seen in the overview of the object is maintained.
As described above, according to Embodiment 9, an input video signal transmitted via broadcast or a network and composed of a predetermined number of independent videos, such as a split screen or multi-screen, is displayed with textures pasted onto predetermined surfaces of a three-dimensional object, and the channel is switched when the user performs a predetermined selection operation; the procedure of selecting the target program by moving a cursor can therefore be omitted, and even when the number of divisions increases and the partial video per program becomes smaller, the object can be enlarged and displayed by arranging it near the viewpoint in the three-dimensional virtual space. Furthermore, the effect of a three-dimensional animation can be obtained by changing the coordinates of the three-dimensional object of the predetermined three-dimensional shape information with time.

In Embodiment 9, the three-dimensional rotating object is composed of four surfaces and the central axis of rotation is arranged vertically in the three-dimensional virtual space; however, the number of surfaces constituting the three-dimensional rotating object is not limited to four and may be smaller or larger, or the rotating body to be displayed may be changed according to the input video signal to be handled, and the central axis of rotation may be arranged horizontally or obliquely in the three-dimensional virtual space. Also, in Embodiment 9 the parameter information is separated into the area cutout information and the three-dimensional coordinate information by the parameter separation means 1102, but the configuration is not limited to this; the parameter information and the area cutout information may be multiplexed into the input signal, input to the video receiving means 1101, and separated there.
FIG. 24 is a block diagram showing the configuration of a video display device according to Embodiment 10 of the present invention. In this configuration, the memory means 1104 does not hold the entire input signal but holds only the partial video signals necessary for the texture mapping processing of the rasterizing means 1107. That is, when the video display device according to Embodiment 10 cuts out areas from the input video signal and pastes them onto the surfaces of the object in the three-dimensional virtual space, it does not hold the whole video in memory but retains only the cut-out areas. This video display device is the same as that of Embodiment 9 except for the operation of the configuration to which the area separating means 1301 is added, so only the parts that differ from Embodiment 9 are described.
Parameter information including the vertex coordinates of the cutout areas along the division boundaries of the partial videos of the input video signal is input to the parameter separation means 1102. The area cutout information output from the parameter separation means 1102 is input to the area separating means 1301 and also to the memory input/output control means 1103. The area separating means 1301 separates the areas of the input video signal output from the video receiving means 1101 according to the area cutout information and outputs them to the memory input/output control means 1103 as a memory storage video signal. The memory input/output control means 1103 outputs a memory control signal to the memory means 1104 based on the cutout coordinates in the area cutout information, extracts a partial video signal from the memory storage video signal held in the memory means 1104, and outputs it to the rasterizing means 1107. The difference from Embodiment 9 is therefore that the memory means 1104 does not hold the entire input video signal but holds only the partial video signals necessary for the texture mapping processing of the rasterizing means 1107.

FIG. 25 is an explanatory diagram of the memory retention of partial videos according to Embodiment 10. In FIG. 25, 215 is the input video signal in the case of a 4-split multi-screen, 216 is the partial video signal to be held in memory, 217 is a view of the three-dimensional objects from above with time elapsing from left to right in the figure, and 218 is the image on the display projection plane. From the input video signal 215, the areas indicated by the area cutout information from the parameter separation means 1102 are extracted as the partial video signal 216, and only these areas are held in the memory means 1104. The partial video signal 216 held in the memory means 1104 is output to the rasterizing means 1107 and texture-mapped onto the predetermined surfaces of the three-dimensional object; therefore, only the partial videos that appear in the image 218 on the display projection plane are stored in memory, and videos that are not projected are not retained.

As described above, according to Embodiment 10, the video receiving means 1101 receives the input signal composed of a predetermined number of partial videos transmitted via broadcast or a network and outputs the input video signal, and the areas are separated from the input video signal according to the area cutout information; when pasting onto the surfaces of the object in the three-dimensional virtual space, the whole video is not stored in memory and only the cut-out areas are held, so the amount of memory can be reduced. A sketch of this idea follows.
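The following is a minimal sketch of the memory-saving idea: only the regions that will actually be texture-mapped onto visible faces are copied out of the full frame. The array shapes and region coordinates are assumptions.

```python
import numpy as np

# Sketch: retain in memory only the partial-video regions of the visible faces.

def retain_visible_regions(frame, face_to_region, visible_faces):
    """frame: full multi-screen image (H x W x 3); only the regions belonging to the
    currently visible faces are copied into the memory means."""
    memory = {}
    for face in visible_faces:
        left, top, right, bottom = face_to_region[face]
        memory[face] = frame[top:bottom, left:right].copy()  # partial video signal
    return memory

frame = np.zeros((480, 720, 3), dtype=np.uint8)          # assumed 4-split input frame
regions = {1: (0, 0, 360, 240), 2: (360, 0, 720, 240)}   # cut-out info for faces 1 and 2
held = retain_visible_regions(frame, regions, visible_faces=[1, 2])
print({f: img.shape for f, img in held.items()})          # only two regions are stored
```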
In Embodiment 10, the parameter information is separated into the area cutout information and the three-dimensional coordinate information by the parameter separation means 1102, but the present invention is not limited to this; the parameter information and the area cutout information may be multiplexed into the input signal, input to the video receiving means 1101, and separated there.
Embodiment 11.

FIG. 26 is a block diagram showing the configuration of a video display device according to Embodiment 11 of the present invention. In FIG. 26, 1401 is a parameter generating means that automatically generates the three-dimensional coordinate information and the area cutout information based on the number of areas, and 1402 is a video analysis means that analyzes the input video signal, a multi-screen video composed of a plurality of partial videos input from the video receiving means 1101, measures the number of partial videos, and outputs area count information to the parameter generating means 1401. The video display device according to Embodiment 11 recognizes, after reception, the number of divisions of the video transmitted as a multi-screen and generates the shape information of the three-dimensional object according to that number of divisions. This video display device is the same as that of Embodiment 9 except for the operation of the configuration to which the parameter generating means 1401 and the video analysis means 1402 are added, so only the parts that differ from Embodiment 9 are described.

The video receiving means 1101 receives an input signal composed of a predetermined number of partial videos transmitted via broadcast or a network and outputs the input video signal to the memory input/output control means 1103 and to the video analysis means 1402. The video analysis means 1402 outputs to the parameter generating means 1401 the area count information determined from the input video signal by a predetermined method. Based on the area count information, the parameter generating means 1401 automatically generates parameter information composed of the three-dimensional coordinate information and the area cutout information indicating the positions at which the areas used as textures are cut out from the input video signal, separates the area cutout information and the three-dimensional coordinate information, outputs the area cutout information to the memory input/output control means 1103, and outputs the three-dimensional coordinate information to the object position determining means 1105.
FIG. 27 is an explanatory diagram of the generation of the three-dimensional information according to Embodiment 11. FIG. 27(a) shows the input video signal in the case of a 2-split screen, FIG. 27(b) in the case of a 4-split screen, FIG. 27(c) in the case of a 6-split screen, and FIG. 27(d) in the case of a 9-split screen. The lower diagram for each divided input video signal is an example of the arrangement of the automatically generated three-dimensional object as seen from above. The n-split input video signal is output to the video analysis means 1402, which determines the number of divisions of the video (here n); n surfaces of the three-dimensional object onto which the textures will be pasted are then prepared according to the number of divisions and arranged at equal intervals in a circle so as to form an n-gon (lower diagrams of FIG. 27); a sketch of this arrangement computation is given after this paragraph. As described above, according to Embodiment 11, an input signal composed of a predetermined number of partial videos transmitted via broadcast or a network is received by the video receiving means 1101 and the input video signal is output, and the video analysis means 1402 determines the number of video divisions and generates the shape information of the three-dimensional object according to that number, so a video display device that adapts automatically to the number of divisions of the input video can be realized.
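The sketch below illustrates the automatic arrangement of n faces at equal intervals around the rotation axis so that, seen from above, they form an n-gon; the radius is an assumed value.

```python
import math

# Sketch: generate a top-view n-gon arrangement of faces from the number of divisions n.

def generate_face_centres(n, radius=2.0):
    """Top-view (x, z) centre and facing angle of each of the n faces,
    spaced at equal intervals around the rotation axis."""
    faces = []
    for i in range(n):
        angle = 2.0 * math.pi * i / n
        centre = (radius * math.sin(angle), radius * math.cos(angle))
        faces.append({"face": i + 1, "centre": centre, "angle_deg": math.degrees(angle)})
    return faces

for f in generate_face_centres(4):   # 4-split multi-screen -> square arrangement
    print(f)
```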
Although FIG. 27 shows an example in which the surfaces of the three-dimensional object are arranged in a circle, the present invention is not limited to this, and the surfaces may instead be arranged shifted in the depth direction. Also, in Embodiment 11 the parameter information is generated automatically by the parameter generating means 1401 based on the area count information output from the video analysis means 1402, but the configuration is not limited to this; the parameter information and the area cutout information may be multiplexed into the input signal, input to the video receiving means 1101, and separated there.
FIG. 28 is a block diagram showing the configuration of a video display device according to Embodiment 12 of the present invention. In FIG. 28, 1508 is a video receiving means 1 that receives a first input signal transmitted via broadcast or a network and outputs a first input video signal composed of a predetermined number of partial videos, and 1502 is a video receiving means 2 that selectively receives a second input signal transmitted via broadcast or a network based on channel information and outputs a second input video signal. 1511 is an enlargement/deformation means that enlarges and deforms the partial video signal output from the memory input/output control means 1103 and outputs a partial video enlargement/deformation signal, and 1505 is a video switching means that outputs an output video signal by switching, at a predetermined timing, between the three-dimensional output video signal output from the frame memory means 1108 and the partial video enlargement/deformation signal output from the enlargement/deformation means 1511. The video display device according to Embodiment 12 switches the displayed video smoothly when switching to the full-screen display of the selected channel. This video display device is the same as that of Embodiment 9 except that the video receiving means 1101 is replaced by the video receiving means 1 (1508), the video receiving means 2 (1502) is arranged, and the enlargement/deformation means 1511 and the video switching means 1505 are added, so only the parts that differ from Embodiment 9 are described.
The video receiving means 1 (1508) receives the input signal 1, which is a multi-screen video channel composed of a plurality of partial videos, and outputs the input video signal 1 to the memory input/output control means 1103; this input video signal 1 is used to generate the three-dimensional display video as in Embodiment 9. The video receiving means 2 (1502) receives the input signal 2 based on the channel information output from the channel determining means 1111 and outputs the input video signal 2 to the video display means 1109; the input video signal 2 is used to display the selected channel in full screen. The enlargement/deformation means 1511 performs predetermined image effect processing such as enlargement and deformation on the partial video signal output from the memory input/output control means 1103 and outputs it as the partial video enlargement/deformation signal. The video switching means 1505 switches between the three-dimensional output video signal output from the frame memory means 1108 and the partial video enlargement/deformation signal output from the enlargement/deformation means 1511 and outputs the result as the output video signal to the video display means 1109. The video display means 1109 switches between the output video signal and the input video signal 2 for display.
FIG. 29 is an explanatory diagram of the video switching method according to Embodiments 9 to 11, and FIG. 30 is an explanatory diagram of the video switching method according to Embodiment 12; the two are compared to explain the difference. In FIG. 29, 219 is the input video signal in the case of a 4-split multi-screen, and the three-dimensional objects corresponding to the four partial videos are arranged in a circle and rotated to display a three-dimensional animation. 220 is the image in which the three-dimensional objects are projected onto the display, together with the image of the selected channel; when a channel is selected, the displayed image is immediately switched from the three-dimensional display to the image of circle 1. In FIG. 30, 222 is the input video signal from which the video of circle 1 is selected, and the portions corresponding to the selected circle 1 are shown as input video signals that have been enlarged and deformed; parts common to FIG. 29 are given the same reference numerals and their description is omitted. In FIG. 30, when a channel is selected (circle 1 in the figure), the image is displayed while enlargement and deformation processing is performed using the partial video corresponding to circle 1 selected in step S3, and after a predetermined time in step S4 the screen is smoothly switched to the full-screen image of circle 1. As described above, according to Embodiment 12, the partial video used as the texture in the three-dimensional display is displayed while being enlarged and deformed before switching to the full-screen display, so smooth video switching can be realized. A sketch of such a transition is given below.
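The following sketch illustrates one possible way to realize the smooth transition: the corners of the selected partial video are interpolated from their projected positions toward the full-screen corners. The linear easing, the screen size, and the starting quadrilateral are assumptions.

```python
# Sketch: interpolate the selected partial video's quadrilateral toward full screen.

def interpolate_quad(src_quad, screen_w, screen_h, t):
    """Linearly interpolate the four corners of the selected partial video from
    their projected positions (t == 0) to the full-screen corners (t == 1)."""
    full = [(0, 0), (screen_w, 0), (screen_w, screen_h), (0, screen_h)]
    return [((1 - t) * sx + t * fx, (1 - t) * sy + t * fy)
            for (sx, sy), (fx, fy) in zip(src_quad, full)]

# projected corners of the selected face at the moment of selection (illustrative)
quad = [(250, 120), (470, 140), (470, 340), (250, 360)]
for step in range(5):
    print(interpolate_quad(quad, 720, 480, t=step / 4))
```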
In Embodiment 12 as well, the parameter information is separated into the area cutout information and the three-dimensional coordinate information by the parameter separation means 1102, but the present invention is not limited to this; the parameter information and the area cutout information may be multiplexed into the input signal 1, input to the video receiving means 1 (1508), and separated there.
Embodiment 13.

FIG. 31 is a block diagram showing the configuration of a channel selection device according to Embodiment 13 of the present invention. In FIG. 31, 145 is a selection input means for entering the selection input with which the user selects a channel; 146 is a selected surface determination means that determines, when a selection input is entered from the selection input means 145, which of the plurality of surfaces constituting the three-dimensional rotating object is facing the front on the display screen; and 147 is a correspondence table holding means that holds information indicating the correspondence between the plurality of surfaces constituting the three-dimensional rotating object and the partial-video texture information corresponding to each channel, as well as area cutout information for generating the partial video corresponding to each channel based on an externally input area information parameter. FIG. 32 shows an example of the correspondence table held by the correspondence table holding means 147. 148 is a channel determining means that, based on the information held in the correspondence table holding means 147, determines the channel associated with the surface determined by the selected surface determination means 146 as the channel to be switched to and displayed, and outputs the selected channel information to the video receiving means 150. 150 is a video receiving means that receives an input signal transmitted via broadcast or a network, selects the channel according to the selected channel information, and outputs an input video signal; 152 is a memory means for holding the input video signal; and 151 is a memory input/output control means that writes the input video signal to the memory means 152, outputs a memory control signal to the memory means 152 according to the area cutout information input from the correspondence table holding means 147, and reads out a partial video signal from the memory means 152.
The channel selection device according to Embodiment 13 pastes textures cut out from an input signal transmitted via broadcast or a network onto the surfaces of a three-dimensional rotating object placed in the three-dimensional virtual space, and switches to and displays the channel associated with the surface facing the front of the user's viewpoint. The input video signal is output from the video receiving means 150 to the memory input/output control means 151. The memory input/output control means 151 outputs a memory control signal to the memory means 152 based on the cutout coordinates in the area cutout information, extracts a partial video signal from the input video signal held in the memory means 152, and outputs the partial video signal to the texture holding means 149.
In the channel selection device according to Embodiment 13, when the channel selection operation mode starts, the initial coordinates in the three-dimensional virtual space of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 are read out, and the perspective transformation means 106 performs a perspective transformation of the three-dimensional virtual space containing the three-dimensional rotating object onto the display screen using the initial coordinates and the viewpoint coordinates, and outputs projection plane coordinates. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden areas that are not displayed, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing. The texture mapping means 110 pastes the texture read from the texture holding means 149 onto the raster information after hidden surface processing, in which the depth has been taken into account by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108; the correspondence between each surface of the three-dimensional rotating object and the texture is obtained by reading the correspondence information (surface-to-texture correspondence information) from the correspondence table holding means 147. The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, based on the frame information after texture mapping output by the texture mapping means 110 and the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads the frame information held in the frame buffer 112 at a predetermined timing and displays the screen. As a result, the screen in the initial state of the channel selection operation mode is displayed.
When a rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 reads the parameter before the change (here, the parameter in the initial state) from the parameter holding means 102, changes the parameter, records the changed parameter in the parameter holding means 102, and outputs a counter control signal to the counter means 114. The coordinate conversion means 105 reads the changed parameters recorded in the parameter holding means 102, converts the initial coordinates read from the three-dimensional model coordinate holding means 104 using the changed parameters, and outputs the resulting changed model coordinates to the perspective transformation means 106. The perspective transformation means 106 performs a perspective transformation onto the display screen of the three-dimensional virtual space containing the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as during the initial display operation in the channel selection operation mode, and the screen after the rotation instruction control signal has been input is displayed. For example, if the three-dimensional rotating object has the shape shown in FIG. 2, the object that was displayed with surface 1 facing the front in the initial state is rotated in the direction of the arrow in FIG. 2 in response to a positive-direction rotation instruction control signal. The rotation instruction input means 101 may be configured so that operation of the cursor keys on a remote controller, movement of a mouse, and the like correspond to the rotation of the three-dimensional rotating object. The counter means 114 performs its counting operation according to the counter control signal output from the parameter changing means 103: specifically, when a positive-direction rotation instruction control signal is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one, and when a negative-direction rotation instruction control signal is input, it outputs a counter control signal that decrements the count value by one; the counter means 114 receives the counter control signal and updates the count value it holds.
When the user inputs a selection control signal from the selection input means 145 while the surface associated with the desired channel is facing the front, the selected surface determination means 146 obtains the count value at that moment as count information from the counter means 114, determines from this count information which surface is facing the front at the time the selection control signal is input, and outputs that surface as selected surface information. The channel determining means 148 acquires the selected surface information from the selected surface determination means 146, refers to the surface-to-channel correspondence information held in the correspondence table holding means 147, and outputs the channel corresponding to the surface indicated by the selected surface information to the video receiving means 150 as selected channel information. The video receiving means 150 switches the receiving channel based on the selected channel information and displays the input video signal on the screen display means 113. The three-dimensional rotating object displayed on the screen display means 113 is the rotating object shown in FIG. 2, and the texture information for selecting the channel corresponding to each surface is displayed based on the correspondence table of FIG. 32. For example, the video of channel A displayed for surface 1 is generated, based on the information required for the three-dimensional display shown in FIG. 33, by cutting out the area along the division boundary of partial video A and pasting it onto the rotating object.
As described above, the channel selection device according to Embodiment 13 cuts out partial video signals from an input signal transmitted via broadcast or a network, pastes the partial video signals onto the surfaces of a three-dimensional rotating object, arranges and displays the three-dimensional rotating object in the three-dimensional virtual space, and lets the user rotate the three-dimensional rotating object by giving instructions through a predetermined operation. When the user performs a predetermined selection operation, the surface that most directly faces the user's viewpoint is determined, the channel associated with that surface is selected by referring to the correspondence table, and the corresponding program is displayed on the screen. By using the three-dimensional rotating object in the three-dimensional virtual space, the operation is reminiscent of rolling a cylindrical rotating body in the real world, and an intuitive operating environment that is easy for the user to use can be realized.

In Embodiment 13, an input signal such as a broadcast is input to the video receiving means 150, but the present invention is not limited to this; a parameter separation means may be provided to separate the input signal and the area information parameter, with the input signal supplied to the video receiving means 150 and the area information parameter input to the correspondence table holding means 147.

As described above, the program selection execution device, the data selection execution device, the video display device, and the channel selection device according to the present invention replace the conventional two-dimensional selection display screen with a three-dimensional rotating object that is constructed and rotated in a three-dimensional virtual space. This is reminiscent of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is easy for the user to use can be realized.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A program selective execution device, data selective execution device, and image display device display on a screen a selection object formed by pasting a texture of a still or moving image showing the content of each object of selection onto each face of a 3-D cylindrical rotating body disposed in a 3-D virtual space. The user rotates the selection object by performing a specified rotation instruction operation; when the user gives a specified selection instruction, the face most squarely facing the user's viewpoint is judged and the object of selection corresponding to that face is selected. This construction eliminates the need to shrink each face according to the number of objects of selection, enhances visibility, and provides users with an intuitive operating environment reminiscent of rolling a cylindrical rotating body.

Description

明 細 書 プログラム選択実行装置、 データ選択実行装置、 および映像表示装置、 チャンネル選択装置 技術分野  Description Program selection execution device, data selection execution device, video display device, channel selection device
この発明は、 パソコン等においてプログラムを選択、 実行するプログ ラム選択実行装置、 及びデータを選択、 実行するデータ選択実行装置、 さらにテレビ放送等を受信して番組ガイ ド表示によ りチャンネルを選択 する映像表示装置、 およびチャンネル選択装置に関し、 特に、 使用者が 選択動作を行う際に直感的でなじみ易い操作環境を実現することができ るプログラム選択実行装置、 データ選択実行装置、 および映像表示装置、 チャンネル選択装置に関するものである。 背景技術  The present invention provides a program selection and execution device for selecting and executing a program on a personal computer or the like, a data selection and execution device for selecting and executing data, and further receives a television broadcast or the like and selects a channel by displaying a program guide. In particular, the present invention relates to a video display device and a channel selection device, in particular, a program selection execution device, a data selection execution device, and a video display device capable of realizing an intuitive and familiar operating environment when a user performs a selection operation. It relates to a channel selection device. Background art
Windows (マイク ロ ソフ ト株式会社の登録商標) などに代表される従 来の 2次元のインタ一フェースにおいては、プログラムやデータの選択, 実行は、 メニューなどで 2次元画面上に並列に表示された項目をマウス などのポイ ン ト装置で選択する方法が用いられている。 この方法では、 選択対象の項目が増えると、 表示領域に表示されない項目が生じ、 使用 者は、 選択しょ う とする項目が表示領域に表示されていないときには表 示領域のスク ロール等の操作を行って選択しよ う とする項目を表示領域 に表示させた後に項目をマウスなどのポイン ト装置で選択する必要があ る。  In a conventional two-dimensional interface typified by Windows (registered trademark of Microsoft Corporation), selection and execution of programs and data are displayed in parallel on a two-dimensional screen using menus and the like. A method is used in which the selected item is selected using a pointing device such as a mouse. In this method, when the number of items to be selected increases, some items are not displayed in the display area. When the item to be selected is not displayed in the display area, the user performs an operation such as scrolling the display area. After displaying the item to be selected in the display area, it is necessary to select the item with a point device such as a mouse.
As another form, digital multi-channel broadcasting has advanced, and for the many programs received via broadcast or a network, program guidance is provided by broadcasting a promotion channel with a multi-screen display.
Conventional multi-screen display uses a method in which the display screen is divided into rectangles and a video or channel is assigned to each divided region. To select a video or channel from this multi-screen display, a cursor or selection frame is first displayed to show the user which videos or channels are selectable. The user then moves the cursor or selection frame with an input device such as a cross-shaped button or a mouse and, when it coincides with the desired video or channel, presses a selection button to select it. The selected video or channel is then switched from the multi-screen display to a full-screen display on the display device.
However, while conventional program selection execution devices and data selection execution devices that use menu displays in a two-dimensional interface are easy to operate for users accustomed to personal computers, they are not intuitive for users unfamiliar with such operation, who can be confused by them.
In addition, conventional multi-screen display divides the display screen into rectangles, so as the number of divisions increases, the display size of each video becomes smaller and harder to see, making it difficult for the user to choose a channel. Furthermore, because selecting a channel or video requires operation steps such as moving a cursor, the operation of the selection button becomes more complicated as the number of displayed screens increases.
Accordingly, the selection of programs or data on a personal computer and the selection of programs on broadcast or network services share a common problem: as the selection items become numerous, the operation becomes complicated for the user, quick selection in a short time becomes impossible, and erroneous operations occur easily.
The present invention has been made to solve the above problems, and its object is to provide a program selection execution device, a data selection execution device, a video display device, and a channel selection device that realize an intuitive operating environment, familiar to the user, for programs and data on a personal computer and for broadcast video composed of multiple screens.
Disclosure of the Invention
A program selection execution device according to the present invention (claim 1) comprises: selection object display means for displaying, on a display screen, an image in which a selection object is arranged in a three-dimensional virtual space, the selection object being a three-dimensional rotating body object having a plurality of faces arranged at regular intervals around a central axis, with a texture showing the content of a program pasted onto each face; rotation display control means for giving the selection object display means a rotation display control signal for displaying an image in which the selection object rotates in the three-dimensional virtual space about the central axis; selection input means to which a selection input for selecting a program is input; selected-face determination means for determining, when a selection input is input from the selection input means, which of the plurality of faces of the three-dimensional rotating body object faces the front on the display screen; correspondence table holding means for holding information indicating the correspondence between the faces of the three-dimensional rotating body object and programs; program determination means for determining, on the basis of the information held in the correspondence table holding means, which program is associated with the face determined by the selected-face determination means, and thereby deciding the program to be executed; and program execution means for executing the program decided by the program determination means.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 2), in the program selection execution device of claim 1, the rotation display control means gives the rotation display control signal to the selection object display means in accordance with a rotation instruction input supplied from outside.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
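Purely by way of illustration, the following minimal Python sketch shows how the correspondence table holding means, the selected-face determination means and the program determination and execution means of claims 1 and 2 could fit together; the number of faces, the table contents and the angle convention are assumptions made for the example and are not taken from the specification.

    import subprocess

    # Hypothetical correspondence table: face index -> command line of a program.
    FACE_TO_PROGRAM = {
        0: ["text_editor"],
        1: ["media_player"],
        2: ["calculator"],
        3: ["mail_client"],
    }
    NUM_FACES = len(FACE_TO_PROGRAM)

    def front_face(rotation_deg):
        """Selected-face determination: the face turned most squarely toward
        the viewer for the current rotation angle of the prism-shaped object."""
        step = 360.0 / NUM_FACES
        return int(round(rotation_deg / step)) % NUM_FACES

    def on_select(rotation_deg):
        """Program determination and execution when the selection input arrives."""
        face = front_face(rotation_deg)
        command = FACE_TO_PROGRAM[face]      # look up the correspondence table
        return subprocess.Popen(command)     # program execution means (hypothetical commands)

    if __name__ == "__main__":
        print("front face at 184 degrees:", front_face(184.0))   # -> 2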
According to the present invention (claim 3), in the program selection execution device of claim 1, the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern, and gives the rotation display control signal to the selection object display means on the basis of the information held in the holding means.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized; moreover, since the three-dimensional rotating body object rotates automatically, the user need only pay attention to selecting a program, which makes the operation simpler.
According to the present invention (claim 4), in the program selection execution device of claim 2, the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern; when a rotation instruction input is supplied from outside, the rotation display control signal is given to the selection object display means in accordance with that rotation instruction input, and when no rotation instruction input is supplied, the rotation display control signal is given to the selection object display means on the basis of the information held in the holding means.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized; moreover, since the three-dimensional rotating body object rotates automatically, the user need only pay attention to selecting a program, which makes the operation simpler.
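As an illustration of the rotation display control of claims 3 and 4, the sketch below falls back to a stored rotation pattern whenever no external rotation instruction is supplied; the per-frame step of 0.5 degrees and the input representation are assumptions for the example.

    import itertools

    # Hypothetical automatic rotation pattern (degrees per frame) held by the
    # holding means of claim 3: a slow, constant spin.
    AUTO_PATTERN = itertools.cycle([0.5])

    def rotation_step(user_step_deg):
        """Claim 4 behaviour: follow the external rotation instruction when one
        is supplied, otherwise fall back to the stored pattern of claim 3."""
        if user_step_deg is not None:        # external rotation instruction input
            return user_step_deg
        return next(AUTO_PATTERN)            # automatic rotation

    angle = 0.0
    # Simulated frames: no input, no input, the user drags by +15 degrees, no input.
    for user_step in (None, None, 15.0, None):
        angle = (angle + rotation_step(user_step)) % 360.0
        print(f"object angle: {angle:.1f} deg")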
According to the present invention (claim 5), the program selection execution device of any one of claims 1 to 4 further comprises counter means for counting the number of times the face facing the front on the display screen switches, among the plurality of faces of the three-dimensional rotating body object, as the selection object rotates on the display screen, and for outputting count information; the selected-face determination means determines the face facing the front on the display screen on the basis of the count information output by the counter.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
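Claims 5 to 7 differ only in how the front-facing face is determined: claim 5 counts face switches, while claims 6 and 7, set out below, instead use the depth information computed during rendering or the accumulated rotation angle. The following sketch illustrates all three determinations for an assumed six-face object; the numbers are illustrative only.

    NUM_FACES = 6                      # assumed hexagonal rotating body
    STEP = 360.0 / NUM_FACES

    class FaceSwitchCounter:
        """Claim 5: count how often the rendered front face has switched and
        derive the current front face from that count."""
        def __init__(self, initial_face=0):
            self.count = 0
            self.initial_face = initial_face

        def on_face_switch(self, direction=1):   # +1 or -1 per switch
            self.count += direction

        def front_face(self):
            return (self.initial_face + self.count) % NUM_FACES

    def front_face_from_depth(face_depths):
        """Claim 6: reuse the per-face depth values obtained during display and
        take the face closest to the viewpoint (smallest depth)."""
        return min(range(len(face_depths)), key=lambda i: face_depths[i])

    def front_face_from_angle(rotation_deg):
        """Claim 7: derive the front face from the accumulated rotation angle."""
        return int(round(rotation_deg / STEP)) % NUM_FACES

    counter = FaceSwitchCounter()
    for _ in range(8):
        counter.on_face_switch(1)
    print(counter.front_face())                                   # -> 2
    print(front_face_from_depth([5.2, 4.1, 3.0, 4.1, 5.2, 6.0]))  # -> 2
    print(front_face_from_angle(118.0))                           # -> 2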
According to the present invention (claim 6), in the program selection execution device of any one of claims 1 to 4, the selected-face determination means determines the face facing the front on the display screen on the basis of depth information that the selection object display means obtains when it displays the selection object on the screen.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 7), in the program selection execution device of any one of claims 1 to 4, the selected-face determination means determines the face facing the front on the display screen on the basis of rotation angle information indicating the angle through which the selection object has rotated from its initial state.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 8), the program selection execution device of any one of claims 1 to 4 further comprises screen display switching means for switching the screen display so that, when the selected program has an execution display screen, the execution display screen is displayed when the program is executed.
In a program selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world; in addition, since the execution screen of the selected program is displayed, the selection can easily be confirmed, and an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
A data selection execution device according to the present invention (claim 9) comprises: selection object display means for displaying, on a display screen, an image in which a selection object is arranged in a three-dimensional virtual space, the selection object being a three-dimensional rotating body object having a plurality of faces arranged at regular intervals around a central axis, with a texture showing the content of a data item pasted onto each face; rotation display control means for giving the selection object display means a rotation display control signal for displaying an image in which the selection object rotates in the three-dimensional virtual space about the central axis; selection input means to which a selection input for selecting data is input; selected-face determination means for determining, when a selection input is input from the selection input means, which of the plurality of faces of the three-dimensional rotating body object faces the front on the display screen; first correspondence table holding means for holding information indicating the correspondence between the faces of the three-dimensional rotating body object and data items; data determination means for determining, on the basis of the information held in the first correspondence table holding means, which data item is associated with the face determined by the selected-face determination means, and thereby deciding the data to be opened; second correspondence table holding means for holding information indicating the correspondence between data items and the programs that open them; program determination means for determining, on the basis of the information held in the second correspondence table holding means, the program to be executed in order to open the data decided by the data determination means, and thereby deciding the program to be executed; and program execution means for executing the program decided by the program determination means and opening the data decided by the data determination means.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
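To make the two correspondence tables of claim 9 concrete, the sketch below resolves a selected face first to a data item and then to the program that opens it; the file names, suffix-to-program mapping and helper names are hypothetical and serve only as an example.

    # First correspondence table: face index -> data item shown on that face.
    FACE_TO_DATA = {0: "holiday.mpg", 1: "report.txt", 2: "song.wav"}
    # Second correspondence table: data type -> program that opens it.
    SUFFIX_TO_PROGRAM = {".mpg": "movie_player", ".txt": "text_editor", ".wav": "audio_player"}

    def command_for_face(face):
        """Data determination followed by program determination: resolve the
        selected face to a data item, then the data item to its opener."""
        data = FACE_TO_DATA[face]
        suffix = data[data.rfind("."):]
        program = SUFFIX_TO_PROGRAM[suffix]
        return [program, data]        # what the program execution means would run

    print(command_for_face(1))        # -> ['text_editor', 'report.txt']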
According to the present invention (claim 10), in the data selection execution device of claim 9, the rotation display control means gives the rotation display control signal to the selection object display means in accordance with a rotation instruction input supplied from outside.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 11), in the data selection execution device of claim 9, the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern, and gives the rotation display control signal to the selection object display means on the basis of the information held in the holding means.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 12), in the data selection execution device of claim 10, the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern; when a rotation instruction input is supplied from outside, the rotation display control signal is given to the selection object display means in accordance with that rotation instruction input, and when no rotation instruction input is supplied, the rotation display control signal is given to the selection object display means on the basis of the information held in the holding means.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 13), the data selection execution device of any one of claims 9 to 12 further comprises counter means for counting the number of times the face facing the front on the display screen switches, among the plurality of faces of the three-dimensional rotating body object, as the selection object rotates on the display screen, and for outputting count information; the selected-face determination means determines the face facing the front on the display screen on the basis of the count information output by the counter.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 14), in the data selection execution device of any one of claims 9 to 12, the selected-face determination means determines the face facing the front on the display screen on the basis of depth information that the selection object display means obtains when it displays the selection object on the screen.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 15), in the data selection execution device of any one of claims 9 to 12, the selected-face determination means determines the face facing the front on the display screen on the basis of rotation angle information indicating the angle through which the selection object has rotated from its initial state.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 16), the data selection execution device of any one of claims 9 to 15 further comprises screen display switching means for switching the screen display so that, when the program to be executed has an execution display screen, the execution display screen is displayed when the program is executed.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 17), in the data selection execution device of any one of claims 9 to 16, when the data associated with a face of the three-dimensional rotating body object is moving image data, the selection object display means pastes an image obtained by reproducing that moving image data onto the corresponding face as a texture.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world; in addition, whether a given face is currently selectable can easily be judged from whether the image pasted on it is moving, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
According to the present invention (claim 18), in the data selection execution device of claim 17, the selection object display means pastes, as a texture onto the face of the three-dimensional rotating body object that faces the front on the display screen, a moving image obtained by reproducing the moving image data associated with that face, and pastes, as a texture onto the faces that do not face the front on the display screen, a still image taken from the moving image obtained by reproducing the moving image data associated with each such face.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world; in addition, whether a given face is currently selectable can easily be judged from whether the image pasted on it is moving, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized.
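A possible reading of claim 18 in code: the front-facing face receives a live frame decoded from its moving-image data while every other face receives a fixed still taken from its own moving image. The decoder callables here are stand-ins supplied by the caller, not part of the specification.

    def textures_for_faces(front, num_faces, decode_frame, poster_frame):
        """Give the front-facing face a live frame from its moving-image data
        and every other face a fixed still from its own moving image."""
        return [decode_frame(i) if i == front else poster_frame(i)
                for i in range(num_faces)]

    # Stand-in decoders; a real device would pull frames from a video decoder.
    live = lambda i: f"live frame of video {i}"
    still = lambda i: f"poster frame of video {i}"
    print(textures_for_faces(front=2, num_faces=4,
                             decode_frame=live, poster_frame=still))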
According to the present invention (claim 19), in the data selection execution device of any one of claims 9 to 18, when the data associated with the faces of the three-dimensional rotating body object is audio data, moving image data, or moving image data accompanied by audio, the device comprises data reproduction display means for reproducing and displaying the associated data together with the display of the selection object, the data reproduction display means performing reproduction so that, when the rotation of the selection object causes the face facing most squarely to the front on the display screen to switch from a first face to a second face adjacent to the first face, the reproduction of the data associated with the first face is faded out and the reproduction of the data associated with the second face is faded in.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized; in addition, because the music data or moving image data reproduced as an auxiliary display together with the selection object is never interrupted, a data selection execution device with which the user can select data comfortably can be realized.
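The fade-out/fade-in behaviour of claim 19 can be reduced to a pair of gains driven by how far the object has turned from the first face toward the second; the linear ramp below is one possible choice for the example, not the one prescribed by the specification.

    def crossfade_gains(turn_progress):
        """While the object turns from face A to the adjacent face B, fade A's
        reproduction out and B's in. turn_progress runs from 0.0 (A in front)
        to 1.0 (B in front)."""
        p = min(max(turn_progress, 0.0), 1.0)
        return 1.0 - p, p             # (gain for face A, gain for face B)

    for p in (0.0, 0.25, 0.5, 1.0):
        a, b = crossfade_gains(p)
        print(f"turn {p:.2f}: face A gain {a:.2f}, face B gain {b:.2f}")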
According to the present invention (claim 20), in the data selection execution device of any one of claims 9 to 18, when the data associated with the faces of the three-dimensional rotating body object includes audio data, the device comprises data reproduction display means for reproducing and displaying the associated data together with the display of the selection object, the data reproduction display means moving the reproduction sound source position of the data associated with a first face and the reproduction sound source position of the data associated with a second face in accordance with the movement of the positions of the first and second faces on the display screen when the rotation of the selection object causes the face facing most squarely to the front on the display screen to switch from the first face to the adjacent second face.
In a data selection execution device configured in this way, the use of a three-dimensional rotating body object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so an intuitive operating environment that is familiar even to users not accustomed to personal computers can be realized; in addition, because the music data or moving image data reproduced as an auxiliary display together with the selection object is never interrupted, a data selection execution device with which the user can select data comfortably can be realized.
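One way to picture the moving sound source of claim 20 is a simple constant-power stereo pan driven by the horizontal screen position of the face; the panning law and screen width are assumptions for the example and other spatialization methods would serve equally well.

    import math

    def stereo_gains(face_x, screen_width):
        """Map the horizontal screen position of a face to a constant-power
        left/right pan so its sound appears to move with it."""
        pan = min(max(face_x / screen_width, 0.0), 1.0)   # 0 = left edge, 1 = right edge
        return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)

    # As the face drifts from the centre toward the right edge, its sound follows.
    for x in (320, 480, 640):
        left, right = stereo_gains(x, 640)
        print(f"x={x}: L={left:.2f} R={right:.2f}")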
A video display device according to the present invention (claim 21) comprises: video receiving means for receiving an input signal transmitted via broadcast or a network and outputting an input video signal; memory means for holding the input video signal; memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cut-out information that indicates the position at which a region to be used as a texture is cut out of the input video signal, and reading a partial video signal out of the memory means; parameter separation means for separating the region cut-out information and the three-dimensional coordinate information from parameter information consisting of three-dimensional coordinate information and region cut-out information, outputting the region cut-out information to the memory input/output control means and the three-dimensional coordinate information to object position determination means; object position determination means for placing a three-dimensional object in a three-dimensional virtual space from the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional object in the three-dimensional virtual space; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for texture-mapping the partial video signal onto a predetermined face of the three-dimensional object on the basis of the projection plane coordinate information and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
In a video display device configured in this way, a predetermined region is cut out of the transmitted input video signal and pasted onto a face of an object in a three-dimensional virtual space, so three-dimensional display of video can be realized and the video can be presented in a visually easy-to-understand way.
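The region cut-out and perspective projection steps of claim 21 can be sketched as two small functions; the focal length, the list-of-rows image representation and the coordinate convention (z increasing away from the viewpoint) are assumptions made for the example.

    def cut_region(frame, left, top, width, height):
        """Region cut-out: take the part of the received frame (a list of pixel
        rows) that will be used as a texture."""
        return [row[left:left + width] for row in frame[top:top + height]]

    def project(point, focal_length=2.0):
        """Perspective projection of an object-space point (x, y, z) onto the
        display projection plane; z grows away from the viewpoint."""
        x, y, z = point
        return (focal_length * x / z, focal_length * y / z)

    frame = [[(r, c) for c in range(8)] for r in range(6)]   # tiny stand-in picture
    print(cut_region(frame, left=2, top=1, width=3, height=2))
    print(project((1.0, 0.5, 4.0)))   # one corner of the object's face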
According to the present invention (claim 22), in the video display device of claim 21, the parameter information input to the parameter separation means changes in time series.
In a video display device configured in this way, the three-dimensional rotating body object displayed in the three-dimensional virtual space can be animated, which makes the video display easy to understand at a glance.
According to the present invention (claim 23), the video display device of claim 21 comprises affine transformation means in place of the perspective projection conversion means.
In a video display device configured in this way, the amount of computation can be reduced while the three-dimensional rotating body object constructed in the three-dimensional virtual space retains a certain sense of depth.
A video display device according to the present invention (claim 24) comprises: video receiving means for receiving an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network, and outputting an input video signal; memory means for holding the input video signal; memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cut-out information that indicates the positions at which regions to be used as textures are cut out of the input video signal and that corresponds to the predetermined number of partial videos, and reading partial video signals out of the memory means; parameter separation means for separating, on the basis of parameter output control information, the region cut-out information and the three-dimensional coordinate information from parameter information consisting of three-dimensional coordinate information corresponding to the predetermined number of partial videos and region cut-out information, outputting the region cut-out information to the memory input/output control means and the three-dimensional coordinate information to object position determination means; object position determination means for placing a three-dimensional object in a three-dimensional virtual space from the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional object in the three-dimensional virtual space; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for outputting, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional object on the basis of the projection plane coordinate information, the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
In a video display device configured in this way, regions are cut out of a video signal transmitted as a multi-screen picture along the division boundaries of the multi-screen and pasted onto faces of an object in the three-dimensional virtual space, so three-dimensional display of a plurality of videos can be realized and the video can be presented in a visually easy-to-understand way.
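For the multi-screen input of claim 24, the region cut-out information can be generated from the mosaic geometry; the sketch below assumes a regular grid of equally sized partial videos, which is an assumption of the example rather than a requirement of the claim.

    def mosaic_regions(frame_w, frame_h, cols, rows):
        """Produce one region cut-out rectangle (left, top, width, height) per
        partial video so each can be texture-mapped onto its own face."""
        w, h = frame_w // cols, frame_h // rows
        return [(c * w, r * h, w, h) for r in range(rows) for c in range(cols)]

    # e.g. a 720x480 promotion channel carrying a 3x2 mosaic of six programmes:
    for i, rect in enumerate(mosaic_regions(720, 480, cols=3, rows=2)):
        print(f"partial video {i}: region {rect}")

A mosaic with unequal tiles would simply be described by a different list of rectangles carried in the parameter information.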
According to the present invention (claim 25), in the video display device of claim 24, the parameter information input to the parameter separation means changes in time series.
In a video display device configured in this way, the three-dimensional rotating body object displayed in the three-dimensional virtual space can be animated, which makes the video display easy to understand at a glance.
According to the present invention (claim 26), the video display device of claim 24 comprises affine transformation means in place of the perspective projection conversion means.
In a video display device configured in this way, the amount of computation can be reduced while the three-dimensional rotating body object constructed in the three-dimensional virtual space retains a certain sense of depth.
A video display device according to the present invention (claim 27) comprises: video receiving means for receiving an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network, and outputting an input video signal; region separation means for separating regions from the input video signal in accordance with region cut-out information that indicates the positions at which regions to be used as textures are cut out and that corresponds to the predetermined number of partial videos, and outputting a memory-storage video signal; memory means for holding the memory-storage video signal; memory input/output control means for writing the memory-storage video signal into the memory means, outputting a memory control signal to the memory means in accordance with the region cut-out information, and reading partial video signals out of the memory means; parameter separation means for separating, on the basis of parameter output control information, the region cut-out information and the three-dimensional coordinate information from parameter information consisting of three-dimensional coordinate information corresponding to the predetermined number of partial videos and region cut-out information, outputting the region cut-out information to the memory input/output control means and the three-dimensional coordinate information to object position determination means; object position determination means for placing a three-dimensional object in a three-dimensional virtual space from the three-dimensional coordinate information and outputting its object coordinate information; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for outputting, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional object on the basis of the projection plane coordinate information, the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
In a video display device configured in this way, when regions are cut out of the video and pasted onto faces of an object in the three-dimensional virtual space, only the cut-out regions are held in memory instead of the whole picture, so the amount of memory can be reduced.
A video display device according to the present invention (claim 28) comprises: video receiving means for receiving an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network, and outputting an input video signal; memory means for holding the input video signal; memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cut-out information that indicates the positions at which regions to be used as textures are cut out of the input video signal, and reading partial video signals out of the memory means; video analysis means for determining the predetermined number from the input video signal and outputting region-count information; parameter generation means for generating, on the basis of the region-count information, parameter information consisting of three-dimensional coordinate information and region cut-out information and, on the basis of parameter output control information, outputting the region cut-out information to the memory input/output control means and the three-dimensional coordinate information to object position determination means; object position determination means for placing a three-dimensional object in a three-dimensional virtual space from the three-dimensional coordinate information and outputting its object coordinate information; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for outputting, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional object on the basis of the projection plane coordinate information, the parameter output control information to the parameter generation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for displaying the output video signal.
In a video display device configured in this way, the number of divisions of the video transmitted as a multi-screen picture is recognized after reception and the shape information of the three-dimensional object is generated automatically according to that number, so video with several different multi-screen layouts can be handled.
A video display device according to the present invention (claim 29) comprises: video receiving means for selectively receiving, on the basis of channel information, an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network, and outputting an input video signal; memory means for holding the input video signal; memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cut-out information that indicates the positions at which regions to be used as textures are cut out of the input video signal and that corresponds to the predetermined number of partial videos, and reading partial video signals out of the memory means; parameter separation means for separating, on the basis of parameter output control information, the region cut-out information and the three-dimensional coordinate information from parameter information consisting of three-dimensional coordinate information corresponding to the predetermined number of partial videos, region cut-out information, and channel correspondence information indicating the correspondence between objects and channels, outputting the region cut-out information to the memory input/output control means, the three-dimensional coordinate information to object position determination means, and the channel correspondence information to channel determination means; object position determination means for placing three-dimensional objects in a three-dimensional virtual space from the three-dimensional coordinate information, outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space and, at the same time, outputting object arrangement order information derived from the object coordinate information in accordance with user input; object position comparison means for comparing the positions of the objects using the object arrangement order information and outputting, to the channel determination means, selected-object information identifying the object selected under a predetermined condition; channel determination means for determining the channel corresponding to the selected object from the selected-object information and the channel correspondence information and outputting channel information; perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means for outputting, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and video display means for switching between, and displaying, the output video signal and the input video signal output from the video receiving means.
In a video display device configured in this way, partial videos are cut out of the multi-screen input video, each is pasted as a texture onto a face of an object in the three-dimensional virtual space, and animation display is performed by moving this three-dimensional object. Furthermore, when the user presses the selection button, the display switches to full-screen display of the channel associated with the face displayed at the position closest to the viewpoint in the three-dimensional virtual space, thereby realizing channel selection.
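A compact reading of the selection path of claims 29 and 30: compare the placed faces' positions, take the one nearest the viewpoint when the user presses the selection button, and resolve it to a channel through the channel correspondence information. The face positions and channel numbers below are invented for the example.

    # Hypothetical channel correspondence information: face index -> channel number.
    FACE_TO_CHANNEL = {0: 101, 1: 102, 2: 103, 3: 104}

    def select_channel(face_positions, viewpoint=(0.0, 0.0, 0.0)):
        """Compare the placed faces' positions, take the one nearest the
        viewpoint, and resolve it to a channel."""
        def dist2(p):
            return sum((a - b) ** 2 for a, b in zip(p, viewpoint))
        nearest = min(FACE_TO_CHANNEL, key=lambda i: dist2(face_positions[i]))
        return FACE_TO_CHANNEL[nearest]

    positions = {0: (0, 0, 9), 1: (2, 0, 5), 2: (0, 0, 3), 3: (-2, 0, 5)}
    print(select_channel(positions))   # face 2 is nearest -> channel 103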
According to the present invention (claim 30), in the video display device of claim 29, the object position determination means selects the face whose position is closest to the viewpoint.
In a video display device configured in this way, using the three-dimensional rotating body object in the three-dimensional virtual space, channel selection is realized by switching, when the user presses the selection button, to full-screen display of the channel associated with the face displayed at the position closest to the viewpoint in the three-dimensional virtual space.
A video display device according to this invention (claim 31) comprises: first video receiving means for receiving a first input signal transmitted via broadcasting or a network and outputting a first input video signal composed of a predetermined number of partial videos; second video receiving means for selectively receiving, on the basis of channel information, a second input signal transmitted via broadcasting or a network and outputting a second input video signal; memory means for holding the first input video signal; memory input/output control means for writing the first input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cut-out information which indicates the positions at which regions to be used as textures are cut out of the input video signal and which corresponds to the predetermined number of partial videos, and reading partial video signals out of the memory means; parameter separating means for separating, on the basis of parameter output control information, the region cut-out information and three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos, the region cut-out information, and channel correspondence information indicating the correspondence between objects and channels, and for outputting the region cut-out information to the memory input/output control means, the three-dimensional coordinate information to object position determining means, and the channel correspondence information to channel determining means; object position determining means for arranging three-dimensional objects in a three-dimensional virtual space from the three-dimensional coordinate information, outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space, and at the same time outputting object arrangement order information derived from the object coordinate information in accordance with user input; object position comparing means for comparing the positions of the objects using the object arrangement order information and outputting, to the channel determining means, selected object information identifying an object selected under a predetermined condition; channel determining means for determining the channel corresponding to the selected object from the selected object information and the channel correspondence information and outputting channel information; perspective projection transforming means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information; rasterizing means which, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, outputs the parameter output control information to the parameter separating means a number of times corresponding to the predetermined number of partial videos, and generates and outputs a three-dimensional video signal; frame memory means for holding the three-dimensional video signal and outputting a three-dimensional output video signal at a predetermined timing; enlarging/deforming means for enlarging and deforming a partial video signal and outputting a partial video enlarged/deformed signal; video switching means for switching between the three-dimensional output video signal and the partial video enlarged/deformed signal at a predetermined timing and outputting an output video signal; and video display means for switching between the output video signal and the second input video signal for display.
In a video display device having such a configuration, when switching to a full-screen display of the selected channel, the partial video that was used as a texture in the three-dimensional display is first enlarged and deformed for display and the display is then switched to full screen, so that a smooth video transition can be realized.
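As a rough illustration of the region cut-out step described above, the following is a minimal sketch (not the patent's implementation) of how a predetermined number of partial videos might be sliced out of a composite input frame so that each slice can serve as a texture; the region table, frame layout, and NumPy representation are assumptions made only for this example.

```python
import numpy as np

# Hypothetical region cut-out table: one (x, y, width, height) entry per
# partial video, in pixels within the composite input frame.
REGION_CUTOUT = [
    (0,   0,   360, 240),   # partial video 0
    (360, 0,   360, 240),   # partial video 1
    (0,   240, 360, 240),   # partial video 2
    (360, 240, 360, 240),   # partial video 3
]

def cut_partial_videos(frame: np.ndarray, regions=REGION_CUTOUT):
    """Slice each partial video out of the composite frame so that it can be
    used as the texture of one face of the three-dimensional object."""
    textures = []
    for (x, y, w, h) in regions:
        textures.append(frame[y:y + h, x:x + w].copy())
    return textures

if __name__ == "__main__":
    composite = np.zeros((480, 720, 3), dtype=np.uint8)  # stand-in input frame
    print([t.shape for t in cut_partial_videos(composite)])
```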
A channel selection device according to this invention (claim 32) comprises: video receiving means for receiving an input signal transmitted via broadcasting or a network, selecting a channel on the basis of selected channel information output from channel determining means, and outputting an input video signal; memory means for holding the input video signal; memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cut-out information supplied from correspondence table holding means, and reading partial video signals out of the memory means; selection object display means for displaying on a display screen an image in which a selection object is placed in a three-dimensional virtual space, the selection object being a three-dimensional rotating object having a plurality of faces arranged at regular intervals around a central axis, with a partial image indicating the contents of a channel selected and pasted as a texture on each face; rotation display control means for giving the selection object display means a rotation display control signal for displaying an image in which the selection object rotates in the three-dimensional virtual space about the central axis; selection input means to which a selection input for selecting a channel is given; selected face determining means for determining, when a selection input is given from the selection input means, which of the plurality of faces constituting the three-dimensional rotating object faces the front on the display screen; correspondence table holding means for holding information indicating the correspondence among the plurality of faces constituting the three-dimensional rotating object, the texture information of the partial image corresponding to each channel, and the region cut-out information for generating the partial image corresponding to each channel on the basis of an externally supplied region information parameter; and channel determining means for determining, on the basis of the information held in the correspondence table holding means, the channel associated with the face determined by the selected face determining means, deciding the channel to which the display should be switched, and outputting the selected channel information to the video receiving means.
In a channel selection device having such a configuration, the use of a three-dimensional rotating object in a three-dimensional virtual space evokes the image of rolling a cylindrical rotating body in the real world, so that an intuitive operation environment that is easy for users to become familiar with can be realized.
The invention according to claim 33 is the channel selection device of claim 32 further provided with parameter separating means for separating the region information parameter from the input signal when the region information parameter is supplied multiplexed onto the input signal.
In a channel selection device having such a configuration, an input signal such as a broadcast and the region information parameter can be received at a single point and then separated.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 1 of the present invention.
FIG. 2 is a diagram showing an example of the three-dimensional rotating object placed in the three-dimensional virtual space in the program selection and execution device, the data selection and execution device, the video display device, and the channel selection device according to the present invention.
FIG. 3 is a diagram showing an example of the correspondence table held by the correspondence table holding means of the program selection and execution device according to Embodiment 1.
FIG. 4 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 2 of the present invention.
FIG. 5 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 3 of the present invention.
FIG. 6 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 4 of the present invention.
FIG. 7 is a diagram for explaining the determination of the front face in the program selection and execution device according to Embodiment 4.
FIG. 8 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 5 of the present invention.
FIG. 9 is a block diagram showing the configuration of a data selection and execution device according to Embodiment 6 of the present invention.
FIG. 10 is a diagram showing an example of the correspondence table held by the correspondence table holding means of the data selection and execution device according to Embodiment 6.
FIG. 11 is a diagram showing an example of the screen display of the data selection and execution device according to Embodiment 6.
FIG. 12 is a block diagram showing the configuration of a data selection and execution device according to Embodiment 7 of the present invention.
FIG. 13 is a block diagram showing the configuration of a data selection and execution device according to Embodiment 8 of the present invention.
FIG. 14 is a diagram for explaining the operation of the data selection and execution device according to Embodiment 8.
FIG. 15 is a diagram for explaining the operation of the data selection and execution device according to Embodiment 8.
FIG. 16 is a diagram for explaining the operation of the data selection and execution device according to Embodiment 8.
FIG. 17 is a diagram for explaining the operation of the data selection and execution device according to Embodiment 8.
FIG. 18 is a block diagram showing the configuration of a video display device according to Embodiment 9 of the present invention.
FIG. 19 is a conceptual diagram of the three-dimensional display according to Embodiment 9.
FIG. 20 is an explanatory diagram of the information required for the three-dimensional display according to Embodiment 9.
FIG. 21 is an explanatory diagram of the channel selection method according to Embodiment 9.
FIG. 22 is an explanatory diagram of the criterion for channel selection according to Embodiment 9.
FIG. 23 is an explanatory diagram of the difference between the perspective projection transformation and the affine transformation according to Embodiment 9.
FIG. 24 is a block diagram showing the configuration of a video display device according to Embodiment 10 of the present invention.
FIG. 25 is an explanatory diagram of the holding of partial videos in memory according to Embodiment 10.
FIG. 26 is a block diagram showing the configuration of a video display device according to Embodiment 11 of the present invention.
FIG. 27 is an explanatory diagram of the generation of three-dimensional information according to Embodiment 11.
FIG. 28 is a block diagram showing the configuration of a video display device according to Embodiment 12 of the present invention.
FIG. 29 is an explanatory diagram of the video switching method according to Embodiments 9 to 11.
FIG. 30 is an explanatory diagram of the video switching method according to Embodiment 12.
FIG. 31 is a block diagram showing the configuration of a channel selection device according to Embodiment 13 of the present invention.
FIG. 32 is a diagram showing an example of the correspondence table held by the correspondence table holding means of the channel selection device according to Embodiment 13.
FIG. 33 is an explanatory diagram of the information required for the three-dimensional display according to Embodiment 13.

BEST MODE FOR CARRYING OUT THE INVENTION
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of the program selection and execution device according to Embodiment 1 of the present invention.
In FIG. 1, reference numeral 101 denotes rotation instruction input means for inputting an instruction to rotate the three-dimensional rotating object in the three-dimensional virtual space; 102 denotes parameter holding means for holding the parameters used to rotate the three-dimensional rotating object; and 103 denotes parameter changing means which, on the basis of a rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameters from the parameter holding means 102, changes them, records them in the parameter holding means 102 as post-change parameters, and outputs a counter control signal. In Embodiment 1, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103 function as rotation display control means. Reference numeral 104 denotes three-dimensional model coordinate holding means for holding the coordinate information of the objects constituting the three-dimensional virtual space, including the three-dimensional rotating object; 105 denotes coordinate transformation means which reads the parameter information from the parameter holding means 102, reads the three-dimensional model coordinates from the three-dimensional model coordinate holding means 104, performs coordinate transformation, and outputs post-change model coordinates; and 106 denotes perspective transformation means which, using the post-change model coordinates output from the coordinate transformation means 105 and the viewpoint coordinates, performs a perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. Reference numeral 107 denotes hidden surface processing means which reads the projection plane coordinates from the perspective transformation means 106, eliminates regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and post-hidden-surface-processing raster information; 108 denotes depth information holding means for holding the depth information extracted by the hidden surface processing means 107; and 109 denotes texture holding means for holding the textures to be pasted on the respective faces. In this embodiment, each texture pasted on the three-dimensional rotating object is an image for identifying the corresponding program, such as the program name or an icon image associated with the program. Reference numeral 110 denotes texture mapping means which, on the basis of the depth information held by the depth information holding means 108, pastes the textures read from the texture holding means 109 onto the post-hidden-surface-processing raster information in which the depth information has been taken into account by the hidden surface processing means 107. Reference numeral 111 denotes rendering means which, on the basis of the depth information held by the depth information holding means 108, draws all pixel information, such as the color and brightness of each pixel, into the post-texture-mapping frame information output by the texture mapping means 110; 112 denotes a frame buffer for holding the frame information drawn by the rendering means 111; and 113 denotes screen display means for outputting and displaying the frame information held in the frame buffer 112 at a predetermined timing. In Embodiment 1, the three-dimensional model coordinate holding means 104 through the screen display means 113 function as selection object display means for displaying on the display screen an image in which a selection object, namely a three-dimensional rotating object having a plurality of faces arranged at regular intervals around a central axis with a texture indicating the contents of a program pasted on each face, is placed in the three-dimensional virtual space. Further, 114 denotes counter means which changes its count in response to the counter control signal from the parameter changing means 103; 115 denotes selection input means with which the user decides on and inputs the program to be selected; 116 denotes selected face determining means for determining the selected face on the basis of the count information from the counter means 114 and the selection control signal from the selection input means 115; and 117 denotes correspondence table holding means for holding a correspondence table indicating the correspondence between each face constituting the three-dimensional rotating object and a program (face-program correspondence information) and between each face and a texture (face-texture correspondence information). FIG. 3 shows an example of the correspondence table held by the correspondence table holding means 117. Reference numeral 118 denotes program determining means which, from the selected face information output by the selected face determining means 116, determines the program to be executed by referring to the correspondence information (face-program correspondence information) read from the correspondence table holding means 117, and 119 denotes program executing means which executes a program on the basis of the selected program information selected by the program determining means 118.
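The correspondence table of FIG. 3 itself is not reproduced in this text, but a minimal sketch of such a face-program and face-texture table and of the lookup performed on it might look as follows; the program names and texture file names are hypothetical placeholders, not values from the patent.

```python
# Hypothetical face-to-program and face-to-texture tables in the spirit of
# the correspondence table of FIG. 3 (the actual table format is not shown).
FACE_PROGRAM = {
    1: "mail_client",
    2: "web_browser",
    3: "photo_viewer",
    4: "music_player",
    5: "word_processor",
    6: "settings",
}
FACE_TEXTURE = {face: f"icon_{name}.png" for face, name in FACE_PROGRAM.items()}

def program_for_face(face: int) -> str:
    """Program determination: map the selected face to the program to run."""
    return FACE_PROGRAM[face]

def texture_for_face(face: int) -> str:
    """Texture side of the table: the image pasted on each face."""
    return FACE_TEXTURE[face]

if __name__ == "__main__":
    print(program_for_face(3), texture_for_face(3))
```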
Next, the operation of the program selection and execution device according to Embodiment 1 will be described. The device assigns a program to each face of the three-dimensional rotating object placed in the three-dimensional virtual space and rotates the object; when the user performs a predetermined operation, it starts the program associated with the face that most directly faces the user's viewpoint.
In the program selection and execution device according to Embodiment 1, when the program selection operation mode starts, the initial coordinates of the three-dimensional rotating object in the three-dimensional virtual space held by the three-dimensional model coordinate holding means 104 are read out, and the perspective transformation means 106 uses these initial coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. That is, at the initial display operation of the program selection operation mode, the coordinate transformation means 105 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 as they are, without transforming them. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, eliminates regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and post-hidden-surface-processing raster information. The texture mapping means 110 pastes the textures read from the texture holding means 109 onto the post-hidden-surface-processing raster information, in which the depth information has been taken into account by the hidden surface processing means 107, on the basis of the depth information held by the depth information holding means 108. The correspondence between each face of the three-dimensional rotating object and a texture is obtained by reading the correspondence information (face-texture correspondence information) from the correspondence table holding means 117. The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, into the post-texture-mapping frame information output by the texture mapping means 110 on the basis of the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads the frame information held in the frame buffer 112 at a predetermined timing and displays the screen. As a result, the screen of the initial state of the program selection operation mode is displayed.
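As a rough illustration of the perspective transformation step in this pipeline, the following sketch projects model coordinates onto a display projection plane with a simple pinhole model; the viewpoint position and focal length are assumed values for the example, not parameters taken from the patent.

```python
import numpy as np

def perspective_project(points, viewpoint=np.array([0.0, 0.0, -5.0]), focal=2.0):
    """Project 3-D model coordinates onto a display projection plane.

    Small pinhole-camera sketch: translate the points into the viewpoint's
    frame (camera looking along +z) and divide x and y by the depth.
    """
    p = np.asarray(points, dtype=float) - viewpoint
    z = p[:, 2]
    screen = np.empty((len(p), 2))
    screen[:, 0] = focal * p[:, 0] / z
    screen[:, 1] = focal * p[:, 1] / z
    return screen, z   # z can serve as depth information for hidden-surface tests

if __name__ == "__main__":
    face_corners = [(-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]
    xy, depth = perspective_project(face_corners)
    print(xy.round(3), depth)
```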
FIG. 2 shows an example of the three-dimensional rotating object placed in the three-dimensional virtual space in the program selection and execution device according to Embodiment 1. In the present invention, the three-dimensional rotating object placed in the three-dimensional virtual space is a three-dimensional object composed of a plurality of faces, each arranged at regular intervals around a central axis. In FIG. 2, the three-dimensional rotating object has six faces; in FIG. 2(a) the central axis of rotation is oriented horizontally in the three-dimensional virtual space, and in FIG. 2(b) it is oriented vertically.
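A minimal sketch of how the faces of such a rotating object might be laid out at equal angular intervals around the central axis is given below; the convention that face 1 faces the viewer at 0 degrees is an assumption made only for illustration.

```python
def face_normal_angles(num_faces: int = 6) -> dict:
    """Angles (degrees) of the outward normals of the faces of a rotating body
    whose faces sit at equal intervals around the central axis.

    Face 1 is taken to face the viewer at 0 degrees; whether the axis lies
    horizontally (FIG. 2(a)) or vertically (FIG. 2(b)) only changes the
    direction in which the angle is swept, not the spacing.
    """
    step = 360.0 / num_faces
    return {face: (face - 1) * step for face in range(1, num_faces + 1)}

if __name__ == "__main__":
    print(face_normal_angles(6))   # {1: 0.0, 2: 60.0, 3: 120.0, 4: 180.0, 5: 240.0, 6: 300.0}
```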
When the user inputs a rotation instruction control signal from the rotation instruction input means 101 while the initial screen is displayed, the parameter changing means 103, on the basis of the rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameters (here, the parameters of the initial state) from the parameter holding means 102, changes them, records them in the parameter holding means 102 as post-change parameters, and outputs a counter control signal to the counter means 114. The coordinate transformation means 105 reads the post-change parameters recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 using the post-change parameters, and outputs the resulting post-change model coordinates to the perspective transformation means 106. The perspective transformation means 106 uses the post-change model coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as in the initial display operation of the program selection operation mode, and the screen reflecting the rotation instruction control signal is displayed. For example, when the three-dimensional rotating object has the shape shown in FIG. 2 and face 1 is displayed facing the front in the initial state, inputting a rotation instruction control signal in the positive direction rotates the object in the direction of the arrow in FIG. 2 so that an image with face 2 facing the front is displayed, while inputting a rotation instruction control signal in the negative direction rotates the object in the direction opposite to the arrow in FIG. 2 so that an image with face 6 facing the front is displayed.
As the rotation instruction input means 101, the operation of the cursor keys of a remote control or keyboard may be associated with the rotation of the three-dimensional rotating object, or the movement of a mouse may be associated with it. For example, if the three-dimensional rotating object is the one shown in FIG. 2(a), the up and down cursor keys of the remote control or keyboard may be associated with upward rotation (the direction opposite to the arrow in FIG. 2(a)) and downward rotation (the direction of the arrow in FIG. 2(a)) of the object, or forward and backward movement of the mouse may be associated with upward and downward rotation of the object. Alternatively, if the device is operated with a mouse having a rotary button called a wheel, such as the Microsoft IntelliMouse, forward and backward rotation of the wheel may be associated with upward and downward rotation of the three-dimensional rotating object. If the device is operated with a trackball, forward and backward rotation of the trackball may be associated with upward and downward rotation of the object. If the device is operated by an input means using speech recognition, voice inputs such as "up" and "down" or the like may be associated with upward and downward rotation of the object.
During the rotation instruction control signal input operation, the counter means 114 performs a counting operation in response to the counter control signal output by the parameter changing means 103. Specifically, for example, when a rotation instruction control signal in the positive direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one, and when a rotation instruction control signal in the negative direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that decrements the count value of the counter means 114 by one; the counter means 114 receives this counter control signal and changes the count value it holds accordingly.
When the user inputs a selection control signal from the selection input means 115 while the face displaying the program to be started faces the front, the selected face determining means 116 obtains the count value at that moment from the counter means 114 as count information, determines from this count information which face is facing the front when the selection control signal is input, and outputs this face as selected face information. For example, when the three-dimensional rotating object has the shape shown in FIG. 2, the selected face determining means 116 determines that the face facing the front is face 1 in the initial state (count value "0") or when the remainder of dividing the count value by 6 is "0"; that it is face 2, 3, 4, 5, or 6 when the remainder is "1", "2", "3", "4", or "5", respectively; and that it is face 6, 5, 4, 3, or 2 when the remainder is "-1", "-2", "-3", "-4", or "-5", respectively.
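A minimal sketch of this count-to-face determination, assuming six faces and the mapping described above, might look as follows.

```python
def front_face_from_count(count: int, num_faces: int = 6) -> int:
    """Selected-face determination from the rotation counter.

    Python's % operator already returns a non-negative remainder, so a count
    of -1 yields remainder 5, i.e. face 6, matching the mapping in the text
    (remainder 0 -> face 1, 1 -> face 2, ..., -1 -> face 6, -2 -> face 5, ...).
    """
    return (count % num_faces) + 1

if __name__ == "__main__":
    for c in (0, 1, 5, 6, -1, -5):
        print(c, "->", front_face_from_count(c))
```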
The program determining means 118 obtains the selected face information from the selected face determining means 116, refers to the face-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the face indicated by the selected face information as selected program information.

The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
As described above, in the program selection and execution device according to Embodiment 1, a selection object, obtained by pasting a texture indicating the contents of a program on each face of a three-dimensional rotating object placed in a three-dimensional virtual space, is displayed on the screen; the object is rotated in response to the user's predetermined operations while the number of rotation instruction operations is counted; and when the user performs a predetermined selection operation, the face most directly facing the user's viewpoint is determined from the count value, and the program associated with that face is selected by referring to the correspondence table and started. By using a three-dimensional rotating object in a three-dimensional virtual space in this way, the image of rolling a cylindrical rotating body in the real world is evoked, and an intuitive operation environment that is easy to become familiar with, even for users unaccustomed to personal computers, can be realized.
Although the example of the three-dimensional rotating object placed in the three-dimensional virtual space in the program selection and execution device according to Embodiment 1 has six faces, with the central axis of rotation oriented horizontally or vertically in the three-dimensional virtual space, the number of faces constituting the three-dimensional rotating object is not limited to six; it may be two to five, or seven or more, and the displayed rotating body may be changed to match the number of programs to be handled. When there are more programs than faces of the rotating body, all programs may be made selectable by sequentially switching the program information pasted on the faces at a predetermined timing, or only specific programs, such as frequently used ones, may be selected and displayed. The central axis of rotation may also be placed in an oblique or other orientation in the three-dimensional virtual space.
Embodiment 2.
FIG. 4 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 2 of the present invention.

In FIG. 4, the same reference numerals as in FIG. 1 denote the same or corresponding parts. Reference numeral 120 denotes rotation angle change pattern holding means which holds a rotation angle change pattern for sequentially changing the parameters so as to rotate the three-dimensional rotating object in the three-dimensional virtual space, and which outputs the changed parameters sequentially in response to requests from the coordinate transformation means 121. In Embodiment 2, this rotation angle change pattern holding means 120 functions as the rotation display control means. The coordinate transformation means 121 receives the display end signal output by the screen display means 113 and requests the rotation angle change pattern holding means 120 to output the changed parameter information; using the changed parameter information output by the rotation angle change pattern holding means 120 in response to this request, it performs coordinate transformation of the three-dimensional model coordinates and outputs the transformed model coordinates, and it outputs a counter control signal to the counter means each time it performs a coordinate transformation.
Next, the operation of the program selection and execution device according to Embodiment 2 will be described. In the program selection and execution device according to Embodiment 2, instead of the user inputting rotation instructions, the object is rotated automatically at a predetermined rotational angular velocity.
In the program selection and execution device according to Embodiment 2, when the program selection operation mode starts, the initial coordinates of the three-dimensional rotating object in the three-dimensional virtual space held by the three-dimensional model coordinate holding means 104 are read out, and the perspective transformation means 106 uses these initial coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. That is, at the initial display operation of the program selection operation mode, the coordinate transformation means 121 outputs the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 as they are, without transforming them. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, eliminates regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and post-hidden-surface-processing raster information. The texture mapping means 110 pastes the textures read from the texture holding means 109 onto the post-hidden-surface-processing raster information on the basis of the depth information held by the depth information holding means 108; the correspondence between each face of the three-dimensional rotating object and a texture is obtained by reading the correspondence information (face-texture correspondence information) from the correspondence table holding means 117. The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, into the post-texture-mapping frame information output by the texture mapping means 110 on the basis of the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112. The screen display means 113 reads the frame information held in the frame buffer 112 at a predetermined timing, displays the screen (the image of the initial state of the program selection operation mode), and, when the display operation is completed, issues a display end signal to the coordinate transformation means 121.

On receiving the display end signal from the screen display means 113, the coordinate transformation means 121 requests the rotation angle change pattern holding means 120 to output parameters. In response to the request from the coordinate transformation means 121, the rotation angle change pattern holding means 120 outputs, on the basis of the rotation angle change pattern it holds, parameters changed so that the three-dimensional rotating object rotates from a state in which one face faces the front to a state in which the adjacent face faces the front. The coordinate transformation means 121 receives the changed parameters output by the rotation angle change pattern holding means 120, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 using the changed parameters, outputs the resulting post-change model coordinates to the perspective transformation means 106, and outputs a counter control signal to the counter means 114. The counter means 114 performs a counting operation in response to the counter control signal output by the coordinate transformation means 121. The perspective transformation means 106 uses the post-change model coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space including the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as in the display operation of the initial state of the program selection operation mode, and a screen in which the three-dimensional rotating object has rotated by a predetermined angle from the initial state is displayed. For example, when the three-dimensional rotating object has the shape shown in FIG. 2 and face 1 is displayed facing the front in the initial state, the object rotates in the direction of the arrow in FIG. 2 and an image with face 2 facing the front is displayed. When the image display operation is completed, the screen display means 113 issues a display end signal to the coordinate transformation means 121. The coordinate transformation, perspective transformation, hidden surface processing, texture mapping, rendering, and screen display are thereby repeated, and an image in which the three-dimensional rotating object, with a texture indicating the program contents pasted on each face, rotates automatically is displayed on the screen.
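A minimal sketch of this automatic rotation loop, assuming a six-face object, a 60-degree step between adjacent faces, and an illustrative 5-degree increment per displayed frame as the rotation angle change pattern, might look as follows.

```python
import itertools

FACE_STEP_DEG = 60.0    # angle between adjacent faces of a six-face body
FRAME_STEP_DEG = 5.0    # assumed per-frame increment held as the change pattern

def rotation_angle_pattern():
    """Rotation angle change pattern: yields the next rotation angle each time
    the display of a frame finishes (the display end signal arrives)."""
    for i in itertools.count(1):
        yield i * FRAME_STEP_DEG

def auto_rotate(num_frames: int) -> int:
    """Drive the automatic rotation and count completed face-to-face steps."""
    counter = 0
    pattern = rotation_angle_pattern()
    for _ in range(num_frames):
        angle = next(pattern)
        if angle % FACE_STEP_DEG == 0:   # another face has come to the front
            counter += 1
        # ...coordinate transformation, perspective transformation, hidden
        # surface processing, texture mapping, and rendering would go here...
    return counter

if __name__ == "__main__":
    print(auto_rotate(36))   # 36 frames * 5 deg = 180 deg, i.e. 3 face steps
```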
The operations of the selected face determining means 116, the program determining means 118, and the program executing means 119 when the user inputs a selection control signal from the selection input means 115 while the face displaying the program to be started faces the front are the same as in the program selection and execution device according to Embodiment 1. That is, the selected face determining means 116 obtains the count value at that moment from the counter means 114 as count information, determines from this count information which face is facing the front when the selection control signal is input, and outputs this face as selected face information. The program determining means 118 obtains the selected face information from the selected face determining means 116, refers to the face-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the face indicated by the selected face information as selected program information. The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
As described above, in the program selection and execution device according to Embodiment 2, a selection object, obtained by pasting a texture indicating the contents of a program on each face of a three-dimensional rotating object placed in a three-dimensional virtual space, is displayed on the screen; the parameters are repeatedly and automatically changed so that the object rotates from a state in which one face faces the front to a state in which the adjacent face faces the front, thereby automatically rotating the object on the screen while counting how many times the parameters have been changed; and when the user performs a predetermined selection operation, the face most directly facing the user's viewpoint is determined from the count value, and the program associated with that face is selected by referring to the correspondence table and started. By using a three-dimensional rotating object in a three-dimensional virtual space, the image of rolling a cylindrical rotating body in the real world is evoked, so that an intuitive operation environment that is easy to become familiar with, even for users unaccustomed to personal computers, can be realized; moreover, since the three-dimensional rotating object rotates automatically, the user only has to pay attention to selecting a program, which makes the operation even simpler.
In Embodiment 2, the rotation angle change pattern always changes the angle by a constant amount, but a rotation angle change pattern may also be used in which the rotation is paused when a face of the three-dimensional rotating object comes to face the front and the rotation angle is changed again after a fixed time has elapsed.
The device may also be provided with the means for manual rotation instruction of the program selection and execution device according to Embodiment 1 (the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103), so that the object is normally rotated in response to the user's operations and, when the user has performed no operation for a predetermined time, a timer is started to measure that time and automatic rotation begins once it is exceeded. With such a configuration, after automatic rotation has started, the rotation may be stopped and a program selected in response to a user operation, as sketched below.
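A minimal sketch of such a combined manual/automatic scheme, with an assumed idle threshold of 10 seconds, might look as follows; the class and method names are illustrative only.

```python
import time

IDLE_SECONDS = 10.0   # assumed idle threshold before automatic rotation starts

class RotationController:
    """Manual rotation normally; automatic rotation after the user is idle."""

    def __init__(self):
        self.last_input = time.monotonic()
        self.auto = False

    def on_user_rotation(self, direction: int) -> int:
        """User pressed a rotation key: stop auto mode and apply the step."""
        self.last_input = time.monotonic()
        self.auto = False
        return direction           # +1 or -1 face step

    def tick(self) -> int:
        """Called once per displayed frame; returns the step to apply."""
        if not self.auto and time.monotonic() - self.last_input > IDLE_SECONDS:
            self.auto = True
        return 1 if self.auto else 0   # auto mode advances one step per tick

if __name__ == "__main__":
    ctrl = RotationController()
    print(ctrl.on_user_rotation(+1))   # manual step
    print(ctrl.tick())                 # 0 until the idle threshold passes
```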
Embodiment 3.
FIG. 5 is a block diagram showing the configuration of a program selection and execution device according to Embodiment 3 of the present invention.

In FIG. 5, the same reference numerals as in FIG. 1 denote the same or corresponding parts. Reference numeral 122 denotes depth information holding means for holding the depth information extracted by the hidden surface processing means 107, and 123 denotes selected face determining means for determining the selected face on the basis of the depth information from the depth information holding means 122 and the selection control signal from the selection input means 115.
Next, the operation of the program selection and execution device according to Embodiment 3 will be described. In the program selection and execution device according to Embodiment 1, the selected face (the face facing the front) is determined by counting the number of rotation instructions; in the program selection and execution device according to Embodiment 3, instead of the rotation instruction count value, the face most directly facing the user's viewpoint is determined on the basis of the depth information obtained during hidden surface processing. In the program selection and execution device according to Embodiment 3, the display of the screen in the initial state of the program selection operation mode and the operation in response to input of a rotation instruction control signal are exactly the same as in the program selection and execution device according to Embodiment 1, and their description is therefore omitted.
In the program selection and execution device according to Embodiment 3, when the user inputs a selection control signal from the selection input means 115 while the face displaying the program to be started faces the front, the selected face determining means 123 obtains the depth information at that moment from the depth information holding means 122, determines from this depth information which face is facing the front when the selection control signal is input, and outputs this face as selected face information. For example, when the three-dimensional rotating object has the shape shown in FIG. 2, the selected face determining means 123 determines that the face placed nearest to the viewpoint in the depth information is the face facing the front most directly.
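A minimal sketch of this depth-based determination, assuming each face is represented by a single depth value taken from the depth information (smaller meaning nearer to the viewpoint), might look as follows.

```python
def front_face_from_depth(face_depths: dict) -> int:
    """Pick the face whose representative depth is nearest to the viewpoint.

    face_depths maps face index -> depth value taken from the depth
    information produced during hidden surface processing (smaller = nearer).
    """
    return min(face_depths, key=face_depths.get)

if __name__ == "__main__":
    depths = {1: 5.2, 2: 3.0, 3: 4.1, 4: 6.5, 5: 6.0, 6: 4.3}
    print(front_face_from_depth(depths))   # face 2 is nearest, so it is selected
```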
The program determining means 118 obtains the selected face information from the selected face determining means 123, refers to the face-program correspondence information held in the correspondence table holding means 117, and outputs the program corresponding to the face indicated by the selected face information as selected program information.

The program executing means 119 executes the program specified by the selected program information input from the program determining means 118.
As described above, in the program selection and execution device according to Embodiment 3, a selection object, obtained by pasting a texture indicating the contents of a program on each face of a three-dimensional rotating object placed in a three-dimensional virtual space, is displayed on the screen; the object is rotated in response to the user's predetermined operations; and when the user performs a predetermined selection operation, the face most directly facing the user's viewpoint is determined on the basis of the depth information obtained during hidden surface processing, and the program associated with that face is selected by referring to the correspondence table and started. By using a three-dimensional rotating object in a three-dimensional virtual space, the image of rolling a cylindrical rotating body in the real world is evoked, and an intuitive operation environment that is easy to become familiar with, even for users unaccustomed to personal computers, can be realized.
なお、 本実施の形態 3では、 選択用ォブジェク トが 3次元仮想空間内 で上記中心軸を回転の中心と して回転する画像を表示するための回転表 示制御信号を与える回転表示制御手段と して回転指示入力手段 1 0 1, パラメータ保持手段 1 0 2, パラメ一タ変更手段 1 0 3を備えたもの、 すなわち手動で回転指示入力を行う ものについて示したが、 実施の形態 2によるプログラム選択実行装置のよ うに回転角変化パターン保持手段 1 2 0を設け、 回転表示制御を自動で行う よ う にしても良いことは言う までもなレ、。  In the third embodiment, the selection object provides a rotation display control unit that supplies a rotation display control signal for displaying an image that rotates around the central axis in the three-dimensional virtual space. In this embodiment, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103 are provided, that is, the means for manually inputting the rotation instruction has been described. Needless to say, the rotation angle change pattern holding means 120 may be provided as in the selection execution device, and the rotation display control may be automatically performed.
実施の形態 4 · Embodiment 4
第 6図は本発明の実施の形態 4によるプログラム選択実行装置の構成 を示すブロ ック図である。  FIG. 6 is a block diagram showing a configuration of a program selection and execution device according to a fourth embodiment of the present invention.
第 6図において第 1図と同一符号は同一又は相当部分である。 1 2 4 は回転指示入力手段 1 0 1 からの回転指示制御信号に基づき、 パラメ一 タ保持手段 1 0 2から変更前パラメータを読み込み、 パラメータを変更 し変更後パラメータ と してパラメ一タ保持手段 1 0 2に記録し、 回転角 情報を出力するパラメータ変更手段である。 1 2 5はパラメータ変更手 段 1 2 4からの回転角情報, 選択入力手段 1 1 5からの選択制御信号, 及ぴ回転角一面対応保持手段 1 2 6からの回転角一面対応情報とに基づ いて、 選択された面を判定する選択面判定手段である。  6, the same reference numerals as those in FIG. 1 denote the same or corresponding parts. 1 2 4 reads the parameter before change from the parameter holding means 102 based on the rotation instruction control signal from the rotation instruction input means 101, changes the parameter, and changes the parameter as the parameter after change. This is a parameter changing means that records the information in 102 and outputs the rotation angle information. 125 is based on the rotation angle information from the parameter change means 124, the selection control signal from the selection input means 115, and the rotation angle one-plane correspondence information from the rotation angle one-plane correspondence holding means 126. Thus, it is a selected surface determining means for determining the selected surface.
Next, the operation of the program selection and execution device according to the fourth embodiment will be described. In the program selection and execution device according to the first embodiment, the face to be selected (the face turned toward the front) is determined by counting the number of rotation instructions. In the program selection and execution device according to the fourth embodiment, instead of using the count value of rotation instructions, the face turned most directly toward the user's viewpoint is determined from the correspondence between the rotation angle and the face index.
本実施の形態 4によるプログラム選択実行装置において、 プログラム 選択動作モー ドの初期状態の画面の表示動作は、 上記実施の形態 1 によ るプログラム選択実行装置と全く 同様であるので、 説明を省略する。 初期状態の画面が表示された状態で、 ユーザが回転指示入力手段 1 0 1 よ り回転指示制御信号を入力する と、 パラメータ変更手段 1 2 4は回 転指示入力手段 1 0 1からの回転指示制御信号に基づき、 パラメータ保 持手段 1 0 2から変更前パラメータ (ここでは初期状態のパラメ一タ) を読み込み、 パラメータを変更し変更後パラメータ と してパラメ一タ保 持手段 1 0 2に記録する。 こ こで、 上記実施の形態 1 によるプログラム 選択実行装置ではパラメータ変更手段がカウンタ手段 1 1 4に対しカウ ンタ制御信号を出力するよ うにしていたが、 本実施の形態 4によるプロ グラム選択実行装置ではパラメ一タ変更手段 1 2 4は選択面判定手段 1 2 5に対し 3次元回転体物体が初期状態から何度回転したかを示す回転 角情報を出力する。 この後の、 座標変換手段 1 0 5, 透視変換手段 1 0 6, 陰面処理手段 1 0 7, テクスチャマツ ビング手段 1 1 0, レンダリ ング手段 1 1 1 , フ レームバッファ 1 1 2 , 及び画面表示手段 1 1 3が 上記実施の形態 1 によるプログラム選択実行装置と同様の処理を行い、 回転指示制御信号入力後の画面が表示される。 In the program selection and execution device according to the fourth embodiment, the program The display operation of the screen in the initial state of the selection operation mode is exactly the same as that of the program selection execution device according to the first embodiment, and therefore the description is omitted. When the user inputs a rotation instruction control signal from the rotation instruction input unit 101 while the initial screen is displayed, the parameter changing unit 124 changes the rotation instruction from the rotation instruction input unit 101 to the rotation instruction. Based on the control signal, the parameter before the change (here, the parameter in the initial state) is read from the parameter holding means 102, the parameter is changed, and the parameter is recorded in the parameter holding means 102 as the changed parameter. I do. Here, in the program selection and execution device according to the first embodiment, the parameter changing means outputs the counter control signal to the counter means 114. However, the program selection and execution according to the fourth embodiment is performed. In the apparatus, the parameter changing means 124 outputs rotation angle information indicating how many times the three-dimensional rotating object has rotated from the initial state to the selected plane determining means 125. After this, coordinate transformation means 105, perspective transformation means 106, hidden surface processing means 107, texture matching means 110, rendering means 111, frame buffer 112, and screen display The means 113 performs the same processing as the program selection and execution device according to the first embodiment, and the screen after the rotation instruction control signal is input is displayed.
本実施の形態 4によるプログラム選択実行装置において、 起動を所望 するプログラムが表示された面が正面を向いた状態でユーザが選択入力 手段 1 1 5 よ り選択制御信号を入力すると、 選択面判定手段 1 2 5は、 パラメータ変更手段 1 2 4からその時点の回転角情報を取得し、 回転角 一面対応保持手段 1 2 6に保持された回転角—面対応情報を参照して、 選択制御信号が入力された時に正面を向いている面を判定し、 この面を 選択面情報と して出力する。  In the program selection execution device according to the fourth embodiment, when the user inputs a selection control signal from the selection input means 115 while the surface on which the program desired to be activated is displayed faces forward, the selection surface determination means 1 2 5 obtains the rotation angle information at that time from the parameter changing means 1 2 4, and refers to the rotation angle-surface correspondence information held by the rotation angle one-side correspondence holding means 1 26 to obtain the selection control signal. When input, it determines the surface facing forward and outputs this surface as selected surface information.
FIG. 7 is a diagram for explaining an example of the method of determining the face turned toward the front in the program selection and execution device according to the fourth embodiment. FIG. 7 illustrates the determination for the case where the three-dimensional rotating object has the shape shown in FIG. 2, and shows a cross section of that object. In the program selection and execution device according to the fourth embodiment, as shown in FIG. 7(a), the perpendicular drawn from the rotation axis to face 1 in the initial state is taken as the angular reference line; the angle that the perpendicular from the rotation axis to face 1 currently makes with this reference line is detected as the rotation angle, and the face turned toward the front is determined by referring to the correspondence information between rotation angles and faces. The parameter changing means 124 detects this rotation angle and outputs it to the selected face determination means 125 as rotation angle information. The three-dimensional rotating object shown in FIG. 2 is a hexahedron, so when it rotates 60 degrees from a state in which one face is turned toward the front, the next face comes to face the front, and after rotating 360 degrees from the initial state it has made one full turn and returns to the initial state (rotation angle 0 degrees). In this case, the rotation angle-face correspondence information held in the rotation angle-face correspondence holding means 126 only needs to divide the rotation angle range of 0 to 360 degrees into six equal 60-degree ranges and associate faces 1 to 6 with those ranges. Specifically, as shown in FIG. 7(b), face 1 is associated with rotation angles of 0 degrees or more and less than 30 degrees and of 330 degrees or more and less than 360 degrees (0 degrees); face 2 with 30 degrees or more and less than 90 degrees; face 3 with 90 degrees or more and less than 150 degrees; face 4 with 150 degrees or more and less than 210 degrees; face 5 with 210 degrees or more and less than 270 degrees; and face 6 with 270 degrees or more and less than 330 degrees.
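The range-to-face assignment of FIG. 7(b) reduces to a small arithmetic rule, sketched below in Python. The generalisation to an arbitrary number of faces is an assumption; the 60-degree sectors and the 30-degree offset follow the six-face example in the text.

```python
def face_for_rotation_angle(angle_degrees, faces=6):
    """Map a rotation angle to the face considered to face the front.

    Implements the correspondence of FIG. 7(b): each of the six faces owns
    a 60-degree sector centred on the angle at which it faces the viewer
    exactly, so face 1 owns [330, 360) and [0, 30), face 2 owns [30, 90),
    and so on around the object.
    """
    sector = 360.0 / faces
    angle = angle_degrees % 360.0
    return int((angle + sector / 2.0) // sector) % faces + 1


assert face_for_rotation_angle(0) == 1
assert face_for_rotation_angle(45) == 2
assert face_for_rotation_angle(185) == 4
assert face_for_rotation_angle(340) == 1
```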
プログラム決定手段 1 1 8は、 選択面判定手段 1 2 5から選択面情報 を取得し、 対応表保持手段 1 1 7に保持された面一プログラム対応情報 を参照して、 選択面情報で示される面に対応するプログラムを選択プロ グラム情報と して出力する。  The program determining means 1 18 acquires the selected plane information from the selected plane determining means 1 25 and refers to the plane-to-program correspondence information held in the correspondence table holding means 1 17 and is indicated by the selected plane information. Outputs the program corresponding to the surface as selected program information.
プ口グラム実行手段 1 1 9は、 プログラム決定手段 1 1 8から入力さ れる選択プログラム情報で特定されたプログラムを実行する。  The program executing means 1 19 executes the program specified by the selected program information input from the program determining means 1 18.
As described above, in the program selection and execution device according to the fourth embodiment, an object for selection, formed by pasting onto each face of a three-dimensional rotating object placed in a three-dimensional virtual space a texture indicating the content of a program, is displayed on the screen; the three-dimensional rotating object is rotated in response to instructions given by the user through a predetermined operation; and when the user performs a predetermined selection operation, the face turned most directly toward the user's viewpoint is determined on the basis of rotation angle information indicating how far the three-dimensional rotating object has rotated from its initial state, the program associated with that face is selected by referring to the correspondence table, and that program is started. By using a three-dimensional rotating object in a three-dimensional virtual space, the device can evoke the image of rolling a cylindrical rotating body in the real world, and can therefore provide an intuitive operating environment that is easy to become familiar with even for users who are not accustomed to personal computers.
なお、 本実施の形態 4では、 選択用ォブジェク 卜が 3次元仮想空間内 で上記中心軸を回転の中心と して回転する画像を表示するための回転表 示制御信号を与える回転表示制御手段と して回転指示入力手段 1 0 1, パラメ一タ保持手段 1 0 2, パラメ一タ変更手段 1 2 4を備えたもの、 すなわち手動で回転指示入力を行う ものについて示したが、 実施の形態 2によるプログラム選択実行装置のよ う に回転角変化パターン保持手段 1 2 0を設け、 回転表示制御を自動で行う よ うにしても良いことは言う までもない。  In the fourth embodiment, the selection object is a rotation display control unit that supplies a rotation display control signal for displaying an image that rotates around the center axis in the three-dimensional virtual space. In this embodiment, a rotation instruction input means 101, a parameter holding means 102, and a parameter changing means 124 are provided, that is, a means for manually inputting a rotation instruction. It is needless to say that the rotation angle change pattern holding means 120 may be provided as in the program selection and execution device according to the above, and the rotation display control may be automatically performed.
実施の形態 5 . Embodiment 5
第 8図は本発明の実施の形態 5 によるプログラム選択実行装置の構成 を示すブロ ック図である。  FIG. 8 is a block diagram showing a configuration of a program selection and execution device according to Embodiment 5 of the present invention.
第 8図において第 1図と同一符号は同一又は相当部分である。 1 2 7 はプログラム決定手段 1 1 8によ り選択された選択プログラム情報に基 づきプログラムを実行するプログラム実行手段であり、 本実施の形態 5 では、 プログラム実行画面情報が画面表示切り替え手段 1 2 8に対し出 力される。 画面表示切り替え手段 1 2 8はプログラム実行手段 1 2 7が 出力するプログラム実行画面情報を受け、 フ レームバッファ 1 1 2から のフ レーム情報と切り替え, 又は合成して画面表示手段 1 1 3に対し出 力するものである。  8, the same reference numerals as those in FIG. 1 denote the same or corresponding parts. Reference numeral 127 denotes a program executing means for executing a program based on the selected program information selected by the program determining means 118. In the fifth embodiment, the program execution screen information is changed to a screen display switching means. 8 is output. The screen display switching means 1 28 receives the program execution screen information output from the program execution means 127 and switches or combines the frame information with the frame information from the frame buffer 112 to the screen display means 113. Output.
次に本実施の形態 5によるプログラム選択実行装置の動作について説 明する。 本実施の形態 5によるプロ グラム選択実行装置は、 プログラム が実行時に表示画面を有する場合、 プログラムが選択された際に、 3次 元仮想空間の表示を切り替えて、 プログラム実行画面を表示するよ うに したものである。 Next, the operation of the program selection and execution device according to the fifth embodiment will be described. When the program has a display screen at the time of execution, the program selection execution device according to the fifth embodiment is configured to perform a tertiary program when the program is selected. The display of the original virtual space is switched to display the program execution screen.
本実施の形態 5によるプログラム選択実行装置において、 プログラム 選択動作モー ドの初期状態の画面の表示, 及び回転指示制御信号の入力 による動作は、 上記実施の形態 1 によるプログラム選択実行装置と全く 同様であるので、 説明を省略する。  In the program selection and execution device according to the fifth embodiment, the display of the screen in the initial state of the program selection operation mode and the operation by inputting the rotation instruction control signal are exactly the same as those of the program selection and execution device according to the first embodiment. Explanations are omitted.
本実施の形態 5によるプログラム選択実行装置において、 起動を所望 するプロダラムが表示された面が正面を向いた状態でユーザが選択入力 手段 1 1 5 よ り選択制御信号を入力すると、 選択面判定手段 1 1 6は、 カウンタ手段 1 1 4からその時点の力ゥン ト値をカウン ト情報と して取 得し、 このカ ウン ト情報に基づいて選択制御信号が入力された時に正面 を向いている面を判定し、 この面を選択面情報と して出力する。 プログ ラム決定手段 1 1 8は、選択面判定手段 1 1 6から選択面情報を取得し、 対応表保持手段 1 1 7に保持された面一プログラム対応情報を参照して. 選択面情報で示される面に対応するプログラムを選択プログラム情報と して出力する。 プログラム実行手段 1 2 7は、 プログラム決定手段 1 1 8から入力される選択プログラム情報で特定されたプログラムを実行す る。 このときプログラム実行手段 1 2 7はプロダラムの実行画面情報を 画面表示切り替え手段 1 2 8に対して出力する。 画面表示切り替え手段 1 2 8はプログラム実行手段 1 2 7が出力するプログラム実行画面情報 を受け、 フ レームバッ ファ 1 1 2からのフ レーム情報と切り替えて画面 表示手段 1 1 3に対し出力する。  In the program selection and execution device according to the fifth embodiment, when the user inputs a selection control signal from the selection input means 115 while the surface on which the program desired to be activated is displayed faces forward, the selection surface determination means 1 16 obtains the current count value from the counter means 114 as count information, and turns to the front when a selection control signal is input based on this count information. The selected plane is determined, and this plane is output as selected plane information. The program determination means 1 18 acquires the selected plane information from the selected plane determination means 1 16 and refers to the plane-to-program correspondence information held in the correspondence table holding means 1 17. The program corresponding to the surface to be output is output as selected program information. The program executing means 127 executes the program specified by the selected program information input from the program determining means 118. At this time, the program execution means 127 outputs the execution screen information of the program to the screen display switching means 128. The screen display switching means 1 28 receives the program execution screen information output from the program execution means 127 and switches to the frame information from the frame buffer 112 to output to the screen display means 113.
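The behaviour of the screen display switching means 128 described above can be modelled with the small Python sketch below; the class name and the treatment of frames as opaque values are assumptions made for illustration only.

```python
class ScreenDisplaySwitcher:
    """Small model of the screen display switching means 128.

    While no program execution screen is available it forwards the frames
    rendered into the frame buffer (the three-dimensional virtual space);
    once the program execution means 127 supplies an execution screen it
    forwards that instead.  Frames are treated as opaque values here.
    """

    def __init__(self):
        self.program_frame = None   # latest program execution screen, if any

    def set_program_frame(self, frame):
        self.program_frame = frame

    def clear_program_frame(self):
        self.program_frame = None

    def compose(self, frame_buffer_frame):
        # Switching variant: show the program screen when present, else the
        # rendered virtual space.  A windowed variant would combine the two.
        if self.program_frame is not None:
            return self.program_frame
        return frame_buffer_frame


switcher = ScreenDisplaySwitcher()
assert switcher.compose("3d-scene-frame") == "3d-scene-frame"
switcher.set_program_frame("program-screen-frame")
assert switcher.compose("3d-scene-frame") == "program-screen-frame"
```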
As described above, in the program selection and execution device according to the fifth embodiment, an object formed by pasting onto each face of a three-dimensional rotating object placed in a three-dimensional virtual space a texture indicating the content of a program is displayed on the screen; the three-dimensional rotating object is rotated in response to instructions given by the user through a predetermined operation; when the user performs a predetermined selection operation, the face turned most directly toward the user's viewpoint is determined, the program associated with that face is selected by referring to the correspondence table, and that program is started; and when the selected program has a display screen at the time of execution, the display of the three-dimensional virtual space is replaced by the program execution screen. By using a three-dimensional rotating object in a three-dimensional virtual space, the device can evoke the image of rolling a cylindrical rotating body in the real world, and since the execution screen of the selected program is displayed, the selection can easily be confirmed, realizing an intuitive operating environment that is easy to become familiar with even for users who are not accustomed to personal computers.
なお、 上記実施の形態 5では、 プログラム実行画面を表示する際に、 3次元仮想空間の表示に替えて、 プログラム実行画面を全画面表示する ものについて示したが、 全画面表示に切り替えるのではなく 、 3次元仮 想空間が表示されている画面上に 2次元矩形領域(ゥィ ン ドウ)を別途作 成し、 3次元仮想空間と併せて表示するよ うにしてもよい。  In the fifth embodiment, when the program execution screen is displayed, the program execution screen is displayed in full screen in place of the display of the three-dimensional virtual space. Alternatively, a two-dimensional rectangular area (window) may be separately created on the screen on which the three-dimensional virtual space is displayed, and displayed together with the three-dimensional virtual space.
また、 表示の切り替え方法と して、 プログラム実行画面をテクスチャ と して貼り付けた矩形物体を生成し、 選択された時点での 3次元回転体 物体の面の表示から、 全画面表示に対応する位置まで、 途中を補間して アニメーショ ン表示して画面表示を切り替えるよ うにしても良い。  Also, as a method of switching the display, a rectangular object with the program execution screen pasted as a texture is generated, and from the display of the 3D rotating object surface at the time of selection, it corresponds to full screen display The screen display may be switched by animating and displaying the animation by interpolating the way to the position.
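A possible shape for that interpolated zoom is sketched below in Python. The specification only calls for interpolating between the selected face and the full screen; the linear interpolation, the rectangle representation and the frame count used here are assumptions.

```python
def zoom_rectangles(face_rect, screen_rect, steps):
    """Yield intermediate rectangles that grow the selected face out to the
    full screen for the animated hand-over described above.

    Rectangles are (x, y, width, height) tuples in screen coordinates; the
    intermediate positions are produced by simple linear interpolation.
    """
    for i in range(1, steps + 1):
        t = i / steps
        yield tuple(a + (b - a) * t for a, b in zip(face_rect, screen_rect))


# Example: animate from the face as displayed at selection time to a
# 640x480 full screen over 12 frames.
for rect in zoom_rectangles((200, 150, 160, 120), (0, 0, 640, 480), 12):
    pass  # each rect would place the rectangle textured with the program screen
```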
また、 本実施の形態 5では、 選択用ォブジェク 卜が 3次元仮想空間内 で上記中心軸を回転の中心と して回転する画像を表示するための回転表 示制御信号を与える回転表示制御手段と して回転指示入力手段 1 0 1 , パラメータ保持手段 1 0 2, パラメータ変更手段 1 0 3 を備えたもの、 すなわち手動で回転指示入力を行う ものについて示したが、 実施の形態 2によるプログラム選択実行装置のよ うに回転角変化パターン保持手段 1 2 0を設け、 回転表示制御を自動で行う よ うにしても良いことは言う までもなレ、。  Further, in the fifth embodiment, the selection object provides a rotation display control unit that supplies a rotation display control signal for displaying an image that rotates around the central axis in the three-dimensional virtual space. In this example, the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103 are provided, that is, the rotation instruction input is manually performed. It goes without saying that the rotation angle change pattern holding means 120 may be provided as in the case of the apparatus, and the rotation display control may be automatically performed.
Further, in the fifth embodiment, the selected face determination means 116 determines the face turned toward the front on the display screen on the basis of the count information output by the counter means 114. Needless to say, however, a configuration may also be adopted in which the face turned toward the front on the display screen is determined on the basis of depth information, as in the program selection and execution device according to the third embodiment, or on the basis of rotation angle information, as in the program selection and execution device according to the fourth embodiment.
Embodiment 6
第 9図は本発明の実施の形態 6 によるデータ選択実行装置の構成を示 すブロック図である。  FIG. 9 is a block diagram showing a configuration of a data selection execution device according to Embodiment 6 of the present invention.
In FIG. 9, reference numeral 101 denotes rotation instruction input means for inputting an instruction to rotate a three-dimensional rotating object in the three-dimensional virtual space; 102 denotes parameter holding means for holding the parameters used to rotate the three-dimensional rotating object; and 103 denotes parameter changing means which, on the basis of a rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameters from the parameter holding means 102, changes them, records them in the parameter holding means 102 as post-change parameters, and outputs a counter control signal. Reference numeral 104 denotes three-dimensional model coordinate holding means for holding the coordinate information of the objects that make up the three-dimensional virtual space, including the three-dimensional rotating object; 105 denotes coordinate transformation means which reads the parameter information from the parameter holding means 102, reads the three-dimensional model coordinates from the three-dimensional model coordinate holding means 104, performs coordinate transformation, and outputs post-change model coordinates; and 106 denotes perspective transformation means which, using the post-change model coordinates output from the coordinate transformation means 105 and the viewpoint coordinates, performs a perspective transformation of the three-dimensional virtual space containing the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. Reference numeral 107 denotes hidden surface processing means which reads the projection plane coordinates from the perspective transformation means 106, removes the regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and post-hidden-surface-processing raster information; 108 denotes depth information holding means for holding the depth information extracted by the hidden surface processing means 107; and 109 denotes texture holding means for holding the textures to be pasted on the individual faces. In the sixth embodiment, the texture pasted on the three-dimensional rotating object is an image for identifying the corresponding data: for music data, for example, an image displaying the name of the data such as the title of the piece or the name of the performer or composer, an image of the performer or composer obtained by searching a separately provided database, an image suggestive of the piece, or an image using an icon image corresponding to the data may be used, and for image data such as a moving picture, an image using the first portion or a representative portion of the data may be used. Reference numeral 110 denotes texture mapping means which pastes the textures read from the texture holding means 109 onto the post-hidden-surface-processing raster information, in which depth has been taken into account by the hidden surface processing means 107, on the basis of the depth information held by the depth information holding means 108; 111 denotes rendering means which, on the basis of the depth information held by the depth information holding means 108, draws all pixel information such as the color and brightness of each pixel into the post-texture-mapping frame information output by the texture mapping means 110; 112 denotes a frame buffer for holding the frame information drawn by the rendering means 111; and 113 denotes screen display means for outputting and displaying the frame information held in the frame buffer 112 at a predetermined timing. Further, 114 denotes counter means which increments its counter in response to the counter control signal from the parameter changing means 103; 115 denotes selection input means with which the user decides on and inputs the program to be selected; 116 denotes selected face determination means which determines the selected face on the basis of the count information from the counter means 114 and the selection control signal from the selection input means 115; and 129 denotes correspondence table holding means for holding a correspondence table showing the correspondence between each face of the three-dimensional rotating object and data (face-data correspondence information), between data and programs (data-program correspondence information), and between each face and textures (face-texture correspondence information). FIG. 10 shows an example of the correspondence table held by the correspondence table holding means 129. Reference numeral 130 denotes data determining means which, from the selected face information output by the selected face determination means 116, refers to the correspondence information (face-data correspondence information) read from the correspondence table holding means 129, determines the selected data, and outputs selected data information; 131 denotes program determining means which, from the selected data information output by the data determining means 130, refers to the correspondence information (data-program correspondence information) read from the correspondence table holding means 129 and determines the program to be executed; and 132 denotes program execution means which executes a program on the basis of the selected program information selected by the program determining means 131.
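The three relations of the correspondence table of FIG. 10 can be pictured as the following Python sketch; every concrete entry shown is an assumption used purely for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class CorrespondenceTable:
    """Sketch of the three relations kept by the correspondence table
    holding means 129 (cf. FIG. 10): face-to-data, data-to-program and
    face-to-texture.  All concrete entries below are illustrative only.
    """
    face_to_data: dict = field(default_factory=dict)
    data_to_program: dict = field(default_factory=dict)
    face_to_texture: dict = field(default_factory=dict)


table = CorrespondenceTable(
    face_to_data={1: "song_a.mp3", 2: "holiday.mpg"},
    data_to_program={"song_a.mp3": "music_player", "holiday.mpg": "movie_player"},
    face_to_texture={1: "song_a_title.png", 2: "holiday_still.png"},
)
```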
次に本実施の形態 6によるデータ選択実行装置の動作について説明す る。 本実施の形態 6によるデータ選択実行装置は、 3次元仮想空間内に 配置した 3次元回転物体の各面にワープロや表計算などのアプリ ケーシ ヨ ンデータや、 映像や音楽などのマルチメディアデータを割り 当てて回 転させ、 使用者による所定の操作が行われた際に、 使用者の視点に対し て最も正面を向いている面に対応づけられたデータを処理するプロダラ ムを起動し、 選択されたデータを開く ものである。  Next, the operation of the data selection execution device according to the sixth embodiment will be described. The data selection execution device according to the sixth embodiment assigns application data such as word processors and spreadsheets and multimedia data such as video and music to each surface of a three-dimensional rotating object placed in a three-dimensional virtual space. When a predetermined operation is performed by the user, a program that processes the data associated with the surface that is most frontal with respect to the user's viewpoint is started and selected. It opens the data that was created.
本実施の形態 6 によるデータ選択実行装置において、 データ選択動作 モー ドが開始する と、 3次元モデル座標保持手段 1 0 4に保持された 3 次元回転体物体の 3次元仮想空間内における初期座標が読み出され、 透 視変換手段 1 0 6が、 この初期座標と視点座標とを用いて、 3次元回転 体物体を含む 3次元仮想空間の表示画面への透視変換を行い、 投影面座 標を出力する。 すなわち、 プログラム選択動作モー ドの初期表示動作時 には、 座標変換手段 1 0 5は、 3次元モデル座標保持手段 1 0 4から読 み出された初期座標の座標を変換せずにそのまま透視変換手段 1 0 6に 出力する。 陰面処理手段 1 0 7は透視変換手段 1 0 6から投影面座標を 読み込んで、 隠れて表示されない領域を排除し、 表示される領域のみを 抽出して奥行き情報、 および陰面処理後ラスタ情報を出力する。 テクス チヤマツビング手段 1 1 0は陰面処理手段 1 0 7によ り奥行き情報が考 慮された陰面処理後ラスタ情報に対し、 奥行き情報保持手段 1 0 8によ り保持された奥行き情報に基づいて、 テクスチヤ保持手段 1 0 9から読 み込んだテクスチャを貼り付ける。 ここで、 3次元回転体物体の各面と テクスチャとの対応関係は、 対応表保持手段 1 2 9から対応情報 (面一 テクスチャ対応情報) を読み出すこ とによって得る。 レンダリ ング手段 1 1 1 はテクスチャマッピング手段 1 1 0が出力するテクスチャマツピ ング後フレーム情報に、 奥行き情報保持手段 1 0 8によ り保持された奥 行き情報に基づいて、 各画素の色や明るさなどすベての画素情報を描画 する。 レンダリ ング手段 1 1 1 によ り描画されたフレーム情報はフレー ムバッファ 1 1 2に保持され、 画面表示手段 1 1 3はフ レームバッファ 1 1 2に保持されたフ レーム情報を所定のタイ ミ ングで読み出して画面 の表示を行う。 これによ り、 データ選択動作モー ドの初期状態の画面が 表示される。 In the data selection execution device according to the sixth embodiment, when the data selection operation mode starts, the initial coordinates of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 in the three-dimensional virtual space are changed. Using the initial coordinates and the viewpoint coordinates, the perspective transformation means 106 performs perspective transformation to a display screen of a three-dimensional virtual space including the three-dimensional rotating object, and converts the projection plane coordinates. Output. That is, during the initial display operation in the program selection operation mode, the coordinate conversion means 105 does not convert the coordinates of the initial coordinates read from the three-dimensional model coordinate holding means 104 without performing a perspective transformation. Output to means 106. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden and undisplayed areas, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing I do. The texture matching means 110 is based on the depth information held by the depth information holding means 108, based on the depth information held by the depth information holding means 108, with respect to the raster information after hidden surface processing in which the depth information is considered by the hidden surface processing means 107. Paste the texture read from the texture holding means 109. Here, the correspondence between each surface of the three-dimensional rotator object and the texture is obtained by reading the correspondence information (plane-to-texture correspondence information) from the correspondence table holding unit 129. The rendering means 1 1 1 1 outputs the color and the color of each pixel based on the frame information after texture mapping output from the texture mapping means 1 10 and the depth information held by the depth information holding means 1 08. Draws all pixel information such as brightness. The frame information drawn by the rendering means 1 1 1 The screen display means 113 reads out the frame information held in the frame buffer 112 at a predetermined timing and displays the screen. As a result, the screen in the initial state of the data selection operation mode is displayed.
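The drawing path just described (coordinate transformation 105, perspective transformation 106, hidden surface processing 107, texture mapping 110, rendering 111, frame buffer 112) can be summarised as the data flow below. This is only a skeleton under the assumption that each stage is supplied as a callable by the caller; the trivial placeholder stages in the example exist solely to show how data moves from one means to the next.

```python
def render_frame(model_coords, rotation_params, view_params, stages):
    """One pass of the drawing path described above.

    `stages` maps a stage name to a callable; the five entries mirror the
    coordinate transformation (105), perspective transformation (106),
    hidden surface processing (107), texture mapping (110) and rendering
    (111) means.  The stage implementations are supplied by the caller.
    """
    coords = stages["coordinate_transform"](model_coords, rotation_params)
    projected = stages["perspective_transform"](coords, view_params)
    visible, depth = stages["hidden_surface"](projected)   # depth -> holding means 108
    textured = stages["texture_map"](visible, depth)
    return stages["render"](textured, depth)               # frame for the frame buffer 112


# Placeholder stages that simply pass data through, to show the data flow.
demo_stages = {
    "coordinate_transform": lambda coords, params: coords,
    "perspective_transform": lambda coords, view: coords,
    "hidden_surface": lambda coords: (coords, {}),
    "texture_map": lambda faces, depth: faces,
    "render": lambda faces, depth: faces,
}
frame = render_frame([(0.0, 0.0, 1.0)], {"angle": 0.0}, {"eye": (0.0, 0.0, -5.0)}, demo_stages)
```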
初期状態の画面が表示された状態で、 ユーザが回転指示入力手段 1 0 1 よ り回転指示制御信号を入力する と、 パラメータ変更手段 1 0 3は回 転指示入力手段 1 0 1からの回転指示制御信号に基づき、 パラメ一タ保 持手段 1 0 2から変更前パラメータ (こ こでは初期状態のパラメータ) を読み込み、 パラメータを変更し変更後パラメータと してパラメータ保 持手段 1 0 2 に記録し、 力ゥンタ手段 1 1 4に対し力ゥンタ制御信号を 出力する。 座標変換手段 1 0 5は、 パラメ一タ保持手段 1 0 2に記録さ れた変更後パラメ一タを読み出し、 3次元モデル座標保持手段 1 0 4か ら読み出した初期座標の座標を変更後パラメータを用いて変換して得ら れる変更後モデル座標を透視変換手段 1 0 6に出力する。 透視変換手段 1 0 6は、 この変更後モデル座標と視点座標とを用いて、 3次元回転体 物体を含む 3次元仮想空間の表示画面への透視変換を行い、 投影面座標 を出力する。 この後、 陰面処理手段 1 0 7, テクスチャマツビング手段 1 1 0, レンダリ ング手段 1 1 1 , フ レームバッファ 1 1 2 , 及び画面 表示手段 1 1 3が上記データ選択動作モー ドの初期表示動作時と同様の 処理を行い、 回転指示制御信号入力後の画面が表示される。 例えば 3次 元回転体物体が第 2図に示す形状のものである場合、 初期状態において 面 1が正面を向いて表示されていたものが、 正方向の回転指示制御信号 を入力する と、 第 2図中の矢印の方向に回転し面 2が正面を向く画像が 表示され、 負方向の回転指示制御信号を入力すると、 第 2図中の矢印と は逆の方向に回転し面 6が正面を向く 画像が表示される。  When the user inputs a rotation instruction control signal from the rotation instruction input means 101 while the initial screen is displayed, the parameter changing means 103 changes the rotation instruction from the rotation instruction input means 101 to the rotation instruction. Based on the control signal, the parameter before change (here, the parameter in the initial state) is read from the parameter holding means 102, the parameter is changed, and the changed parameter is recorded in the parameter holding means 102 as the changed parameter. It outputs a power counter control signal to the power counter means 114. The coordinate conversion means 105 reads the changed parameters recorded in the parameter holding means 102 and reads the coordinates of the initial coordinates read from the three-dimensional model coordinate holding means 104 after the changed parameters. Then, the modified model coordinates obtained by the transformation are output to the perspective transformation means 106. The perspective transformation means 106 performs perspective transformation to a display screen of a three-dimensional virtual space including the three-dimensional rotating object using the changed model coordinates and the viewpoint coordinates, and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture matting means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the initial display operation in the data selection operation mode. The same process is performed as before, and the screen after the rotation instruction control signal is input is displayed. For example, if the three-dimensional rotating object has the shape shown in Fig. 2, the surface 1 was displayed facing the front in the initial state. (2) An image is displayed in which the screen rotates in the direction of the arrow in FIG. 2 and the surface (2) faces the front. The image facing is displayed.
回転指示入力手段 1 0 1 については、 上記実施の形態 1 と同様、 リ モ コンゃキーボー ドのカーソルキーの操作やマウスの動きなどを 3次元回 転体物体の回転に対応づけるよ う にすればよい。  Regarding the rotation instruction input means 101, as in the first embodiment, the operation of the cursor keys on the remote control keyboard and the movement of the mouse are made to correspond to the rotation of the three-dimensional rotating object. I just need.
During the rotation instruction control signal input operation, the counter means 114 performs a counting operation in response to the counter control signal output by the parameter changing means 103. Specifically, for example, when a rotation instruction control signal in the forward direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one, and when a rotation instruction control signal in the backward direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that decrements the count value of the counter means 114 by one; the counter means 114 receives this counter control signal and changes the count value it holds.
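Reduced to code, the counting rule above amounts to the following Python sketch, assuming the object has six selectable faces and that face 1 faces the front when the count is zero (both assumptions follow the FIG. 2 example rather than any explicit statement in this paragraph).

```python
class RotationCounter:
    """Model of the counter means 114: incremented by one for a forward
    rotation instruction and decremented by one for a backward one.

    Reducing the count modulo the number of faces gives the face that is
    currently turned toward the front.
    """

    def __init__(self, faces=6):
        self.faces = faces
        self.count = 0

    def rotate(self, forward=True):
        # One rotation instruction control signal changes the count by one.
        self.count += 1 if forward else -1

    def front_face(self):
        return self.count % self.faces + 1


counter = RotationCounter()
counter.rotate(); counter.rotate()      # two forward rotation instructions
assert counter.front_face() == 3        # face 1 -> face 2 -> face 3
counter.rotate(forward=False)           # one backward rotation instruction
assert counter.front_face() == 2
```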
処理を所望するデータが表示された面が正面を向いた状態でユーザが 選択入力手段 1 1 5 よ り選択制御信号を入力すると、 選択面判定手段 1 1 6は、 カウンタ手段 1 1 4からその時点のカウン ト値をカウン ト情報 と して取得し、 このカウン ト情報に基づいて選択制御信号が入力された 時に正面を向いている面を判定し、 この面を選択面情報と して出力する。 データ決定手段 1 3 0は、 選択面判定手段 1 1 6から選択面情報を取 得し、 対応表保持手段 1 2 9に保持された面一デ一タ対応情報を参照し て、 選択面情報で示される面に対応するデータを選択データ情報と して 出力する。 プログラム決定手段 1 3 1 は、 データ決定手段 1 3 0から選 択データ情報を取得し、 対応表保持手段 1 2 9に保持されたデータープ 口グラム対応情報を参照して、 選択データ情報で示されるデータを処理 するプログラムを選択プログラム情報と して出力する。  When the user inputs a selection control signal from the selection input means 1 15 with the surface on which the data to be processed is displayed facing front, the selection surface determination means 1 16 The count value at the time is acquired as count information, and based on this count information, the face facing forward when a selection control signal is input is determined, and this face is output as selected face information. I do. The data determination means 130 obtains the selected plane information from the selected plane determination means 1 16 and refers to the plane data correspondence information held in the correspondence table holding means 1 29 to select the selected plane information. The data corresponding to the surface indicated by is output as selected data information. The program deciding means 13 1 acquires the selected data information from the data deciding means 130 and refers to the data program correspondence information held in the correspondence table holding means 1 29 to indicate the selected data information by the selected data information. Outputs the data processing program as selected program information.
プロダラム実行手段 1 3 2は、 プロダラム決定手段 1 3 1 から入力さ れる選択プログラム情報で特定されたプログラムを実行する。  The program executing means 13 2 executes the program specified by the selected program information input from the program determining means 13 1.
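The two-step lookup performed by the data determining means 130 and the program determining means 131, followed by execution of the resolved program, can be pictured as below; the file names and program names are illustrative assumptions.

```python
def resolve_selection(face_index, face_to_data, data_to_program):
    """Resolve a selected face first to its data and then to the program
    that should process that data, mirroring the two lookups made by the
    data determining means 130 and the program determining means 131.
    """
    data = face_to_data[face_index]        # selected data information
    program = data_to_program[data]        # selected program information
    return data, program


# Illustrative entries only; the actual table contents are application data.
data, program = resolve_selection(
    2,
    {1: "letter.doc", 2: "holiday.mpg"},
    {"letter.doc": "word_processor", "holiday.mpg": "movie_player"},
)
assert (data, program) == ("holiday.mpg", "movie_player")
```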
このよ うに本実施の形態 6によるデータ選択実行装置では、 3次元仮 想空間内に配置した 3次元回転体物体の各面にそれぞれデータ内容を示 すテクスチャを貼り付けたものを画面上に表示し、 使用者が所定の操作 によ り指示をすることによ り 3次元回転体物体を回転させると と もに回 転指示操作を何回繰り返したかを力 ゥン ト しておき、 使用者による所定 の選択操作が行われた際に、 使用者の視点に対して最も正面を向いてい る面をカウン ト値よ り判定し、 その面に対応づけられたデータを対応表 を参照して選択し、 この選択されたデータを処理するプログラムを起動 して選択データを開く構成と したから、 3次元仮想空間における 3次元 回転体物体を用いることによ り 、 現実世界の円筒状の回転体を転がすィ メ一ジを連想させることが可能であり、 バソコンに慣れていない使用者 にもなじみ易い直感的な操作環境を実現することができる。 As described above, the data selection execution device according to the sixth embodiment displays on the screen a texture representing the data content attached to each surface of the three-dimensional rotating object placed in the three-dimensional virtual space. The user rotates the three-dimensional rotating object by giving an instruction according to a predetermined operation, and also keeps track of how many times the rotation instruction operation has been repeated. When a predetermined selection operation is performed by the user, the surface that faces the front of the user's viewpoint is determined based on the count value, and the data associated with that surface is determined in the correspondence table. , And a program for processing the selected data is started to open the selected data. Therefore, by using a three-dimensional rotating object in a three-dimensional virtual space, a real-world cylindrical object can be obtained. It is possible to remind the user of the image of rolling a rotating body, thereby realizing an intuitive operating environment that is easy to use even for a user who is not used to a computer.
なお、 上記実施の形態 6では、 対応するデータであることを識別する ための画像 (テクスチャ) を 3次元回転体物体の面に貼り付けることに よってのみ表示しているが、 3次元回転体物体の面にはデータの名前等、 文字による情報を表示したテクスチャを貼り付け、 3次元回転体物体の 面のうち正面を向いている面については、 アイ コン画像や、 動画中から 取り出した静止画像等を用いて作成したテクスチャを、 第 1 1図に示す よ うに表示画面 2 0 0上に 3次元回転体物体と と もに表示するよ うにし ても良い。  In the sixth embodiment, the image (texture) for identifying the corresponding data is displayed only by pasting it on the surface of the three-dimensional rotating object. A texture that displays textual information such as the name of the data is pasted on the surface of the 3D object.For the surface of the 3D rotating object facing the front, an icon image or a still image extracted from the movie The texture created by using such a method may be displayed together with a three-dimensional rotating object on the display screen 200 as shown in FIG.
また、 本実施の形態 6では、 選択用オブジェク トが 3次元仮想空間内 で上記中心軸を回転の中心と して回転する画像を表示するための回転表 示制御信号を与える回転表示制御手段と して回転指示入力手段 1 0 1, パラメータ保持手段 1 0 2, パラメータ変更手段 1 0 3を備えたもの、 すなわち手動で回転指示入力を行う ものについて示したが、 実施の形態 2のよ うに回転角変化パターン保持手段 1 2 0を設け、 回転表示制御を 自動で行う よ うにしても良いこ とは言う までもない。  Further, in the sixth embodiment, a rotation display control means for providing a rotation display control signal for displaying an image in which the selection object rotates around the center axis in the three-dimensional virtual space as a center of rotation, In this case, the rotation instruction input means 101, the parameter holding means 102, and the parameter change means 103 are provided, that is, the rotation instruction input is performed manually. Needless to say, the angle change pattern holding means 120 may be provided to automatically perform the rotation display control.
また、 本実施の形態 6では、 選択面判定手段 1 1 6がカウンタ手段 1 1 4の出力するカウン ト情報に基づいて表示画面上において正面を向い ている面を判定するものについて示したが、 実施の形態 3のよ う に奥行 き情報に基づいて表示画面上において正面を向いている面を判定する構 成, あるいは実施の形態 4のよ うに回転角情報に基づいて表示画面上に おいて正面を向いている面を判定する構成と しても良いことは言うまで もない。  Further, in the sixth embodiment, the selection surface determination unit 1 16 determines the front facing surface on the display screen based on the count information output from the counter unit 114. A configuration in which the face facing the front is determined on the display screen based on the depth information as in Embodiment 3, or on the display screen based on the rotation angle information as in Embodiment 4. It goes without saying that a configuration may be adopted in which the surface facing the front is determined.
Embodiment 7
FIG. 12 is a block diagram showing the configuration of a data selection and execution device according to Embodiment 7 of the present invention.
In FIG. 12, the same reference numerals as in FIG. 9 denote the same or corresponding parts. Reference numeral 134 denotes moving image reproducing means which starts the program indicated by the selected program information output by the program determining means 131, reproduces the moving image data indicated by the selected data information output by the data determining means 130, and outputs the result to the texture holding means 135.
本実施の形態 7によるデータ選択実行装置は、 選択する候補のデータ が動画像の場合、 動画像データをテクスチャと して対応する面に貼り付 けるものであ り、 さ らに、 正面を向いている面は動画像表示を行い、 正 面を向いていない面に関しては、 動画像のうちのある画面を静止画像と して貼り付けるよ うにしたものである。  In the data selection execution device according to the seventh embodiment, when the candidate data to be selected is a moving image, the moving image data is pasted as a texture on a corresponding surface, and the data is further turned to the front. The moving face is displayed as a moving picture, and the face not facing the front face is pasted as a still picture from a moving picture.
これによ り 、 ある時点で選択可能な面がどれかを判断するのに、 面に 貼り付けた画像が動いているかどうかで使用者は容易に判断可能である。 次に本実施の形態 7によるデータ選択実行装置の動作について説明す る。 本実施の形態 7によるデータ選択実行装置は、 選択する候補のデー タが動画像の場合、 動画像データをテクスチヤと して対応する面に貼り 付けるよ うにしたものである。  This allows the user to easily determine which of the surfaces can be selected at any one time, based on whether the image pasted on the surface is moving. Next, the operation of the data selection execution device according to the seventh embodiment will be described. In the data selection execution device according to the seventh embodiment, when the candidate data to be selected is a moving image, the moving image data is pasted on the corresponding surface as a texture.
本実施の形態 7によるデータ選択実行装置において、 データ選択動作 モー ドが開始する と、 3次元モデル座標保持手段 1 0 4に保持された 3 次元回転体物体の 3次元仮想空間内における初期座標が読み出され、 透 視変換手段 1 0 6が、 この初期座標と視点座標とを用いて、 3次元回転 体物体を含む 3次元仮想空間の表示画面への透視変換を行い、 投影面座 標を出力する。 すなわち、 プログラム選択動作モー ドの初期表示動作時 には、 座標変換手段 1 0 5は、 3次元モデル座標保持手段 1 0 4から読 み出された初期座標の座標を変換せずにそのまま透視変換手段 1 0 6 に 出力する。 陰面処理手段 1 0 7は透視変換手段 1 0 6から投影面座標を 読み込んで、 隠れて表示されない領域を排除し、 表示される領域のみを 抽出して奥行き情報、 および陰面処理後ラスタ情報を出力する。 テクス チヤマツビング手段 1 1 0は陰面処理手段 1 0 7によ り奥行き情報が考 慮された陰面処理後ラスタ情報に対し、 奥行き情報保持手段 1 0 8によ り保持された奥行き情報に基づいて、 テクスチヤ保持手段 1 3 5から読 み込んだテクスチャを貼り付ける。 In the data selection execution device according to the seventh embodiment, when the data selection operation mode starts, the initial coordinates of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 in the three-dimensional virtual space are changed. Using the initial coordinates and the viewpoint coordinates, the perspective transformation means 106 performs perspective transformation to a display screen of a three-dimensional virtual space including the three-dimensional rotating object, and converts the projection plane coordinates. Output. That is, during the initial display operation in the program selection operation mode, the coordinate conversion means 105 does not convert the coordinates of the initial coordinates read from the three-dimensional model coordinate holding means 104 without performing a perspective transformation. Output to means 106. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the hidden and undisplayed areas, extracts only the displayed areas, and outputs depth information and raster information after hidden surface processing I do. The texture rubbing means 110 is considered depth information by the hidden surface processing means 107. Based on the depth information held by the depth information holding unit 108, the texture read from the texture holding unit 135 is pasted on the considered rasterized surface raster information.
ここで本実施の形態 7では、 動画像再生手段 1 3 4が、 3次元回転体 物体の各面に内容を表示すべき全てのデータについて、 対応表保持手段 1 2 9に保持される面一データ対応情報, 及びデータ一プログラム対応 情報を参照してこれを再生し、 正面を向いていない面に関しては各デー タの動画像のうちのある画面を静止画像と してテクスチャ保持手段 1 3 5に対し出力し、 正面を向く面に関してはデータを再生し続けて動画像 をテクスチャ保持手段 1 3 5に対して出力する。 例えば 3次元回転体物 体が第 2図に示す形状のものである場合、 初期表示状態では、 動画像再 生手段 1 3 4は、 面 2〜面 6に関しては各データの動画像のう ちのある 画面を静止画像と してテクスチャ保持手段 1 3 5に対し出力し、 面 1 に 関してはデータを再生し続けて動画像をテク スチャ保持手段 1 3 5に対 し出力する。  Here, in the seventh embodiment, the moving image reproducing means 134 outputs the data stored in the correspondence table holding means 129 for all data whose contents are to be displayed on each surface of the three-dimensional rotating object. It refers to the data correspondence information and the data-to-program correspondence information and reproduces it. On the surface that does not face the front, a certain screen in the moving image of each data is regarded as a still image and the texture holding means 1 3 5 For the surface facing the front, the data is continuously reproduced and the moving image is output to the texture holding means 135. For example, when the three-dimensional rotating object has the shape shown in FIG. 2, in the initial display state, the moving image reproducing means 134 outputs the moving image of each data with respect to the surfaces 2 to 6. A certain screen is output to the texture holding means 135 as a still image, and for the surface 1, data is continuously reproduced and a moving image is output to the texture holding means 135.
3次元回転体物体の各面とテクスチャ との対応関係は、 対応表保持手 段 1 2 9から対応情報 (面一テクスチヤ対応情報) を読み出すことによ つて得る。 レンダリ ング手段 1 1 1 はテクスチャマッ ピング手段 1 1 0 が出力するテクスチヤマッピング後フレーム情報に、 奥行き情報保持手 段 1 0 8によ り保持された奥行き情報に基づいて、 各画素の色や明るさ などすベての画素情報を描画する。 レンダリ ング手段 1 1 1 によ り描画 されたフ レーム情報はフ レームバッ ファ 1 1 2に保持され、 画面表示手 段 1 1 3はフ レームバッファ 1 1 2に保持されたフ レーム情報を所定の タイ ミ ングで読み出して画面の表示を行う。 これによ り、 データ選択動 作モー ドの初期状態の画面が表示される。  The correspondence between each surface of the three-dimensional rotating object and the texture can be obtained by reading the correspondence information (plane-texture correspondence information) from the correspondence table holding means 1229. The rendering means 1 1 1 1 and the color 2 of each pixel are based on the frame information after the texture mapping output by the texture mapping means 1 10 and the depth information held by the depth information holding means 1 08. Draws all pixel information such as brightness. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 stores the frame information held in the frame buffer 112 in a predetermined manner. Read it out at the timing and display the screen. As a result, the screen in the initial state of the data selection operation mode is displayed.
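How the moving image reproducing means 134 could hand textures to the texture holding means 135 is sketched below. The `next_frame()` interface and the pre-extracted still images are assumptions introduced for this sketch; the text only states that the front face shows the playing movie while the other faces show a still picture taken from their movies.

```python
def textures_for_faces(front_face, video_sources, still_frames):
    """Choose the texture supplied for each face, as the moving image
    reproducing means 134 does: the face currently turned toward the front
    receives a freshly decoded frame of its movie, every other face keeps a
    single still picture taken from its movie.

    `video_sources` maps a face index to an object with a next_frame()
    method and `still_frames` maps a face index to a pre-extracted still
    image; both interfaces are assumptions made for this sketch.
    """
    textures = {}
    for face, still in still_frames.items():
        if face == front_face:
            textures[face] = video_sources[face].next_frame()   # live movie texture
        else:
            textures[face] = still                              # frozen representative frame
    return textures


class DemoVideoSource:
    """Stand-in video decoder that simply numbers its frames."""
    def __init__(self):
        self.frame_no = 0

    def next_frame(self):
        self.frame_no += 1
        return f"frame-{self.frame_no}"


textures = textures_for_faces(1, {1: DemoVideoSource()}, {1: "still-1", 2: "still-2"})
assert textures == {1: "frame-1", 2: "still-2"}
```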
When the user inputs a rotation instruction control signal from the rotation instruction input means 101 while the initial screen is displayed, the parameter changing means 103 reads the pre-change parameters (here, the parameters of the initial state) from the parameter holding means 102 on the basis of that rotation instruction control signal, changes them, records them in the parameter holding means 102 as post-change parameters, and outputs a counter control signal to the counter means 114. The coordinate transformation means 105 reads the post-change parameters recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 using those parameters, and outputs the resulting post-change model coordinates to the perspective transformation means 106. Using these post-change model coordinates and the viewpoint coordinates, the perspective transformation means 106 performs a perspective transformation of the three-dimensional virtual space containing the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. Thereafter the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112 and the screen display means 113 perform the same processing as in the initial display operation of the data selection operation mode, and the screen after input of the rotation instruction control signal is displayed. For example, when the three-dimensional rotating object has the shape shown in FIG. 2 and face 1 was displayed facing the front in the initial state, inputting a rotation instruction control signal in the forward direction displays an image in which the object rotates in the direction of the arrow in FIG. 2 so that face 2 comes to face the front, and inputting a rotation instruction control signal in the backward direction displays an image in which the object rotates in the direction opposite to the arrow in FIG. 2 so that face 6 comes to face the front. Here, when face 2 comes to face the front, the moving image reproducing means 134 outputs to the texture holding means 135 a single frame of the moving image of each piece of data as a still image for face 1 and for faces 3 to 6, while for face 2 it continues reproducing the data and outputs the moving image to the texture holding means 135. Likewise, when face 6 comes to face the front, the moving image reproducing means 134 outputs a still image for each of faces 1 to 5, and for face 6 it continues reproducing the data and outputs the moving image to the texture holding means 135.
回転指示入力手段 1 0 1 については、 上記実施の形態 1 と同様、 リ モ コンゃキ一ボー ドのカーソルキ一の操作やマウスの動きなどを 3次元回 転体物体の回転に対応づけるよ うにすればよい。  As in the first embodiment, the rotation instruction input means 101 is configured so that the operation of the cursor key on the remote control board, the movement of the mouse, and the like correspond to the rotation of the three-dimensional rotating object. do it.
During the rotation instruction control signal input operation, the counter means 114 performs a counting operation in response to the counter control signal output by the parameter changing means 103. Specifically, for example, when a rotation instruction control signal in the forward direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one, and when a rotation instruction control signal in the backward direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that decrements the count value of the counter means 114 by one; the counter means 114 receives this counter control signal and changes the count value it holds.
処理を所望するデータが表示された面が正面を向いた状態 (動画が表 示された状態) でユーザが選択入力手段 1 1 5 よ り選択制御信号を入力 すると、 選択面判定手段 1 1 6は、 カウンタ手段 1 1 4からその時点の カウン ト値をカウン ト情報と して取得し、 このカウン ト情報に基づいて 選択制御信号が入力された時に正面を向いている面を判定し、 この面を 選択面情報と して出力する。  When the user inputs a selection control signal from the selection input unit 115 while the surface on which the data desired to be processed is displayed is facing forward (a state in which a moving image is displayed), the selected surface determination unit 116 is selected. Obtains the count value at that time as the count information from the counter means 114, determines the face facing the front when the selection control signal is input based on this count information, and Outputs the plane as selected plane information.
データ決定手段 1 3 0は、 選択面判定手段 1 1 6から選択面情報を取 得し、 対応表保持手段 1 2 9に保持された面一データ対応情報を参照し て、 選択面情報で示される面に対応するデータを選択データ情報と して 出力する。 プログラム決定手段 1 3 1 は、 データ決定手段 1 3 0から選 択データ情報を取得し、 対応表保持手段 1 2 9に保持されたデータープ ログラム対応情報を参照して、 選択データ情報で示されるデータを処理 するプログラムを選択プログラム情報と して出力する。  The data determination means 130 obtains the selected plane information from the selected plane determination means 1 16 and refers to the plane-to-data correspondence information held in the correspondence table holding means 1 29 to indicate the selected plane information. The data corresponding to the surface to be output is output as selected data information. The program determining means 13 1 obtains the selected data information from the data determining means 130, refers to the data program correspondence information held in the correspondence table holding means 1 29, and refers to the data indicated by the selected data information. The program that processes is output as selected program information.
The moving image reproducing means 134 executes the program specified by the selected program information input from the program determination means 131 and reproduces the selected data.
As described above, in the data selection execution device according to the seventh embodiment, a selection object is displayed on the screen in which a texture of the reproduced moving image of the corresponding data is pasted on the surface of the three-dimensional rotating object, placed in the three-dimensional virtual space, that faces the front on the display screen, and textures of still images of the corresponding data are pasted on the other surfaces. When the user gives an instruction by a predetermined operation, the three-dimensional rotating object is rotated and the number of times the rotation instruction operation has been repeated is counted; when the user performs a predetermined selection operation, the surface facing the user's viewpoint most directly is determined from the count value, the data associated with that surface is selected by referring to the correspondence table, and the program that processes the selected data is started to open it. By using a three-dimensional rotating object in a three-dimensional virtual space, the device evokes the image of rolling a cylindrical rotating body in the real world, and the user can easily tell which surface is selectable at a given time simply by whether the image pasted on it is moving, so that an intuitive operation environment that is easy to become familiar with, even for users unaccustomed to personal computers, can be realized.
Note that the seventh embodiment has been described for a configuration in which the rotation display control means, which gives a rotation display control signal for displaying an image of the selection object rotating in the three-dimensional virtual space about the above-mentioned central axis, comprises the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103, that is, a configuration in which the rotation instruction is input manually. Needless to say, however, rotation angle change pattern holding means may be provided as in the second embodiment so that the rotation display control is performed automatically.
Also, the seventh embodiment has been described for a configuration in which the selected surface determination means 116 determines the surface facing the front on the display screen based on the count information output by the counter means 114. Needless to say, however, the surface facing the front on the display screen may instead be determined based on depth information as in the third embodiment, or based on rotation angle information as in the fourth embodiment.
Embodiment 8.
FIG. 13 is a block diagram showing the configuration of a data selection execution device according to Embodiment 8 of the present invention.
In FIG. 13, the same reference numerals as in FIG. 9 denote the same or corresponding parts. Reference numeral 136 denotes next selected surface determination means which receives, from the selected surface determination means 116, selected surface information indicating the currently selectable surface (the surface determined to be facing the front), determines which surface will become selectable next as the three-dimensional rotating object rotates, and outputs next selected surface information indicating that surface. Reference numeral 137 denotes first data determination means which receives the selected surface information from the selected surface determination means 116, determines the data corresponding to the currently selectable surface by referring to the correspondence information (surface-data correspondence information) read from the correspondence table holding means 129, and outputs selected data information. Reference numeral 138 denotes first program determination means which determines the program to be executed from the selected data information output by the first data determination means 137 by referring to the correspondence information (data-program correspondence information) read from the correspondence table holding means 129. Reference numeral 139 denotes data reproducing means which starts the program indicated by the selected program information output by the first program determination means 138, reproduces the data indicated by the selected data information output by the first data determination means 137, and outputs reproduced data 1.
Reference numeral 140 denotes second data determination means which receives the next selected surface information from the next selected surface determination means 136, determines the data corresponding to the surface that will become selectable next by referring to the correspondence information (surface-data correspondence information) read from the correspondence table holding means 129, and outputs next selected data information. Reference numeral 141 denotes second program determination means which determines the program to be executed from the next selected data information output by the second data determination means 140 by referring to the correspondence information (data-program correspondence information) read from the correspondence table holding means 129. Reference numeral 142 denotes next data reproducing means which starts the program indicated by the selected program information output by the second program determination means 141, reproduces the data indicated by the next selected data information output by the second data determination means 140, and outputs reproduced data 2. Reference numeral 143 denotes mixing means which receives reproduced data 1 and reproduced data 2 and creates and outputs mixed data according to the rotation of the three-dimensional rotating object, and 144 denotes data output means which presents the mixed data from the mixing means 143 as an image or as sound.

Next, the operation of the data selection execution device according to the eighth embodiment will be described. In the data selection execution device according to the eighth embodiment, when the data to be selected is data that varies with time, such as audio or music data, moving image data, or audio or music data accompanying moving image data, the switch from the data corresponding to the surface facing the front at a given moment to the data of the next surface is made by fading in and fading out according to a pattern of mixing ratios of sound volume and luminance level that depends on the rotation angle.

In the data selection execution device according to the eighth embodiment, when the data selection operation mode starts, the initial coordinates in the three-dimensional virtual space of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 are read out, and the perspective transformation means 106 uses these initial coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space containing the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. That is, during the initial display operation of the selection operation mode, the coordinate transformation means 105 passes the initial coordinates read from the three-dimensional model coordinate holding means 104 to the perspective transformation means 106 without transforming them. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes the regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and post-hidden-surface-processing raster information. The texture mapping means 110 pastes the textures read from the texture holding means 109 onto the post-hidden-surface-processing raster information produced by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108. The correspondence between each surface of the three-dimensional rotating object and the textures is obtained by reading the correspondence information (surface-texture correspondence information) from the correspondence table holding means 129. The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, into the post-texture-mapping frame information output by the texture mapping means 110, based on the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads the frame information held in the frame buffer 112 at a predetermined timing and displays the screen. The screen of the initial state of the data selection operation mode is thus displayed.
Here, in the eighth embodiment, the data reproducing means 139 and the next data reproducing means 142 respectively reproduce the data corresponding to the surface of the three-dimensional rotating object that faces the front and the data corresponding to the surface that will face the front next, and output them to the mixing means 143. For example, when the three-dimensional rotating object has the shape shown in FIG. 2, in the initial display state the data reproducing means 139 reproduces the data corresponding to surface 1 and the next data reproducing means 142 reproduces the data corresponding to surface 2, and both outputs are sent to the mixing means 143. In the initial display state, the mixing means 143 outputs a composite signal whose mixing ratio makes the reproduced signal of the data corresponding to surface 1 maximal and the reproduced signal of the data corresponding to surface 2 minimal. That is, in the initial display state only the reproduced signal of the data corresponding to surface 1 is output to the data output means 144, and the data output means 144 presents this reproduced signal as an image or as sound. As a method of image display, the reproduced signal is shown together with the three-dimensional rotating object on the display screen 200, for example as shown in FIG. 11.
When the user inputs a rotation instruction control signal from the rotation instruction input means 101 while the initial screen is displayed, the parameter changing means 103 reads the pre-change parameters (here, the parameters of the initial state) from the parameter holding means 102 based on the rotation instruction control signal from the rotation instruction input means 101, changes the parameters, records them in the parameter holding means 102 as post-change parameters, and outputs a counter control signal to the counter means 114.
As for the rotation instruction input means 101, as in the first embodiment, operations such as pressing the cursor keys of a remote control or keyboard or moving a mouse may be associated with the rotation of the three-dimensional rotating object.
The coordinate transformation means 105 reads the post-change parameters recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 using the post-change parameters, and outputs the resulting post-change model coordinates to the perspective transformation means 106. The perspective transformation means 106 uses these post-change model coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space containing the three-dimensional rotating object onto the display screen and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as during the initial display operation of the data selection operation mode, and the screen after the input of the rotation instruction control signal is displayed.
For example, when the three-dimensional rotating object has the shape shown in FIG. 2(a), surface 1 is displayed facing the front in the initial state (time t0), as shown in FIG. 14(a), and the input of a rotation instruction control signal produces an image in which surface 2 faces the front at time t1. At this point, in the eighth embodiment, the mixing means 143, which in the initial display state output a composite signal whose mixing ratio made the reproduced signal of the data corresponding to surface 1 maximal and the reproduced signal of the data corresponding to surface 2 minimal, gradually lowers the mixing ratio of the reproduced signal of the data corresponding to surface 1 and gradually raises the mixing ratio of the reproduced signal of the data corresponding to surface 2, so that at time t1 it outputs a composite signal whose mixing ratio makes the reproduced signal of the data corresponding to surface 1 minimal and the reproduced signal of the data corresponding to surface 2 maximal. As a result, as shown in FIG. 14(b), the presentation of the reproduced signal of the data corresponding to surface 1 and the presentation of the reproduced signal of the data corresponding to surface 2 are switched by a crossfade. When the output of the reproduced signal of the data corresponding to surface 1 reaches zero, the data reproducing means 139 switches the data it reproduces from the data corresponding to surface 1 to the data corresponding to surface 2, and the next data reproducing means 142 switches the data it reproduces from the data corresponding to surface 2 to the data corresponding to surface 3. The mixing means 143 then outputs a composite signal such that the presentations of the reproduced signals of the data corresponding to surfaces 2 and 3 cross-fade in step with the displayed image switching from the state in which surface 2 faces the front to the state in which surface 3 faces the front. By repeating this operation, the selection object, in which a texture indicating the data content is pasted on each surface of the three-dimensional rotating object, is displayed on the display screen, and the music data or moving image data associated with the surface facing the front is given as an auxiliary presentation without interruption.
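A minimal sketch of the cross-fade just described is given below (the linear mixing ramp and the names are assumptions; the specification only requires some pattern of mixing ratios tied to the rotation angle):

```python
# Illustrative sketch of cross-fading reproduced data 1 (current face) and
# reproduced data 2 (next face); the linear ramp is an assumption.

def crossfade(sample1: float, sample2: float, progress: float) -> float:
    """Mix the two reproduced signals.

    progress runs from 0.0 (current face fully front, time t0) to 1.0
    (next face fully front, time t1); sound volume or luminance level
    follows the same mixing ratio.
    """
    progress = min(max(progress, 0.0), 1.0)
    return (1.0 - progress) * sample1 + progress * sample2

# At t0 only face 1 is heard/seen, at t1 only face 2.
assert crossfade(1.0, 0.0, 0.0) == 1.0
assert crossfade(1.0, 0.0, 1.0) == 0.0
```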
When the user inputs a selection control signal from the selection input means 115 while the surface on which the data desired to be processed is displayed faces the front, the selected surface determination means 116 outputs a selection display signal indicating that the surface indicated by the selected surface information it was outputting at that moment has actually been selected. The first data determination means 137 and the first program determination means 138 convey the selection display signal to the data reproducing means 139. On receiving the selection display signal, the data reproducing means 139 reproduces the selected data again from the beginning using the program currently being executed, and outputs the reproduced data to the mixing means 143 together with the selection display signal. On receiving the selection display signal, the mixing means 143 stops mixing reproduced data 1 and reproduced data 2 and outputs reproduced data 1 and the selection display signal to the data output means 144. On receiving the selection display signal, the data output means 144 switches the screen from the display of the selection object to a data display screen and displays reproduced data 1.
As described above, in the data selection execution device according to the eighth embodiment, a selection object in which a texture indicating the data content is pasted on each surface of the three-dimensional rotating object placed in the three-dimensional virtual space is displayed on the screen, the music data or moving image data associated with the surface facing the front is given as an auxiliary presentation without interruption, and when the user performs a predetermined selection operation, the data associated with the surface facing the user's viewpoint most directly is reproduced. By using a three-dimensional rotating object in a three-dimensional virtual space, the device evokes the image of rolling a cylindrical rotating body in the real world and realizes an intuitive operation environment that is easy to become familiar with even for users unaccustomed to personal computers, and because the music data or moving image data presented together with the selection object is never interrupted, a data selection execution device with which the user can select data comfortably can be realized. Although the eighth embodiment has been described for the case where the presentations of the reproduced signals of the data associated with the selected surface and with the next selected surface are switched by a crossfade, the presentation of the reproduced signal of the data associated with the selected surface may instead be faded out and then the presentation of the reproduced signal of the data associated with the next selected surface faded in, as shown in FIG. 14(c). In that case, since the two data items need not be reproduced simultaneously, there is no need to duplicate the data determination means, the program determination means, and the data reproducing device.
Also, the eighth embodiment has been described for a configuration in which the rotation display control means, which gives a rotation display control signal for displaying an image of the selection object rotating in the three-dimensional virtual space about the above-mentioned central axis, comprises the rotation instruction input means 101, the parameter holding means 102, and the parameter changing means 103, that is, a configuration in which the rotation instruction is input manually. Needless to say, however, rotation angle change pattern holding means may be provided as in the second embodiment so that the rotation display control is performed automatically.
Also, the eighth embodiment has been described for a configuration in which the selected surface determination means 116 determines the surface facing the front on the display screen based on the count information output by the counter means 114. Needless to say, however, the surface facing the front on the display screen may instead be determined based on depth information as in the third embodiment, or based on rotation angle information as in the fourth embodiment.
Recently, so-called three-dimensional sound has also come into practical use: by applying signal processing techniques, ordinary audio is extended and output from the speakers with the position of the sound source in three-dimensional space taken into account, so that a sound can seem to come from overhead or to move from right to left. In the data selection execution device according to the eighth embodiment, this three-dimensional sound technique may be applied so that the reproduction sound source position of the audio data associated with each surface of the three-dimensional rotating object moves in accordance with the rotation of the three-dimensional rotating object. By listening to reproduced sound whose source position moves in this way, the user can easily recognize which surface is currently selectable.
FIG. 15 is a diagram for explaining the switching of the reproduced sound presentation when the three-dimensional sound technique is applied in the data selection execution device according to the eighth embodiment; the upper row of the figure shows how the three-dimensional rotating object appears on the display screen. In this example the three-dimensional rotating object has six surfaces, the central axis of rotation is placed vertically in the three-dimensional virtual space, and the object rotates counterclockwise when viewed along the central axis of rotation (see the lower row of FIG. 15). As shown in the figure, at the moment of FIG. 15(a) (the initial state) the sound is presented so that the sound source position of the audio data corresponding to surface 1 (corresponding to reproduced data 1 in FIG. 13) is at the center of the screen and the sound source position of the audio data corresponding to surface 2 (corresponding to reproduced data 2 in FIG. 13) is in the space to the left as seen facing the screen. Then, as the three-dimensional rotating object rotates, the sound source positions are controlled, as shown in FIG. 15(b), so that the sound source position of the audio data corresponding to surface 1 moves toward the space on the right as seen facing the screen while the sound source position of the audio data corresponding to surface 2 approaches the center of the screen; at the moment of FIG. 15(c) (surface 2 facing the front) the sound is presented so that the sound source position of the audio data corresponding to surface 2 is at the center of the screen and that of the audio data corresponding to surface 1 is in the space to the right as seen facing the screen. By reproducing and presenting the audio data corresponding to each surface of the three-dimensional rotating object so that the sound source position moves in step with the rotation of the object, the user can easily recognize from the stereophonic sound which audio data is currently selectable. As a method of determining the sound source position in this audio presentation control, the sound source of the audio data corresponding to a surface may be placed at a predetermined distance on the extension of the straight line connecting the rotation axis and the center of that surface, as shown in FIG. 16. Other methods may also be used; for example, as shown in FIG. 17, the sound source of the audio data corresponding to a surface may be placed by projecting the point at the predetermined distance on the extension of the straight line connecting the rotation axis and the center of the surface onto a straight line parallel to the display screen.
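A possible way to compute such a moving source position, assuming the first placement method above (source on the extension of the line from the vertical rotation axis through the face centre, at a fixed distance), is sketched below; the face count, distance value, and coordinate convention are assumptions made only for illustration:

```python
# Illustrative sketch: place each face's sound source on the extension of the
# line from the vertical rotation axis through the face centre.
import math

NUM_FACES = 6
SOURCE_DISTANCE = 2.0  # assumed distance of the sound source from the rotation axis

def source_position(face_index: int, rotation_angle: float) -> tuple[float, float]:
    """Return the (x, z) position of the sound source for one face.

    face_index counts faces 0..NUM_FACES-1; rotation_angle is the current
    rotation of the object in radians; +z points out of the screen toward
    the viewer, so a face directly in front has x close to 0.
    """
    angle = rotation_angle + 2.0 * math.pi * face_index / NUM_FACES
    return (SOURCE_DISTANCE * math.sin(angle), SOURCE_DISTANCE * math.cos(angle))

# As rotation_angle changes, each source sweeps from one side toward the
# centre of the screen and out to the other side, matching the rotation.
```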
Embodiment 9.
FIG. 18 is a block diagram showing the configuration of a video display device according to Embodiment 9 of the present invention.
In FIG. 18, reference numeral 1101 denotes video receiving means which receives an input signal transmitted via broadcast or a network and outputs an input video signal; 1104 denotes memory means which holds the input video signal; 1103 denotes memory input/output control means which writes the input video signal into the memory means 1104, outputs a memory control signal to the memory means 1104 in accordance with region cut-out information indicating the positions at which the regions to be used as textures are cut out of the input video signal, and reads partial video signals out of the memory means 1104; 1102 denotes parameter separation means which separates three-dimensional coordinate information and region cut-out information from parameter information composed of the two, outputs the region cut-out information to the memory input/output control means 1103, and outputs the three-dimensional coordinate information to the object position determination means 1105; 1105 denotes object position determination means which places three-dimensional objects in the three-dimensional virtual space based on the three-dimensional coordinate information from the parameter separation means 1102, outputs object coordinate information of the three-dimensional objects in the three-dimensional virtual space, and, in response to a user input, outputs object arrangement order information derived from the object coordinate information; 1110 denotes object position comparison means which compares the positions of the objects based on the object arrangement order information from the object position determination means 1105, selects an object under a predetermined condition, and outputs selected object information; 1111 denotes channel determination means which determines the channel corresponding to the selected object from the selected object information from the object position comparison means 1110 and the channel correspondence information from the parameter separation means 1102, and outputs channel information; 1106 denotes perspective projection transformation means which perspectively projects the object coordinate information of the three-dimensional objects from the object position determination means 1105 onto the display projection plane and converts it into projection plane coordinate information; 1107 denotes rasterizing means which, based on the projection plane coordinate information from the perspective projection transformation means 1106, texture-maps the partial video signals read out by the memory input/output control means 1103 onto the predetermined surfaces of the three-dimensional objects and generates and outputs a three-dimensional video signal; 1108 denotes frame memory means which holds the three-dimensional video signal from the rasterizing means 1107 and outputs an output video signal at a predetermined timing; and 1109 denotes video display means which displays the output video signal from the frame memory means 1108 or the input video signal from the video receiving means 1101.
Next, the operation of the video display device according to Embodiment 9 will be described. The video display device according to Embodiment 9 cuts out regions to be used as textures from an input video signal transmitted via broadcast or a network, such as a multi-screen input video signal, pastes these textures onto the surfaces of a three-dimensional rotating object placed in the three-dimensional virtual space, and performs channel selection with it.
In the video display device according to Embodiment 9, the initial channel of the input signal received by the video receiving means 1101 is a multi-screen video composed of a plurality of partial videos.
First, when an input signal composed of a predetermined number of independent videos, such as a split screen or a multi-screen, is input to the video receiving means 1101, the video receiving means 1101 outputs the input video signal to the memory input/output control means 1103.
The memory input/output control means 1103 outputs a memory control signal to the memory means 1104 based on the cut-out coordinates of the region cut-out information, extracts partial video signals from the input video signal held in the memory means 1104, and outputs the partial video signals to the rasterizing means 1107.
The rasterizing means 1107 pastes the partial video signals as textures onto the three-dimensional objects perspectively projected onto the display, based on the projection plane coordinate information from the perspective projection transformation means 1106. At this time, the rasterizing means 1107 must repeat this processing as many times as there are partial videos making up the multi-screen, so the parameter output control information output by the rasterizing means 1107 is output to the parameter separation means 1102 that number of times.
By repeating the three-dimensional drawing processing in this way, the video generated by the rasterizing means 1107 is output to the frame memory means 1108 as a three-dimensional video signal.
The frame memory means 1108 outputs the output video signal to the video display means 1109 at a predetermined display timing, and the video is viewed. The video displayed at this time is a three-dimensional rotating object whose surfaces carry, as textures, the partial video signals separated from the input video signal after the object has been placed in the three-dimensional virtual space; FIG. 2 shows an example of such a three-dimensional rotating object.
On the other hand, when a user input occurs, such as the selection button being pressed, the object position determination means 1105 determines the positions of the three-dimensional objects in the three-dimensional virtual space based on the three-dimensional coordinate information and outputs the object coordinate information to the perspective projection transformation means 1106. The perspective projection transformation means 1106 perspectively projects the object coordinate information onto the display projection plane and outputs it to the rasterizing means 1107 as projection plane coordinate information.
When the user makes an input while the surface on which the channel desired to be displayed is shown faces the front, the object position determination means 1105 outputs the object arrangement order information to the object position comparison means 1110, which compares the positional relationships among the objects, determines the object selected under a predetermined condition, and outputs the selected object information to the channel determination means 1111.
The channel determination means 1111 refers to the channel correspondence information output from the parameter separation means 1102, determines the channel corresponding to the selected object information output from the object position comparison means 1110, and outputs it to the video receiving means 1101 as channel information.
The video receiving means 1101 switches the reception channel based on the channel information and outputs the input video signal to the video display means 1109. When the video display means 1109 accepts the input of this input video signal, it stops displaying the output video signal from the frame memory means 1108 and switches to displaying the input video signal. In this case, the displayed video is a full-screen display of the channel selected by the user.
FIG. 19 is a conceptual diagram of the three-dimensional display according to Embodiment 9. In FIG. 19, reference numeral 201 denotes the input video signal in the case of a four-way multi-screen, 202 denotes the three-dimensional rotating object placed in the three-dimensional virtual space, and 203 denotes the display projection plane onto which the three-dimensional rotating object is projected when shown on the display. In the present invention, the three-dimensional rotating object placed in the three-dimensional virtual space is a three-dimensional object composed of a plurality of surfaces, each surface being placed at a regular interval around a central axis. FIG. 19 shows a case in which the three-dimensional rotating object is composed of four surfaces and the central axis of rotation is placed vertically in the three-dimensional virtual space.
When the input video signal 201 is input to the video receiving means 1101 as the input signal, the video receiving means 1101 outputs the input video signal to the memory input/output control means 1103. The partial video signals extracted from the memory input/output control means 1103 based on the region cut-out information are output to the rasterizing means 1107, and each partial video is pasted as a texture onto one surface of the three-dimensional object 202. The three-dimensional object 202 generated by the rasterizing means 1107 is projected onto the display projection plane 203.

FIG. 20 is an explanatory diagram of the information required for the three-dimensional display in Embodiment 9. FIG. 20(a) shows the input video of the four-way multi-screen, with the vertex coordinates (1) of the cut-out regions along the division boundaries of the partial videos shown at the bottom of the figure. FIG. 20(b) shows the three-dimensional object; shown at the bottom of the figure are the vertex coordinates (2) of the three-dimensional object, the correspondence (3) between the vertex coordinates of the three-dimensional object and the region cut-out coordinates of the partial videos, and, as the information required for the perspective transformation, the distance from the viewpoint to the display projection plane and the distance from the viewpoint to the origin of the three-dimensional virtual space (4).

As shown in FIG. 18, the parameter information input to the parameter separation means 1102 is composed of the coordinate information and perspective transformation information shown in (1) to (4) of FIG. 20, together with the channel correspondence information associated with each surface of the three-dimensional object. From the parameter separation means 1102, the three-dimensional coordinate information composed of the three-dimensional object vertex coordinates (2), the correspondence (3) between the three-dimensional object vertex coordinates and the cut-out coordinates, and the perspective transformation information (4) is output to the object position determination means 1105. The cut-out coordinates (1) are output to the memory input/output control means 1103 as the region cut-out information, and the channel correspondence information is output to the channel determination means 1111.
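For concreteness, the parameter information bundle and its separation could be modelled roughly as follows (an illustrative sketch only; the field names and the container type are assumptions, not part of the specification):

```python
# Illustrative sketch of the parameter information described above: cut-out
# coordinates (1), object vertex coordinates (2), vertex-to-cut-out
# correspondence (3), perspective data (4), and channel correspondence.
from dataclasses import dataclass

@dataclass
class ParameterInfo:
    cutout_coords: list        # (1) vertex coordinates of each cut-out region
    object_vertices: list      # (2) vertex coordinates of the 3D object
    vertex_to_cutout: dict     # (3) object vertex -> cut-out vertex mapping
    view_distances: tuple      # (4) (viewpoint-to-projection-plane, viewpoint-to-origin)
    channel_map: dict          # face index -> broadcast channel

def separate(p: ParameterInfo):
    """Split the bundle the way the parameter separation means is described to:
    3D coordinate info, region cut-out info, and channel correspondence info."""
    three_d_info = (p.object_vertices, p.vertex_to_cutout, p.view_distances)
    return three_d_info, p.cutout_coords, p.channel_map
```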
Accordingly, as regards the three-dimensional coordinate information output from the parameter separation means 1102, a three-dimensional animation display can be realized by preparing, as the parameter information, coordinate information whose values change with time.
FIG. 21 is an explanatory diagram of the channel selection method according to Embodiment 9. In FIG. 21, reference numeral 204 denotes the input video signal in the case of a four-way multi-screen; the three-dimensional objects corresponding to the four partial videos are arranged in a circle and rotate, and a three-dimensional animation is displayed. Reference numeral 205 is a view of the three-dimensional objects from above, with time passing from left to right in the figure; 206 denotes the video on the display projection plane, and 207 denotes the selected video. In step S1, when the selection button is pressed at the time indicated by the arrow, a channel is selected according to a predetermined criterion. In step S2, the criterion used is to select the object that is closest to the viewpoint and has the largest display area. In FIG. 21 the corresponding partial video is circle 1, and the video switched to the channel corresponding to circle 1 is displayed (207).
FIG. 22 is an explanatory diagram of the criteria for channel selection in Embodiment 9. FIG. 22(a) shows the first criterion for channel selection and FIG. 22(b) the second criterion. In FIG. 22, reference numerals 208 and 210 are views of the three-dimensional objects from above, and 209 and 211 denote the videos on the display projection plane.
The criterion of FIG. 22(a) is, as in the description of FIG. 21, to select the channel corresponding to the object that is closest to the viewpoint and has the largest display area.
The criterion of FIG. 22(b) is based on how much an object is tilted with respect to the display plane, that is, on the absolute value of the angle formed between the reference axis and the straight line (the dotted line in the figure) defined by the reference position of the object and the center of the object. In FIG. 22(b), P-Q is the reference axis, O is the center of rotation, A1 is the reference position of circle 1, A2 that of circle 2, A3 that of circle 3, and A4 that of circle 4. Which of the surfaces, circle 1 to circle 4, is selected is therefore decided by comparing the angles A1-O-P, A2-O-P, A3-O-P, and A4-O-P and selecting the smallest one. In the case of FIG. 22(b), the angle A1-O-P is the smallest, so circle 1 is selected.
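Both criteria can be expressed as simple comparisons, as in the following sketch (illustrative only; the per-face depth and angle values are assumed to have been derived already from the object arrangement order information):

```python
# Illustrative sketch of the two channel-selection criteria described above.
import math

def select_by_depth(face_depths: dict) -> int:
    """First criterion: pick the face whose centre is closest to the viewpoint
    (and therefore has the largest displayed area)."""
    return min(face_depths, key=face_depths.get)

def select_by_angle(face_angles: dict) -> int:
    """Second criterion: pick the face whose reference-position line makes the
    smallest absolute angle with the reference axis P-Q."""
    return min(face_angles, key=lambda f: abs(face_angles[f]))

# Example with four faces: face 1 is both nearest and least tilted.
print(select_by_depth({1: 1.0, 2: 2.5, 3: 4.0, 4: 2.5}))                        # -> 1
print(select_by_angle({1: 0.1, 2: math.pi / 2, 3: math.pi, 4: -math.pi / 2}))   # -> 1
```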
Note that if affine transformation means is used in place of the perspective projection transformation means 1106, the amount of computation can be reduced, because the perspective projection transformation means 1106 performs three-dimensional coordinate calculations whereas the affine transformation means performs only two-dimensional coordinate calculations.
FIG. 23 is an explanatory diagram of the difference between the perspective projection transformation and the affine transformation. In FIG. 23, reference numeral 212 denotes the source image for texture mapping, shown as a grid pattern for ease of explanation; 213 denotes the image in the case of the perspective projection transformation and 214 the image in the case of the affine transformation. In the perspective projection image 213 the grid becomes wider toward the front, whereas in the affine image 214 the grid spacing is nearly uniform. The perspective projection transformation can therefore express a stronger sense of depth than the affine transformation, but in either case the sense of depth coming from the outline of the object is preserved.
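The difference can be seen in a minimal sketch of the two mappings (a simplified pinhole model with screen distance d is assumed here; these are not the patent's own equations):

```python
# Illustrative comparison of perspective projection and an affine mapping.

def perspective_project(x: float, y: float, z: float, d: float) -> tuple[float, float]:
    """Perspective projection: screen coordinates shrink with depth z,
    so nearer texture rows appear wider (full sense of depth)."""
    return (d * x / z, d * y / z)

def affine_map(x: float, y: float, a: float, b: float, tx: float, ty: float) -> tuple[float, float]:
    """Affine mapping: a 2-D linear map plus translation, independent of z,
    so texture rows stay evenly spaced; it is cheaper, and depth is only
    suggested by the outline of the object."""
    return (a * x + tx, b * y + ty)
```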
As described above, in the video display device according to Embodiment 9, for an input video signal transmitted via broadcast or a network, or an input signal composed of a predetermined number of independent videos such as a split screen or a multi-screen, textures are pasted onto the predetermined surfaces of three-dimensional objects and displayed, and the video corresponding to the predetermined selection operation performed by the user is then displayed. Therefore, when selecting a program with a remote control or the like, the step of selecting the desired program by moving a cursor can be omitted, and even when the number of divisions increases and the partial video per program becomes small, the partial video can be shown enlarged by placing the object near the viewpoint in the three-dimensional virtual space, making a visually easy-to-understand video presentation possible.
Also, the effect of a three-dimensional animation can be obtained by having the coordinates of the three-dimensional objects of the predetermined three-dimensional shape information change with time.
Also, by using an affine transformation in place of the perspective projection transformation means 1106, the amount of computation can be reduced while a certain sense of depth is maintained.
Although the video display device according to Embodiment 9 has been described with an example in which the three-dimensional rotating object placed in the three-dimensional virtual space is composed of four surfaces and the central axis of rotation is placed vertically in the three-dimensional virtual space, the number of surfaces composing the three-dimensional rotating object is not limited to four; it may be one to three, or five or more, and the rotating body that is displayed may be changed to match the input video signal to be handled. The central axis of rotation may also be placed horizontally or obliquely in the three-dimensional virtual space.
Also, although in the video display device of Embodiment 9 the parameter separation means 1102 separates the parameter information into the region cut-out information and the three-dimensional coordinate information, the invention is not limited to this configuration; the parameter information and the region cut-out information may instead be multiplexed into the input signal, input to the video receiving means 1101, and separated there.
Embodiment 10.
FIG. 24 is a block diagram showing the configuration of a video display device according to Embodiment 10 of the present invention.
In FIG. 24, the same reference numerals as in FIG. 18 denote the same or corresponding parts. Reference numeral 1301 denotes region separation means which separates regions from the input video signal output by the video receiving means 1101 on the basis of the region cut-out information output by the parameter separation means 1102 and outputs a video signal for memory storage, which is held in the memory means 1104 via the memory input/output control means 1103. Also, unlike Embodiment 9, the memory means 1104 does not hold the input signal itself but holds only the partial video signals required for the texture mapping processing of the rasterizing means 1107.
Next, the operation of the video display device according to Embodiment 10 will be described. In the video display device according to Embodiment 10, when regions are cut out of the input video signal and pasted onto the surfaces of the objects in the three-dimensional virtual space, the whole video is not held in memory; only the cut-out regions are held in memory.
The video display device according to Embodiment 10 is the same as Embodiment 9 except for the operation of the structure to which the region separation means 1301 has been added, so only the parts that differ from Embodiment 9 will be described.
First, when the input signal is input to the video receiving means 1101, parameter information including the vertex coordinates of the cut-out regions along the division boundaries of the partial videos of the input video signal is input to the parameter separation means 1102. The region cut-out information output from the parameter separation means 1102 is input to the region separation means 1301 as well as to the memory input/output control means 1103. The region separation means 1301 then separates the regions of the input video signal output from the video receiving means 1101 in accordance with the region cut-out information and outputs them to the memory input/output control means 1103 as a video signal for memory storage.
The memory input/output control means 1103 outputs a memory control signal to the memory means 1104 based on the cutout coordinates of the region cutout information, extracts a partial video signal from the memory-storage video signal held in the memory means 1104, and outputs it to the rasterizing means 1107.
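As an illustration only, the following is a minimal sketch of how such a region cutout step might be realized in software; it is not taken from the patent, and the frame layout, the numpy representation, and the function name are assumptions made for the example.

```python
import numpy as np

def cut_out_regions(frame, cutout_info):
    """Extract only the listed rectangular regions from a decoded frame.

    frame       -- H x W x 3 array holding the full multi-screen picture
    cutout_info -- list of (x0, y0, x1, y1) rectangles, one per partial video
    Returns a dict mapping region index -> cropped partial frame; only these
    crops would be written into the texture memory.
    """
    partials = {}
    for idx, (x0, y0, x1, y1) in enumerate(cutout_info):
        partials[idx] = frame[y0:y1, x0:x1].copy()   # keep only this region
    return partials

# Example: a hypothetical 4-split multi-screen frame of 720 x 480 pixels.
frame = np.zeros((480, 720, 3), dtype=np.uint8)
cutouts = [(0, 0, 360, 240), (360, 0, 720, 240),
           (0, 240, 360, 480), (360, 240, 720, 480)]
partials = cut_out_regions(frame, cutouts)
print({k: v.shape for k, v in partials.items()})
```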
Unlike the ninth embodiment, the memory means 1104 does not hold the entire input video signal; it holds only the partial video signal required for the texture mapping processing of the rasterizing means 1107.
FIG. 25 is an explanatory diagram of partial-video memory retention according to the tenth embodiment of the present invention. In FIG. 25, 215 is the input video signal in the case of a four-split multi-screen, 216 is the partial video signal to be held in memory, 217 is the three-dimensional object viewed from above (time elapses from left to right in the figure), and 218 is the video on the display projection plane.
From the input video signal 215, a region is cut out, in accordance with the region cutout information from the parameter separating means 1102, as the partial video signal 216 to be held in the memory means 1104, and only this cut-out region is held in the memory means 1104. The partial video signal 216 held in the memory means 1104 is output to the rasterizing means 1107 and texture-mapped onto a predetermined face of the three-dimensional object. Consequently, only the video 218 on the display projection plane is stored in memory, and video that is not displayed is not retained. For example, taking the leftmost drawing of 217, the three-dimensional object seen from above, the screen of circle 1 displayed on the display is held in the memory means 1104 as the partial video signal 216, while the other screens, circle 2 through circle 4, are not held in the memory means 1104.
As described above, in the video display device of the tenth embodiment, an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network is received by the video receiving means 1101, which outputs the input video signal; regions are separated from the input video signal in accordance with the region cutout information; and, when a region is pasted onto a face of an object in the three-dimensional virtual space, only the cut-out region, rather than the entire video, is held in memory, so that a reduction in the amount of memory can be achieved.
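The saving comes from the fact that, at any instant, only the faces visible on the display projection plane need their textures resident. A rough sketch of that bookkeeping is given below under the assumption that a face is treated as visible when its outward direction is within a fixed angle of the viewing direction; the angular test and the data layout are hypothetical, not the patent's criterion.

```python
def visible_faces(face_angles_deg, rotation_deg, fov_half_deg=60.0):
    """Return the indices of faces currently turned toward the viewer.

    face_angles_deg -- nominal facing angle of each face around the ring
    rotation_deg    -- current rotation of the 3D object
    A face is kept only if it is within fov_half_deg of facing the camera,
    so only those faces' partial videos would be retained in memory.
    """
    keep = []
    for i, base in enumerate(face_angles_deg):
        delta = (base + rotation_deg) % 360.0        # angle to viewing direction
        delta = min(delta, 360.0 - delta)
        if delta <= fov_half_deg:
            keep.append(i)
    return keep

# Four faces arranged at 90-degree steps; print which are held as rotation proceeds.
faces = [0.0, 90.0, 180.0, 270.0]
for rot in (0.0, 30.0, 90.0):
    print(rot, visible_faces(faces, rot))
```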
According to the video display device of the tenth embodiment described above, the parameter information is separated into the region cutout information and the three-dimensional coordinate information by the parameter separating means 1102; however, the invention is not limited to this. The parameter information and the region cutout information may instead be multiplexed into the input signal, input to the video receiving means 1101, and separated there.
Embodiment 11.
FIG. 26 is a block diagram showing the configuration of a video display device according to Embodiment 11 of the present invention.
In FIG. 26, the same reference numerals as in FIG. 18 denote the same or corresponding parts. Unlike the parameter separating means 1102 of the ninth embodiment, 1401 is parameter generating means which automatically generates the three-dimensional coordinate information and the region cutout information based on region-count information; 1402 is video analyzing means which analyzes the input video signal, a multi-screen video composed of a plurality of partial videos input from the video receiving means 1101, measures the number of partial videos, and outputs the region-count information to the parameter generating means 1401.
Next, the operation of the video display device according to Embodiment 11 will be described. The video display device of this embodiment recognizes, after reception, the number of divisions of the video transmitted as a multi-screen, and generates the shape information of the three-dimensional object according to that number of divisions.
The video display device of Embodiment 11 is the same as that of the ninth embodiment except for the operation relating to the added parameter generating means 1401 and video analyzing means 1402; therefore, only the parts that differ from the ninth embodiment are described.
First, the video receiving means 1101 receives an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network, and outputs the input video signal to the memory input/output control means 1103 and also to the video analyzing means 1402. The video analyzing means 1402 determines the predetermined number from the input video signal and outputs the region-count information to the parameter generating means 1401. Based on the region-count information, the parameter generating means 1401 automatically generates parameter information consisting of the three-dimensional coordinate information and the region cutout information, which indicates the positions at which the regions used as textures are cut out from the input video signal; based on the parameter output control information, it separates the region cutout information from the three-dimensional coordinate information, outputs the region cutout information to the memory input/output control means 1103, and outputs the three-dimensional coordinate information to the object position determining means 1105.
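One possible way to derive the cutout rectangles automatically from the measured region count is sketched below, assuming the partial videos are tiled in a near-square grid inside the frame; the grid rule and the function name are illustrative assumptions, since the patent does not fix a particular layout rule.

```python
import math

def generate_cutouts(n, frame_w, frame_h):
    """Derive n axis-aligned cutout rectangles for an n-way multi-screen.

    The frame is assumed to be tiled in 'cols' columns and enough rows to
    hold n tiles; e.g. n=4 -> 2x2, n=6 -> 3x2, n=9 -> 3x3.
    """
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    tile_w, tile_h = frame_w // cols, frame_h // rows
    rects = []
    for k in range(n):
        r, c = divmod(k, cols)
        rects.append((c * tile_w, r * tile_h,
                      (c + 1) * tile_w, (r + 1) * tile_h))
    return rects

print(generate_cutouts(4, 720, 480))   # four quadrants
print(generate_cutouts(9, 720, 480))   # 3x3 tiling
```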
FIG. 27 is an explanatory diagram of the generation of three-dimensional information according to Embodiment 11 of the present invention. FIG. 27(a) shows the input video signal for a two-way split, FIG. 27(b) for a four-way split, FIG. 27(c) for a six-way split, and FIG. 27(d) for a nine-way split. The lower drawing under each divided input video signal is an example, seen from above, of the arrangement of the automatically generated three-dimensional object.
As shown in FIG. 27, when an n-way-split video is input to the video receiving means 1101, the n-way-split input video signal is output to the video analyzing means 1402. The video analyzing means 1402 determines the number of divisions of the video (in this case, n); n faces of the three-dimensional object, onto which the divisions are pasted as textures, are then prepared according to the number of divisions and arranged at equal intervals on a circle so as to form an n-gon (lower drawings of FIG. 27).
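The circular arrangement can be expressed as placing each face center on a circle at equal angular steps, with each face turned about the vertical axis. The sketch below computes such placements; the radius and the outward orientation are assumptions for the example, since the text states only that the faces form an n-gon at equal intervals.

```python
import math

def ngon_face_placements(n, radius=1.0):
    """Place n faces at equal intervals on a circle (seen from above).

    Returns (center_x, center_z, yaw_degrees) for each face, where yaw is
    the rotation about the vertical axis so the face looks outward.
    """
    placements = []
    for k in range(n):
        angle = 2.0 * math.pi * k / n
        cx, cz = radius * math.cos(angle), radius * math.sin(angle)
        yaw = math.degrees(angle)
        placements.append((round(cx, 3), round(cz, 3), round(yaw, 1)))
    return placements

for n in (2, 4, 6, 9):            # the division counts of FIG. 27
    print(n, ngon_face_placements(n))
```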
As described above, in the video display device of Embodiment 11, an input signal composed of a predetermined number of partial videos and transmitted via broadcast or a network is received by the video receiving means 1101, which outputs the input video signal; the video analyzing means 1402 determines the number of divisions of the video; and the shape information of the three-dimensional object is generated according to the number of divisions, so that videos with several kinds of multi-screen composition can be handled.
In the video display device of Embodiment 11, FIG. 27 shows an example in which the faces of the three-dimensional object are arranged in a circle; however, the arrangement is not limited to this, and the faces may, for example, be arranged offset in the depth direction.
According to the video display device of Embodiment 11 described above, the parameter information is generated automatically by the parameter generating means 1401 based on the region-count information output from the video analyzing means 1402; however, the invention is not limited to this. The parameter information and the region cutout information may instead be multiplexed into the input signal, input to the video receiving means 1101, and separated there.
Embodiment 12.
FIG. 28 is a block diagram showing the configuration of a video display device according to Embodiment 12 of the present invention.
In FIG. 28, the same reference numerals as in FIG. 18 denote the same or corresponding parts. 1508 is video receiving means 1, which receives a first input signal transmitted via broadcast or a network and outputs a first input video signal composed of a predetermined number of partial videos; 1502 is video receiving means 2, which, based on channel information, selectively receives a second input signal transmitted via broadcast or a network and outputs a second input video signal. 1511 is enlarging/deforming means which enlarges and deforms the partial video signal output from the memory input/output control means 1103 and outputs a partial-video enlarged/deformed signal; 1505 is video switching means which switches, at a predetermined timing, between the three-dimensional output video signal output from the frame memory means and the partial-video enlarged/deformed signal output from the enlarging/deforming means 1511, and outputs an output video signal.
Next, the operation of the video display device according to Embodiment 12 will be described. The video display device of this embodiment switches the displayed video smoothly when switching to the full-screen display of the selected channel.
In the video display device of Embodiment 12, relative to the ninth embodiment, the video receiving means 1101 is replaced by the video receiving means 1 (1508) and the video receiving means 2 (1502), and the enlarging/deforming means 1511 and the video switching means 1505 are added; the operation is otherwise the same as in the ninth embodiment, so only the parts that differ from the ninth embodiment are described.
First, the video receiving means 1 (1508) receives the input signal 1, a multi-screen video channel composed of a plurality of partial videos, and outputs the input video signal 1 to the memory input/output control means 1103. As in the ninth embodiment, this input video signal 1 is used to generate the three-dimensionally displayed video. Meanwhile, the video receiving means 2 (1502) receives the input signal 2 based on the channel information 1218 output from the channel determining means 1111 and outputs the input video signal 2 to the video display means 1109. This input video signal 2 displays the selected channel in full screen. The enlarging/deforming means 1511 applies predetermined video-effect processing, such as enlargement and deformation, to the partial video signal output from the memory input/output control means 1103 and outputs it to the video switching means 1505 as the partial-video enlarged/deformed signal. The video switching means 1505 switches between the three-dimensional output video signal output from the frame memory means 1108 and the partial-video enlarged/deformed signal output from the enlarging/deforming means 1511, and outputs the output video signal to the video display means 1109. The video display means 1109 switches between the output video signal and the input video signal 2 for display.
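The role of the video switching means can be pictured as a per-frame choice of which signal reaches the display. The following sketch is a hypothetical selection routine, not the patent's implementation; the signal placeholders and the transition length are assumptions.

```python
def select_output(frame_3d, frame_zoom, channel_selected, frames_since_select,
                  transition_frames=30):
    """Choose which signal goes to the display for the current frame.

    Before a channel is selected, the 3D scene is shown; after selection the
    enlarged/deformed partial video is shown for 'transition_frames' frames,
    after which the receiver's full-screen signal takes over (returned here
    as the string 'full_screen' for illustration).
    """
    if not channel_selected:
        return frame_3d
    if frames_since_select < transition_frames:
        return frame_zoom
    return "full_screen"

# Example timeline: the selection happens at frame 10.
for t in range(0, 50, 10):
    selected = t >= 10
    print(t, select_output("3D", "zoom", selected, t - 10 if selected else 0))
```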
Compared with Embodiments 9 to 11, Embodiment 12 adds the enlarging/deforming means 1511 so that the switch between the three-dimensional display screen used for selecting a channel and the full-screen display of the selected channel is made smooth. FIG. 29 is an explanatory diagram of the video switching method of Embodiments 9 to 11, FIG. 30 is an explanatory diagram of the video switching method of Embodiment 12, and the difference between the two is explained below.
In FIG. 29, 219 is the input video signal in the case of a four-split multi-screen; the three-dimensional objects corresponding to the four partial videos are arranged in a circle and rotated, and a three-dimensional animation is displayed. 220 is the video of the three-dimensional objects projected onto the display, and 221 is the video of the selected channel. In FIG. 29, as soon as channel circle 1 is selected, the displayed video switches immediately from the three-dimensional display to the video of circle 1.
In FIG. 30, 224 is an input video signal indicating that the video of circle 1 has been selected from the input video signal 222, and 225 and 226 are input video signals in which the selected partial video of circle 1 is being enlarged and deformed. Elements that are the same as in FIG. 29 are given the same reference numerals and their description is omitted. In FIG. 30, when a channel is selected (circle 1 in the figure), the partial video corresponding to the selected circle 1 is displayed in step S3 while being enlarged and deformed, and in step S4, after a predetermined time, the display switches smoothly to the full-screen video of circle 1.
As described above, in the video display device of Embodiment 12, when switching to the full-screen display of the selected channel, the partial video used as the texture in the three-dimensional display is first enlarged and deformed and displayed, after which the display switches to full screen, so that smooth video switching can be achieved.
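One way to realize the enlargement of steps S3 and S4 is to interpolate the rectangle occupied by the selected partial image from its on-screen position toward the full display rectangle over a fixed time. The sketch below uses linear interpolation; the easing choice, the coordinates, and the step count are assumptions for illustration.

```python
def zoom_rect(start_rect, screen_rect, t):
    """Linearly interpolate a rectangle from its start position to full screen.

    start_rect, screen_rect -- (x0, y0, x1, y1)
    t                       -- transition progress in [0, 1]
    """
    return tuple(round(a + (b - a) * t, 1)
                 for a, b in zip(start_rect, screen_rect))

start  = (420, 150, 620, 300)   # where the selected face was drawn (hypothetical)
screen = (0, 0, 720, 480)       # full-screen target
for step in range(5):
    print(zoom_rect(start, screen, step / 4.0))
```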
According to the video display device of Embodiment 12 described above, the parameter information is separated into the region cutout information and the three-dimensional coordinate information by the parameter separating means 1102; however, the invention is not limited to this. The parameter information and the region cutout information may instead be multiplexed into the input signal 1, input to the video receiving means 1 (1508), and separated there.
Embodiment 13.
FIG. 31 is a block diagram showing the configuration of a channel selection device according to Embodiment 13 of the present invention.
In FIG. 31, the same reference numerals as in FIGS. 1 and 18 denote the same or corresponding parts. 145 is selection input means to which a selection input for the user to select a channel is input; 146 is selected-face determining means which, when the selection input is input from the selection input means 145, determines which of the plurality of faces constituting the three-dimensional rotating object faces the front on the display screen; 147 is correspondence table holding means which holds information indicating the correspondence among the plurality of faces constituting the three-dimensional rotating object, the texture information of the partial image corresponding to each channel, and the region cutout information for generating the partial image corresponding to each channel based on an externally input region information parameter; FIG. 32 shows an example of the correspondence table held by the correspondence table holding means 147. 148 is channel determining means which determines, based on the information held in the correspondence table holding means 147, which channel is associated with the face determined by the selected-face determining means 146, determines the channel to be switched to and displayed, and outputs selected-channel information to the video receiving means 150; 150 is video receiving means which receives an input signal transmitted via broadcast or a network, selects a channel based on the selected-channel information output from the channel determining means 148, and outputs an input video signal; 152 is memory means which holds the input video signal; and 151 is memory input/output control means which writes the input video signal into the memory means 152, outputs a memory control signal to the memory means 152 in accordance with the region cutout information input from the correspondence table holding means 147, and reads a partial video signal from the memory means 152.
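The correspondence table of FIG. 32 can be modeled as a per-face record tying together the face, its channel, its texture slot, and the cutout rectangle used to build that texture. The field names and example values below are hypothetical; FIG. 32 itself is not reproduced here.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FaceEntry:
    face: int                          # face number of the rotating object
    channel: str                       # channel associated with the face
    texture_id: int                    # texture slot used for this face
    cutout: Tuple[int, int, int, int]  # (x0, y0, x1, y1) in the input frame

correspondence_table = [
    FaceEntry(1, "channel A", 0, (0,   0,   360, 240)),
    FaceEntry(2, "channel B", 1, (360, 0,   720, 240)),
    FaceEntry(3, "channel C", 2, (0,   240, 360, 480)),
    FaceEntry(4, "channel D", 3, (360, 240, 720, 480)),
]

def channel_for_face(face_no):
    """Look up the channel associated with a given face of the rotating object."""
    for entry in correspondence_table:
        if entry.face == face_no:
            return entry.channel
    raise KeyError(face_no)

print(channel_for_face(2))
```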
The channel selection device of Embodiment 13 pastes the input signal, transmitted via broadcast or a network, as textures onto the faces of a three-dimensional rotating object placed in a three-dimensional virtual space and, when a predetermined operation is performed by the user, displays the channel associated with the face that most directly faces the user's viewpoint.
First, when an input signal transmitted via broadcast or a network is input to the video receiving means 150, the input video signal is output from the video receiving means 150 to the memory input/output control means 151. The memory input/output control means 151 outputs a memory control signal to the memory means 152 based on the cutout coordinates of the region cutout information, extracts a partial video signal from the input video signal held in the memory means 152, and outputs the partial video signal to the texture holding means 149.
Next, in the channel selection device of Embodiment 13, when the channel selection operation mode starts, the initial coordinates in the three-dimensional virtual space of the three-dimensional rotating object held in the three-dimensional model coordinate holding means 104 are read out, and the perspective transformation means 106 uses these initial coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space, including the three-dimensional rotating object, onto the display screen and outputs projection plane coordinates. The hidden surface processing means 107 reads the projection plane coordinates from the perspective transformation means 106, excludes regions that are hidden and not displayed, extracts only the displayed regions, and outputs depth information and post-hidden-surface-processing raster information. The texture mapping means 110 pastes the textures read from the texture holding means 149 onto the post-hidden-surface-processing raster information, in which the depth information has been taken into account by the hidden surface processing means 107, based on the depth information held by the depth information holding means 108. Here, the correspondence between each face of the three-dimensional rotating object and the textures is obtained by reading the correspondence information (face-texture correspondence information) from the correspondence table holding means 147. The rendering means 111 draws all pixel information, such as the color and brightness of each pixel, into the post-texture-mapping frame information output by the texture mapping means 110, based on the depth information held by the depth information holding means 108. The frame information drawn by the rendering means 111 is held in the frame buffer 112, and the screen display means 113 reads out the frame information held in the frame buffer 112 at a predetermined timing and displays the screen. As a result, the screen in the initial state of the channel selection operation mode is displayed.
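To make the rendering chain concrete, the following sketch projects the vertices of one face onto a display plane and reports the face's mean depth, which a hidden-surface stage could use to order or discard faces. It is a drastically simplified stand-in for the perspective transformation means 106 and the hidden surface processing means 107, assuming a pinhole camera at the origin looking along +z; the focal length and coordinates are illustrative.

```python
def project_point(p, focal=2.0):
    """Perspective-project a 3D point (x, y, z) onto the display plane z = focal."""
    x, y, z = p
    if z <= 0:
        raise ValueError("point is behind the viewpoint")
    return (focal * x / z, focal * y / z)

def project_face(vertices, focal=2.0):
    """Project a face and report its mean depth for hidden-surface ordering."""
    projected = [project_point(v, focal) for v in vertices]
    depth = sum(v[2] for v in vertices) / len(vertices)
    return projected, depth

# One square face of the rotating object, centred 5 units in front of the viewer.
face = [(-1, -1, 5), (1, -1, 5), (1, 1, 5), (-1, 1, 5)]
quad, depth = project_face(face)
print(quad, depth)
```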
With the screen in its initial state displayed, when the user inputs a rotation instruction control signal from the rotation instruction input means 101, the parameter changing means 103, based on the rotation instruction control signal from the rotation instruction input means 101, reads the pre-change parameters (here, the parameters of the initial state) from the parameter holding means 102, changes the parameters, records them in the parameter holding means 102 as the post-change parameters, and outputs a counter control signal to the counter means 114. The coordinate transformation means 105 reads the post-change parameters recorded in the parameter holding means 102, transforms the initial coordinates read from the three-dimensional model coordinate holding means 104 using the post-change parameters, and outputs the resulting post-change model coordinates to the perspective transformation means 106. The perspective transformation means 106 uses these post-change model coordinates and the viewpoint coordinates to perform a perspective transformation of the three-dimensional virtual space, including the three-dimensional rotating object, onto the display screen and outputs projection plane coordinates. Thereafter, the hidden surface processing means 107, the texture mapping means 110, the rendering means 111, the frame buffer 112, and the screen display means 113 perform the same processing as in the initial display operation of the channel selection operation mode, and the screen after input of the rotation instruction control signal is displayed. For example, when the three-dimensional rotating object has the shape shown in FIG. 2 and face 1 is displayed facing the front in the initial state, inputting a rotation instruction control signal in the positive direction displays an image in which the object rotates in the direction of the arrow in FIG. 2 so that face 2 faces the front, and inputting a rotation instruction control signal in the negative direction displays an image in which the object rotates in the direction opposite to the arrow in FIG. 2 so that face 6 faces the front.
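The coordinate transformation performed when a rotation instruction arrives amounts to rotating the stored model coordinates about the central (vertical) axis by the updated angle parameter. A minimal sketch follows; the choice of axis and the angle convention are assumptions for the example.

```python
import math

def rotate_about_y(vertices, angle_deg):
    """Rotate model coordinates about the vertical (y) axis by angle_deg."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in vertices]

# Rotating one face of the object by the 60-degree step of a six-faced body.
face = [(-1.0, -1.0, 3.0), (1.0, -1.0, 3.0), (1.0, 1.0, 3.0), (-1.0, 1.0, 3.0)]
print(rotate_about_y(face, 60.0))
```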
As for the rotation instruction input means 101, as in the first embodiment, operations of the cursor keys of a remote control or keyboard, mouse movements, and the like may be associated with the rotation of the three-dimensional rotating object.
During the rotation instruction control signal input operation, the counter means 114 performs a counting operation in response to the counter control signal output by the parameter changing means 103. Specifically, for example, when a rotation instruction control signal in the positive direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that increments the count value of the counter means 114 by one, and when a rotation instruction control signal in the negative direction is input from the rotation instruction input means 101, the parameter changing means 103 outputs a counter control signal that decrements the count value of the counter means 114 by one; the counter means 114 receives this counter control signal and changes the count value it holds.
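Because each rotation instruction advances the front face by exactly one position, the count value determines the front face by a modulo operation over the number of faces. The small sketch below shows that bookkeeping; the face count and numbering are illustrative.

```python
def front_face(count, num_faces=6, initial_face=1):
    """Map the counter value to the face currently facing the viewer.

    A positive rotation increments the count and brings the next face to the
    front; a negative rotation decrements it. Faces are numbered 1..num_faces.
    """
    return (initial_face - 1 + count) % num_faces + 1

print(front_face(0))    # initial state: face 1
print(front_face(1))    # one positive rotation: face 2
print(front_face(-1))   # one negative rotation: face 6
```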
When the user inputs a selection control signal from the selection input means 115 while the face showing the desired channel is facing the front, the selected-face determining means 146 acquires the count value at that moment from the counter means 114 as count information, determines from this count information which face is facing the front at the time the selection control signal is input, and outputs this face as selected-face information.
The channel determining means 148 acquires the selected-face information from the selected-face determining means 146, refers to the face-channel correspondence information held in the correspondence table holding means 147, and outputs the channel corresponding to the face indicated by the selected-face information to the video receiving means 150 as selected-channel information.
Based on the selected-channel information, the video receiving means 150 switches the reception channel and displays the input video signal on the screen display means 113.
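Putting these steps together, the path from the button press to the channel switch can be sketched as: read the count, derive the front face, look the face up in the correspondence table, and hand the resulting channel to the receiver. The sketch below is self-contained; the table contents and face numbering are hypothetical.

```python
def on_select(count, face_to_channel, num_faces=6, initial_face=1):
    """From the rotation count, decide the channel to tune to.

    face_to_channel -- the face/channel part of the correspondence table
    Returns the selected-channel information that would be passed to the
    video receiving means.
    """
    face = (initial_face - 1 + count) % num_faces + 1   # selected-face judgment
    return face_to_channel[face]                        # channel determination

table = {1: "channel A", 2: "channel B", 3: "channel C",
         4: "channel D", 5: "channel E", 6: "channel F"}
print(on_select(0, table))    # face 1 in front -> channel A
print(on_select(1, table))    # after one positive rotation -> channel B
```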
That is, the three-dimensional rotating object displayed on the screen display means 113 is the rotating object shown in FIG. 2, and the texture information for selecting the channel corresponding to each face is displayed based on FIG. 32. For example, the data of channel A displayed on face 1 is used to construct the three-dimensional rotating object based on the information required for three-dimensional display shown in FIG. 33, using the vertex coordinates of the region cutout coordinates A along the division boundary of partial image A.
As described above, in the channel selection device of Embodiment 13, partial video signals are cut out from the input signal transmitted via broadcast or a network and pasted onto the faces of a three-dimensional rotating object; the three-dimensional rotating object is placed and displayed in the three-dimensional virtual space; the user rotates the three-dimensional rotating object by giving instructions through predetermined operations; and, when the user performs a predetermined selection operation, the face that most directly faces the user's viewpoint is determined, the channel associated with that face is selected by referring to the correspondence table, and the corresponding program is displayed on the screen. By using a three-dimensional rotating object in a three-dimensional virtual space, the device evokes the image of rolling a cylindrical rotating body in the real world, so that an intuitive operation environment familiar to users can be realized.
According to the channel selection device of Embodiment 13 described above, an input signal such as a broadcast is input to the video receiving means 150; however, the invention is not limited to this. When the region information parameter input to the correspondence table holding means 147 is multiplexed into the input signal, parameter separating means may be provided to separate the input signal and the region information parameter, the input signal being input to the video receiving means 150 and the region information parameter being input to the correspondence table holding means 147.
Industrial Applicability
As described above, the program selection execution device, the data selection execution device, the video display device, and the channel selection device according to the present invention make it possible to construct a three-dimensional rotating object and rotate it in a three-dimensional virtual space, where previously the selection display screen was displayed two-dimensionally. This evokes the image of rolling a cylindrical rotating body in the real world, so that an intuitive operation environment familiar to users can be realized.

Claims

1. A program selection execution device comprising:
selection object display means for displaying, on a display screen, an image in which a selection object, formed by pasting onto each face of a three-dimensional rotating object having a plurality of faces arranged at regular intervals about a central axis a texture indicating the content of a program, is placed in a three-dimensional virtual space;
rotation display control means for giving the selection object display means a rotation display control signal for displaying an image in which the selection object rotates in the three-dimensional virtual space about the central axis as the center of rotation;
selection input means to which a selection input for selecting a program is input;
selected-face determining means for determining, when the selection input is input from the selection input means, which of the plurality of faces constituting the three-dimensional rotating object faces the front on the display screen;
correspondence table holding means for holding information indicating the correspondence between the plurality of faces constituting the three-dimensional rotating object and programs;
program determining means for determining, based on the information held in the correspondence table holding means, which program is associated with the face determined by the selected-face determining means, and determining the program to be executed; and
program execution means for executing the program determined by the program determining means.
2. The program selection execution device according to claim 1, wherein the rotation display control means gives the rotation display control signal to the selection object display means in response to a rotation instruction input supplied from outside.
3. The program selection execution device according to claim 1, wherein the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern, and gives the rotation display control signal to the selection object display means based on the information held in the holding means.
4. The program selection execution device according to claim 2, wherein the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern, gives the rotation display control signal to the selection object display means in response to the rotation instruction input when the rotation instruction input is supplied from outside, and gives the rotation display control signal to the selection object display means based on the information held in the holding means when no rotation instruction input is supplied from outside.
5. The program selection execution device according to any one of claims 1 to 4, further comprising counter means for counting the number of times the face facing the front, among the plurality of faces constituting the three-dimensional rotating object, changes as the selection object rotates on the display screen, and outputting count information,
wherein the selected-face determining means determines the face facing the front on the display screen based on the count information output by the counter means.
6. The program selection execution device according to any one of claims 1 to 4, wherein the selected-face determining means determines the face facing the front on the display screen based on depth information obtained when the selection object display means displays the selection object on the screen.
7. The program selection execution device according to any one of claims 1 to 4, wherein the selected-face determining means determines the face facing the front on the display screen based on rotation angle information indicating the angle by which the selection object has rotated from its initial state.
8. The program selection execution device according to any one of claims 1 to 4, further comprising screen display switching means for switching the screen display, when the selected program has an execution display screen, so that the execution display screen is displayed when the program is executed.
9. A data selection execution device comprising:
selection object display means for displaying, on a display screen, an image in which a selection object, formed by pasting onto each face of a three-dimensional rotating object having a plurality of faces arranged at regular intervals about a central axis a texture indicating the content of data, is placed in a three-dimensional virtual space;
rotation display control means for giving the selection object display means a rotation display control signal for displaying an image in which the selection object rotates in the three-dimensional virtual space about the central axis as the center of rotation;
selection input means to which a selection input for selecting data is input;
selected-face determining means for determining, when the selection input is input from the selection input means, which of the plurality of faces constituting the three-dimensional rotating object faces the front on the display screen;
first correspondence table holding means for holding information indicating the correspondence between the plurality of faces constituting the three-dimensional rotating object and data;
data determining means for determining, based on the information held in the first correspondence table holding means, which data are associated with the face determined by the selected-face determining means, and determining the data to be opened;
second correspondence table holding means for holding information indicating the correspondence between data and programs that open the data;
program determining means for determining, based on the information held in the second correspondence table holding means, the program to be executed for opening the data determined by the data determining means, and determining the program to be executed; and
program execution means for executing the program determined by the program determining means and opening the data determined by the data determining means.
10. The data selection execution device according to claim 9, wherein the rotation display control means gives the rotation display control signal to the selection object display means in response to a rotation instruction input supplied from outside.
11. The data selection execution device according to claim 9, wherein the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern, and gives the rotation display control signal to the selection object display means based on the information held in the holding means.
12. The data selection execution device according to claim 10, wherein the rotation display control means comprises holding means for holding information for rotating the selection object in a predetermined pattern, gives the rotation display control signal to the selection object display means in response to the rotation instruction input when the rotation instruction input is supplied from outside, and gives the rotation display control signal to the selection object display means based on the information held in the holding means when no rotation instruction input is supplied from outside.
13. The data selection execution device according to any one of claims 9 to 12, further comprising counter means for counting the number of times the face facing the front, among the plurality of faces constituting the three-dimensional rotating object, changes as the selection object rotates on the display screen, and outputting count information,
wherein the selected-face determining means determines the face facing the front on the display screen based on the count information output by the counter means.
14. The data selection execution device according to any one of claims 9 to 12, wherein the selected-face determining means determines the face facing the front on the display screen based on depth information obtained when the selection object display means displays the selection object on the screen.
15. The data selection execution device according to any one of claims 9 to 12, wherein the selected-face determining means determines the face facing the front on the display screen based on rotation angle information indicating the angle by which the selection object has rotated from its initial state.
16. The data selection execution device according to any one of claims 9 to 15, further comprising screen display switching means for switching the screen display, when the program to be executed has an execution display screen, so that the execution display screen is displayed when the program is executed.
17. The data selection execution device according to any one of claims 9 to 16, wherein, when the data associated with each face of the three-dimensional rotating object are moving image data, the selection object display means pastes an image obtained by reproducing the moving image data onto the corresponding face as the texture.
18. The data selection execution device according to claim 17, wherein the selection object display means pastes, as a texture onto the face that faces the front on the display screen among the plurality of faces constituting the three-dimensional rotating object, a moving image obtained by reproducing the moving image data associated with that face, and pastes, as a texture onto each face that does not face the front on the display screen among the plurality of faces constituting the three-dimensional rotating object, a still image extracted from the moving image obtained by reproducing the moving image data associated with that face.
19. The data selection execution device according to any one of claims 9 to 18, further comprising data reproduction display means for reproducing and displaying, together with the display of the selection object, the associated data when the data associated with each face of the three-dimensional rotating object are audio data, moving image data, or moving image data accompanied by audio data,
wherein the data reproduction display means performs the reproduction display such that, when the face facing most directly to the front on the display screen changes, due to the rotation of the selection object, from a first face to a second face adjacent to the first face, the reproduction display of the data associated with the first face is faded out and the reproduction display of the data associated with the second face is faded in.
20. The data selection execution device according to any one of claims 9 to 18, further comprising data reproduction display means for reproducing and displaying, together with the display of the selection object, the associated data when the data associated with each face of the three-dimensional rotating object are data including audio data,
wherein the data reproduction display means performs the reproduction display such that, when the face facing most directly to the front on the display screen changes, due to the rotation of the selection object, from a first face to a second face adjacent to the first face, the reproduction sound source position of the data associated with the first face and the reproduction sound source position of the data associated with the second face are moved in accordance with the movement of the positions of the first and second faces on the display screen.
21. A video display device comprising:
video receiving means for receiving an input signal transmitted via broadcast or a network and outputting an input video signal;
memory means for holding the input video signal;
memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cutout information indicating positions at which regions used as textures are cut out from the input video signal, and reading a partial video signal from the memory means;
parameter separating means for separating, from parameter information consisting of three-dimensional coordinate information and the region cutout information, the region cutout information and the three-dimensional coordinate information, outputting the region cutout information to the memory input/output control means and outputting the three-dimensional coordinate information to object position determining means;
the object position determining means, which places a three-dimensional object in a three-dimensional virtual space from the three-dimensional coordinate information and outputs object coordinate information of the three-dimensional object in the three-dimensional virtual space;
perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information;
rasterizing means for texture-mapping the partial video signal onto a predetermined face of the three-dimensional object based on the projection plane coordinate information, and generating and outputting a three-dimensional video signal;
frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and
video display means for displaying the output video signal.
22. The video display device according to claim 21, wherein the parameter information input by the parameter separating means changes in time series.
23. The video display device according to claim 21, characterized in that affine transformation means is provided in place of the perspective projection conversion means.
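A small sketch contrasting the two mapping options named in claims 21 and 23: a perspective projection divides by depth, while an affine map scales, rotates, and translates the object coordinates without depth-dependent foreshortening. The matrix values are illustrative assumptions, not values from the patent.

```python
# Sketch: perspective projection (claim 21) vs. affine transformation (claim 23).
import math

def perspective(vertex, focal=2.0):
    x, y, z = vertex
    return (focal * x / z, focal * y / z)        # foreshortening: farther -> smaller

def affine(vertex, angle_deg=30.0, scale=0.5, tx=0.1, ty=-0.2):
    x, y, _z = vertex                            # depth is ignored by the affine map
    a = math.radians(angle_deg)
    xr = scale * (x * math.cos(a) - y * math.sin(a)) + tx
    yr = scale * (x * math.sin(a) + y * math.cos(a)) + ty
    return (xr, yr)

if __name__ == "__main__":
    for v in [(-1, -1, 2), (1, 1, 6)]:
        print("perspective:", perspective(v), " affine:", affine(v))
```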
24. A video display device characterized by comprising:
video receiving means for receiving an input signal composed of a predetermined number of partial videos and transmitted via a broadcast or a network, and outputting an input video signal;
memory means for holding the input video signal;
memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cutout information that indicates the positions at which regions to be used as textures are cut out of the input video signal and that corresponds to the predetermined number of partial videos, and reading partial video signals from the memory means;
parameter separation means for separating, on the basis of parameter output control information, the region cutout information and three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos and the region cutout information, outputting the region cutout information to the memory input/output control means, and outputting the three-dimensional coordinate information to object position determination means;
object position determination means for arranging three-dimensional objects in a three-dimensional virtual space on the basis of the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space;
perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information;
rasterizing means for, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, outputting the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal;
frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and
video display means for displaying the output video signal.
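The following sketch illustrates the interaction described in claim 24: the rasteriser issues a parameter output request once per partial video, receives that video's cutout and 3D coordinates from the parameter separator, and texture-maps one face at a time. The callback protocol and data shapes are assumptions made for illustration.

```python
# Sketch: one parameter-output request per partial video, driven by the rasteriser.

def make_parameter_separator(parameter_info):
    """parameter_info: list of dicts, one entry per partial video."""
    def on_output_request(index):
        entry = parameter_info[index]
        # Cutout goes to memory I/O control, vertices to object placement.
        return entry["cutout"], entry["vertices"]
    return on_output_request

def rasterize_all(num_partial_videos, request_parameters, map_face):
    frame = []
    for i in range(num_partial_videos):           # one request per partial video
        cutout, vertices = request_parameters(i)  # "parameter output control"
        frame.append(map_face(cutout, vertices))  # texture-map this face
    return frame                                  # composed 3D video signal

if __name__ == "__main__":
    info = [{"cutout": (0, 0, 4, 4), "vertices": [(0, 0, 2)]},
            {"cutout": (4, 0, 4, 4), "vertices": [(1, 0, 2)]}]
    separator = make_parameter_separator(info)
    out = rasterize_all(2, separator, lambda c, v: {"cutout": c, "vertices": v})
    print(out)
```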
25. The video display device according to claim 24, characterized in that the parameter information input to the parameter separation means changes in time series.
26. The video display device according to claim 24, characterized in that affine transformation means is provided in place of the perspective projection conversion means.
27. A video display device characterized by comprising:
video receiving means for receiving an input signal composed of a predetermined number of partial videos and transmitted via a broadcast or a network, and outputting an input video signal;
region separation means for separating regions from the input video signal in accordance with region cutout information that indicates the positions at which regions to be used as textures are cut out of the input video signal and that corresponds to the predetermined number of partial videos, and outputting a memory storage video signal;
memory means for holding the memory storage video signal;
memory input/output control means for writing the memory storage video signal into the memory means, outputting a memory control signal to the memory means in accordance with the region cutout information, and reading partial video signals from the memory means;
parameter separation means for separating, on the basis of parameter output control information, the region cutout information and three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos and the region cutout information, outputting the region cutout information to the memory input/output control means, and outputting the three-dimensional coordinate information to object position determination means;
object position determination means for arranging three-dimensional objects in a three-dimensional virtual space on the basis of the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space;
perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information;
rasterizing means for, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, outputting the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal;
frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and
video display means for displaying the output video signal.
28. A video display device characterized by comprising:
video receiving means for receiving an input signal composed of a predetermined number of partial videos and transmitted via a broadcast or a network, and outputting an input video signal;
memory means for holding the input video signal;
memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cutout information that indicates the positions at which regions to be used as textures are cut out of the input video signal, and reading partial video signals from the memory means;
video analysis means for determining the predetermined number from the input video signal and outputting region count information;
parameter generation means for generating, on the basis of the region count information, parameter information composed of three-dimensional coordinate information and region cutout information, and, on the basis of parameter output control information, outputting the region cutout information to the memory input/output control means and the three-dimensional coordinate information to object position determination means;
object position determination means for arranging three-dimensional objects in a three-dimensional virtual space on the basis of the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space;
perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information;
rasterizing means for, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, outputting the parameter output control information to the parameter generation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal;
frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and
video display means for displaying the output video signal.
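A rough sketch of the claim-28 idea: a video-analysis step infers how many partial videos the input frame contains, and a parameter generator then produces one region-cutout rectangle and one 3D placement per partial video. The patent does not specify how the count is detected; here it simply falls out of an assumed tile size, and the square mosaic layout and prism-like placement are illustrative assumptions only.

```python
# Sketch: region-count analysis (stand-in) followed by parameter generation.
import math

def analyse_region_count(frame_w, frame_h, tile_w, tile_h):
    """Stand-in for the video-analysis means: count tiles of an assumed size."""
    return (frame_w // tile_w) * (frame_h // tile_h)

def generate_parameters(count, frame_w, frame_h):
    """Parameter generation means: one cutout rectangle and one 3D placement per region."""
    cols = int(math.ceil(math.sqrt(count)))
    tile_w, tile_h = frame_w // cols, frame_h // cols
    params = []
    for i in range(count):
        cx, cy = (i % cols) * tile_w, (i // cols) * tile_h
        angle = 2 * math.pi * i / count           # place each face around a virtual prism
        vertices = [(math.cos(angle), 0.0, 3.0 + math.sin(angle))]
        params.append({"cutout": (cx, cy, tile_w, tile_h), "vertices": vertices})
    return params

if __name__ == "__main__":
    n = analyse_region_count(720, 480, 240, 240)  # 3 x 2 mosaic -> 6 partial videos
    for p in generate_parameters(n, 720, 480):
        print(p["cutout"], p["vertices"][0])
```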
29. A video display device characterized by comprising:
video receiving means for selectively receiving, on the basis of channel information, an input signal composed of a predetermined number of partial videos and transmitted via a broadcast or a network, and outputting an input video signal;
memory means for holding the input video signal;
memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cutout information that indicates the positions at which regions to be used as textures are cut out of the input video signal and that corresponds to the predetermined number of partial videos, and reading partial video signals from the memory means;
parameter separation means for separating, on the basis of parameter output control information, the region cutout information and three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos, the region cutout information, and channel correspondence information indicating the correspondence between objects and channels, outputting the region cutout information to the memory input/output control means, the three-dimensional coordinate information to object position determination means, and the channel correspondence information to channel determination means;
object position determination means for arranging three-dimensional objects in a three-dimensional virtual space on the basis of the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space, and also outputting, in accordance with a user input, object arrangement order information derived from the object coordinate information;
object position comparison means for comparing the positions of the objects by using the object arrangement order information and outputting, to the channel determination means, selected object information identifying an object selected under a predetermined condition;
channel determination means for determining, from the selected object information and the channel correspondence information, the channel corresponding to the selected object and outputting the channel information;
perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information;
rasterizing means for, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, outputting the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal;
frame memory means for holding the three-dimensional video signal and outputting an output video signal at a predetermined timing; and
video display means for switching between and displaying the output video signal and the input video signal output from the video receiving means.
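A minimal sketch of the selection path in claim 29: compare the placed objects' positions, pick one under a fixed rule (here, nearest to the viewpoint, as in claim 30), and map the chosen object to a channel via the channel correspondence information. The data shapes and names are assumptions for illustration.

```python
# Sketch: object position comparison followed by channel determination.

def select_object(object_coords):
    """object_coords: {object_id: (x, y, z)} in viewing coordinates, viewpoint near z=0.
    The object with the smallest depth is treated as the selected one."""
    return min(object_coords, key=lambda oid: object_coords[oid][2])

def determine_channel(selected_id, channel_map):
    """channel_map plays the role of the channel correspondence information."""
    return channel_map[selected_id]

if __name__ == "__main__":
    coords = {"objA": (0.0, 0.0, 5.0), "objB": (1.0, 0.0, 3.0), "objC": (-1.0, 0.0, 7.0)}
    channel_map = {"objA": 4, "objB": 6, "objC": 8}
    chosen = select_object(coords)
    print("selected:", chosen, "-> channel", determine_channel(chosen, channel_map))
```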
30. The video display device according to claim 29, characterized in that the object position determination means selects the face whose position is nearest to the viewpoint.
31. A video display device characterized by comprising:
first video receiving means for receiving a first input signal transmitted via a broadcast or a network and outputting a first input video signal composed of a predetermined number of partial videos;
second video receiving means for selectively receiving, on the basis of channel information, a second input signal transmitted via a broadcast or a network and outputting a second input video signal;
memory means for holding the first input video signal;
memory input/output control means for writing the first input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cutout information that indicates the positions at which regions to be used as textures are cut out of the first input video signal and that corresponds to the predetermined number of partial videos, and reading partial video signals from the memory means;
parameter separation means for separating, on the basis of parameter output control information, the region cutout information and three-dimensional coordinate information from parameter information composed of three-dimensional coordinate information corresponding to the predetermined number of partial videos, the region cutout information, and channel correspondence information indicating the correspondence between objects and channels, outputting the region cutout information to the memory input/output control means, the three-dimensional coordinate information to object position determination means, and the channel correspondence information to channel determination means;
object position determination means for arranging three-dimensional objects in a three-dimensional virtual space on the basis of the three-dimensional coordinate information and outputting object coordinate information of the three-dimensional objects in the three-dimensional virtual space, and also outputting, in accordance with a user input, object arrangement order information derived from the object coordinate information;
object position comparison means for comparing the positions of the objects by using the object arrangement order information and outputting, to the channel determination means, selected object information identifying an object selected under a predetermined condition;
channel determination means for determining, from the selected object information and the channel correspondence information, the channel corresponding to the selected object and outputting the channel information;
perspective projection conversion means for perspectively projecting the object coordinate information onto a display projection plane and converting it into display projection plane coordinate information;
rasterizing means for, when texture-mapping the partial video signals onto predetermined faces of the three-dimensional objects on the basis of the projection plane coordinate information, outputting the parameter output control information to the parameter separation means a number of times corresponding to the predetermined number of partial videos, and generating and outputting a three-dimensional video signal;
frame memory means for holding the three-dimensional video signal and outputting a three-dimensional output video signal at a predetermined timing;
enlargement/deformation means for enlarging and deforming a partial video signal and outputting a partial video enlargement/deformation signal;
video switching means for switching between the three-dimensional output video signal and the partial video enlargement/deformation signal at a predetermined timing and outputting an output video signal; and
video display means for switching between and displaying the output video signal and the second input video signal.
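A sketch of the hand-off in claim 31: the selected partial video is enlarged to the output size and, at a chosen switching time, the display source changes from the rendered 3D video to the enlarged partial video. Nearest-neighbour scaling and the frame-count switch are assumptions for illustration, not the patent's method.

```python
# Sketch: enlargement of a partial video and timed switching of the display source.

def enlarge(partial, out_w, out_h):
    """Enlargement/deformation means: scale a small frame up with nearest-neighbour sampling."""
    in_h, in_w = len(partial), len(partial[0])
    return [[partial[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def output_frame(frame_index, switch_at, three_d_frame, enlarged_frame):
    """Video switching means: before switch_at show the 3D scene, afterwards the zoom."""
    return three_d_frame if frame_index < switch_at else enlarged_frame

if __name__ == "__main__":
    partial = [[1, 2], [3, 4]]
    zoomed = enlarge(partial, 4, 4)
    print(output_frame(10, 30, "3D scene frame", zoomed))   # still showing the 3D scene
    print(output_frame(45, 30, "3D scene frame", zoomed))   # switched to the enlarged video
```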
32. A channel selection device characterized by comprising:
video receiving means for receiving an input signal transmitted via a broadcast or a network, selecting a channel on the basis of selected channel information output from channel determination means, and outputting an input video signal;
memory means for holding the input video signal;
memory input/output control means for writing the input video signal into the memory means, outputting a memory control signal to the memory means in accordance with region cutout information input from correspondence table holding means, and reading partial video signals from the memory means;
selection object display means for displaying, on a display screen, an image in which a selection object is arranged in a three-dimensional virtual space, the selection object being a three-dimensional rotating object whose plural faces are arranged at regular intervals about a central axis, a partial image showing the content of a channel being selected and pasted as a texture onto each of the faces;
rotation display control means for supplying the selection object display means with a rotation display control signal for displaying an image in which the selection object rotates in the three-dimensional virtual space about the central axis as the center of rotation;
selection input means to which a selection input for selecting a channel is input;
selection face determination means for determining, when the selection input is input from the selection input means, which of the plural faces constituting the three-dimensional rotating object is facing the front on the display screen;
correspondence table holding means for holding information indicating the correspondence among the plural faces constituting the three-dimensional rotating object, texture information of the partial images corresponding to the channels, and region cutout information for generating the partial image corresponding to each channel on the basis of a region information parameter input from outside; and
channel determination means for determining, on the basis of the information held in the correspondence table holding means, which channel is associated with the face determined by the selection face determination means, deciding the channel to be switched to and displayed, and outputting the selected channel information to the video receiving means.
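A minimal sketch of the front-face test and table lookup in claim 32: for a prism whose faces are spaced evenly around the rotation axis, the face turned most toward the viewer at the moment of the selection input is taken as the front face, and a correspondence table maps it to a channel. The geometry convention, the table layout, and all names are assumptions for illustration.

```python
# Sketch: determine the front face of the rotating selection object, then look up its channel.
import math

def front_face(rotation_deg, num_faces):
    """Index of the face turned most toward the viewer. Convention assumed for this
    sketch: face i is frontmost when the body has rotated by i * (360 / num_faces)."""
    step = 360.0 / num_faces
    return int(round((rotation_deg % 360.0) / step)) % num_faces

def channel_for_face(face_index, correspondence):
    """correspondence: {face_index: {"channel": ..., "cutout": (x, y, w, h)}}."""
    return correspondence[face_index]["channel"]

if __name__ == "__main__":
    table = {i: {"channel": 10 + i, "cutout": (i * 120, 0, 120, 90)} for i in range(6)}
    for angle in (0.0, 55.0, 190.0):
        f = front_face(angle, 6)
        print(f"rotation {angle:5.1f} deg -> face {f} -> channel {channel_for_face(f, table)}")
```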
33. The channel selection device according to claim 32, characterized by comprising parameter separation means for separating the region information parameter from the input signal when the region information parameter is input multiplexed with the input signal.
PCT/JP1999/007307 1998-12-25 1999-12-24 Program selective execution device, data selective execution device, image display device, and channel selection device WO2000039662A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP10368894A JP2000196971A (en) 1998-12-25 1998-12-25 Video display device
JP10/368894 1998-12-25
JP11/110098 1999-04-16
JP11009899A JP3673425B2 (en) 1999-04-16 1999-04-16 Program selection execution device and data selection execution device

Publications (1)

Publication Number Publication Date
WO2000039662A1 true WO2000039662A1 (en) 2000-07-06

Family

ID=26449778

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1999/007307 WO2000039662A1 (en) 1998-12-25 1999-12-24 Program selective execution device, data selective execution device, image display device, and channel selection device

Country Status (1)

Country Link
WO (1) WO2000039662A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1257121A3 (en) * 2001-05-08 2004-04-21 Canon Kabushiki Kaisha Display control apparatus
US7590995B2 (en) 2001-03-05 2009-09-15 Panasonic Corporation EPG display apparatus, EPG display method, medium, and program
US20120223884A1 (en) * 2011-03-01 2012-09-06 Qualcomm Incorporated System and method to display content
CN102768613A (en) * 2011-05-06 2012-11-07 宏达国际电子股份有限公司 System and method for interface management, and computer program product therefor
CN109903380A (en) * 2019-03-14 2019-06-18 广州世峰数字科技有限公司 A kind of three-dimensional building model and information show interface layout system

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02288600A (en) * 1989-04-28 1990-11-28 Hitachi Ltd Information processing system
JPH03109810A (en) * 1989-09-25 1991-05-09 Toshiba Corp Automatic channel search/storage device
JPH05328244A (en) * 1992-05-21 1993-12-10 Hitachi Ltd Channel selection device for television receiver
JPH07105404A (en) * 1993-10-04 1995-04-21 Ricoh Co Ltd Stereoscopic image processor and its processing method
JPH07114451A (en) * 1993-10-19 1995-05-02 Canon Inc Method and device for selecting three-dimension menu
JPH0822555A (en) * 1994-07-08 1996-01-23 Hitachi Ltd Method and device for graphic plotting
JPH08149384A (en) * 1994-11-18 1996-06-07 Sony Corp Image display controller
JPH08297601A (en) * 1995-04-26 1996-11-12 Matsushita Electric Ind Co Ltd Device and method for file management
JPH09134269A (en) * 1995-11-10 1997-05-20 Matsushita Electric Ind Co Ltd Display controller
JPH09190544A (en) * 1996-01-12 1997-07-22 Hitachi Ltd Acoustic presentation method for image data
JPH09222981A (en) * 1996-02-14 1997-08-26 Casio Comput Co Ltd Information processor
JPH09307827A (en) * 1996-05-16 1997-11-28 Sharp Corp Channel selection device
JPH1051709A (en) * 1996-08-05 1998-02-20 Hitachi Ltd Television receiver
JPH1069364A (en) * 1996-08-28 1998-03-10 Fuji Electric Co Ltd Window screen selection system
JPH10145699A (en) * 1996-11-05 1998-05-29 Toshiba Corp Multi-screen display television receiver

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7590995B2 (en) 2001-03-05 2009-09-15 Panasonic Corporation EPG display apparatus, EPG display method, medium, and program
EP1257121A3 (en) * 2001-05-08 2004-04-21 Canon Kabushiki Kaisha Display control apparatus
US20120223884A1 (en) * 2011-03-01 2012-09-06 Qualcomm Incorporated System and method to display content
CN103430126A (en) * 2011-03-01 2013-12-04 高通股份有限公司 System and method to display content
US9285883B2 (en) * 2011-03-01 2016-03-15 Qualcomm Incorporated System and method to display content based on viewing orientation
CN102768613A (en) * 2011-05-06 2012-11-07 宏达国际电子股份有限公司 System and method for interface management, and computer program product therefor
EP2521020A1 (en) * 2011-05-06 2012-11-07 HTC Corporation Systems and methods for interface management
CN109903380A (en) * 2019-03-14 2019-06-18 广州世峰数字科技有限公司 A kind of three-dimensional building model and information show interface layout system

Similar Documents

Publication Publication Date Title
JP3673425B2 (en) Program selection execution device and data selection execution device
US11520479B2 (en) Mass media presentations with synchronized audio reactions
US10936270B2 (en) Presentation facilitation
JP4638913B2 (en) Multi-plane 3D user interface
US6363404B1 (en) Three-dimensional models with markup documents as texture
JP5563564B2 (en) Method and system for switching between video sources
US8205169B1 (en) Multiple editor user interface
US8471873B2 (en) Enhanced UI operations leveraging derivative visual representation
WO2005013618A1 (en) Live streaming broadcast method, live streaming broadcast device, live streaming broadcast system, program, recording medium, broadcast method, and broadcast device
WO2018173791A1 (en) Image processing device and method, and program
JPH07319899A (en) Controller for turning and displaying of page
US20050028110A1 (en) Selecting functions in context
WO2000039662A1 (en) Program selective execution device, data selective execution device, image display device, and channel selection device
WO2020131536A1 (en) Interactive viewing and editing system
US20090241059A1 (en) Event driven smooth panning in a computer accessibility application
CN100419666C (en) Screen synchronous switching-over control method
US6657644B1 (en) Layer viewport for enhanced viewing in layered drawings
JP4045768B2 (en) Video processing device
JPH11146299A (en) Device and method for displaying data television broadcasting
JP2007086616A (en) Presentation support system
JPH06311510A (en) Conference supporting system for remote location
JP5200555B2 (en) Recording / reproducing apparatus and program
WO2018173790A1 (en) Image processing device, method, and program
JPH08115335A (en) Multimedia processor
US20220245887A1 (en) Image processing device and image processing method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CN KR US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 09869175

Country of ref document: US

122 Ep: pct application non-entry in european phase