US20120094773A1 - Storage medium having stored thereon game program, image processing apparatus, image processing system, and image processing method - Google Patents

Storage medium having stored thereon game program, image processing apparatus, image processing system, and image processing method

Info

Publication number
US20120094773A1
Authority
US
United States
Prior art keywords
image
game
character object
face image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/080,989
Inventor
Toshiaki Suzuki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nintendo Co Ltd
Original Assignee
Nintendo Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nintendo Co Ltd filed Critical Nintendo Co Ltd
Assigned to NINTENDO CO., LTD. Assignment of assignors interest (see document for details). Assignors: SUZUKI, TOSHIAKI
Publication of US20120094773A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F 13/655 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/70 Game security or game management aspects
    • A63F 13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F 13/211 Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • A63F 13/25 Output arrangements for video game devices
    • A63F 13/26 Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
    • A63F 13/45 Controlling the progress of the video game
    • A63F 13/46 Computing the game score
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 Changing parameters of virtual cameras
    • A63F 13/5255 Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • A63F 13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/837 Shooting of targets
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/105 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals using inertial sensors, e.g. accelerometers, gyroscopes
    • A63F 2300/1087 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F 2300/1093 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera using visible light
    • A63F 2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F 2300/55 Details of game data or player data management
    • A63F 2300/5546 Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F 2300/5553 Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/61 Score computation
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6653 Methods for processing data by generating or executing the game program for rendering three dimensional images for altering the visibility of an object, e.g. preventing the occlusion of an object, partially hiding an object
    • A63F 2300/69 Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F 2300/695 Imported photos, e.g. of the player
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F 2300/8076 Shooting

Definitions

  • the present invention relates to a storage medium having stored thereon a game program, an image processing apparatus, an image processing system, and an image processing method.
  • An image of a face area (hereinafter a “face image”) is an image of the most characteristic part of a living thing, and therefore is very useful as information for reflecting the real world on a virtual world.
  • the conventional techniques, however, do not make sufficient use of this feature of a face image, namely that it makes it possible to reflect a situation in the real world on a virtual world.
  • the present invention may employ, for example, the following configurations. It is understood that when the description of the scope of the appended claims is interpreted, the scope should be interpreted only by the description of the scope of the appended claims. If the description of the scope of the appended claims contradicts the description of these columns, the description of the scope of the appended claims has priority.
  • a configuration example according to the present invention is a computer-readable storage medium having stored thereon a game program that is executed by a computer of a game apparatus that displays an image on a display device.
  • the game program causes the computer to execute an image acquisition step, a step of creating a first character object, a first game processing step, a determination step, and a step of saving in a second storage area in an accumulating manner.
  • the image acquisition step acquires a face image and temporarily stores the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game.
  • the step of creating a first character object creates a first character object, the first character object being a character object including the face image stored in the first storage area.
  • in the predetermined game, the first game processing step advances a game related to the first character object in accordance with an operation of the player.
  • the determination step determines a success in the game related to the first character object.
  • at least when a success in the game has been determined in the determination step, the step of saving in a second storage area in an accumulating manner saves the face image stored in the first storage area in the second storage area.
  • unless succeeding in the game, the player cannot save in the second storage area the face image acquired during the game or before the start of the game and temporarily stored in the first storage area, and therefore enjoys a sense of tension.
  • the face image may be saved in the second storage area when a success in the game has been determined, whereby, for example, it is possible to utilize the acquired face image even after the game of the first game processing step has ended. This causes the player to tackle the game of the first game processing step very enthusiastically and with concentration.
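  • As an illustration of the flow described above, the following C++ sketch shows a face image being acquired into a temporary first storage area, a first character object being created from it, and the face image being copied into an accumulating second storage area only when the game is determined to be successful. All type and function names (FaceImage, Storage, runGameWith, and so on) are hypothetical and are not taken from this disclosure.

        #include <optional>
        #include <string>
        #include <vector>

        // Hypothetical face image type; in practice this would hold pixel data.
        struct FaceImage { std::string label; };

        struct CharacterObject { FaceImage face; };

        // First storage area: temporary (discarded when the game ends).
        // Second storage area: accumulating (face images are appended and kept).
        struct Storage {
            std::optional<FaceImage> first;   // temporary storage
            std::vector<FaceImage>   second;  // accumulating storage
        };

        // Placeholder for the image acquisition step (e.g., from a camera).
        FaceImage acquireFaceImage() { return FaceImage{"player"}; }

        // Placeholder for the first game processing step; returns true on success.
        bool runGameWith(const CharacterObject& hero) { return !hero.face.label.empty(); }

        int main() {
            Storage storage;

            // Image acquisition step: acquire and temporarily store the face image.
            storage.first = acquireFaceImage();

            // Step of creating a first character object including that face image.
            CharacterObject first{*storage.first};

            // First game processing step and determination step.
            bool success = runGameWith(first);

            // Saving step: only on success is the face image accumulated.
            if (success) {
                storage.second.push_back(*storage.first);
            }

            // The temporary copy is discarded either way.
            storage.first.reset();
        }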
  • the face image may be acquired and temporarily stored in the first storage area.
  • the game program may further cause the computer to execute a step of creating a second character object.
  • the step of creating a second character object creates a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area.
  • a game related to the second character object may be additionally advanced in accordance with an operation of the player.
  • the game program may further cause the computer to execute a step of creating a second character object and a second game processing step.
  • the step of creating a second character object creates a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area.
  • the second game processing step advances a game related to the second character object in accordance with an operation of the player.
  • the player can create the second character object including a face image selected from among the face images saved in the second storage area, and execute the game of the second game processing step. That is, the player can enjoy the game of the second game processing step by utilizing the face images stored by succeeding in the game of the first game processing step.
  • the character object that appears in the game includes a face image selected automatically or by an operation of the player, and therefore, the player can simply introduce a mental picture of the real world into a virtual world.
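  • A minimal sketch of the step of creating a second character object, under the assumption that the accumulated face images are simply held in a container and that the selection is either an index chosen by the player or an automatic choice; the names used here are illustrative only.

        #include <algorithm>
        #include <cstddef>
        #include <string>
        #include <vector>

        struct FaceImage { std::string label; };
        struct CharacterObject { FaceImage face; };

        // Step of creating a second character object: a face image is selected
        // automatically or by the player from among the face images saved in the
        // second storage area (assumed non-empty here).
        CharacterObject createSecondCharacter(const std::vector<FaceImage>& secondStorage,
                                              std::size_t playerChoice,
                                              bool chooseAutomatically) {
            std::size_t index = chooseAutomatically
                ? 0  // e.g., the first saved face image; any automatic rule would do
                : std::min(playerChoice, secondStorage.size() - 1);
            return CharacterObject{secondStorage[index]};
        }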
  • the game apparatus may be capable of acquiring an image from a capturing device.
  • the face image may be acquired from the capturing device before the start of the predetermined game.
  • the game apparatus may be capable of acquiring an image from a first capturing device that captures a front direction of a display surface of the display device, and an image from a second capturing device that captures a direction of a back surface of the display surface of the display device, the first capturing device and the second capturing device serving as the capturing device.
  • the image acquisition step may include: a step of acquiring a face image captured by the first capturing device in preference to acquiring a face image captured by the second capturing device; and a step of, after the face image from the first capturing device has been saved in the second storage area, permitting the face image captured by the second capturing device to be acquired.
  • the acquisition of a face image using the first capturing device is preferentially made, the first capturing device capturing the front direction of the display surface of the display device.
  • This increases the possibility that a face image of the player of the game apparatus or the like who views the display surface of the display device is preferentially acquired.
  • This increases the possibility of restricting the acquisition of an image with the second capturing device, which captures the direction of the back surface of the display surface of the display device, in the state where the player of the game apparatus or the like is not specified.
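  • The gating described above might be expressed as follows; recording which capturing device produced each saved face image is an assumption made for illustration, and the function names are hypothetical.

        #include <vector>

        // Hypothetical record of how a saved face image was obtained.
        enum class Source { InnerCamera, OuterCamera, Other };

        struct SavedFace { Source source; };

        // The second (outer) capturing device is permitted only after at least one
        // face image captured by the first (inner) device has been saved.
        bool outerCameraPermitted(const std::vector<SavedFace>& secondStorage) {
            for (const SavedFace& f : secondStorage) {
                if (f.source == Source::InnerCamera) return true;
            }
            return false;
        }

        // The acquisition step prefers the inner camera; the outer camera is used
        // only when it has been unlocked by the rule above.
        Source chooseCaptureDevice(const std::vector<SavedFace>& secondStorage,
                                   bool playerWantsOuterCamera) {
            if (playerWantsOuterCamera && outerCameraPermitted(secondStorage)) {
                return Source::OuterCamera;
            }
            return Source::InnerCamera;
        }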
  • the game program may further cause the computer to execute a step of specifying attributes of the face images and a step of prompting the player to acquire a face image.
  • the step of specifying attributes of the face images specifies attributes of the face images saved in the second storage area.
  • the step of prompting the player to acquire a face image prompts the player to acquire a face image corresponding to an attribute different from the attributes specified from the face images saved in the second storage area.
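  • A sketch of the attribute-based prompting, assuming that attributes are simple labels and that the already acquired face images are tallied per attribute (comparable to the aggregate table of FIG. 13); the concrete attribute values are not specified by this disclosure.

        #include <map>
        #include <optional>
        #include <string>
        #include <vector>

        // Hypothetical attribute label; the disclosure only states that face
        // images have attributes, not what those attributes are.
        using Attribute = std::string;

        struct FaceImage { Attribute attribute; };

        // Step of specifying attributes: tally how many saved face images fall
        // under each attribute.
        std::map<Attribute, int> tallyAttributes(const std::vector<FaceImage>& secondStorage) {
            std::map<Attribute, int> counts;
            for (const FaceImage& f : secondStorage) counts[f.attribute]++;
            return counts;
        }

        // Step of prompting the player: pick an attribute that is not yet
        // represented among the saved face images, if any.
        std::optional<Attribute> attributeToPrompt(const std::vector<FaceImage>& secondStorage,
                                                   const std::vector<Attribute>& allAttributes) {
            std::map<Attribute, int> counts = tallyAttributes(secondStorage);
            for (const Attribute& a : allAttributes) {
                if (counts.find(a) == counts.end()) return a;  // missing attribute
            }
            return std::nullopt;  // every attribute already represented
        }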
  • the first game processing step may include a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player.
  • an attack on the first character object may be a valid attack for succeeding in the game related to the first character object
  • an attack on the second character object may be an invalid attack for succeeding in the game related to the first character object.
  • the player needs to refrain from attacking the second character object in the game of the first game processing step, and therefore selects and attacks the first character object.
  • the player needs to correctly recognize the first character object, and requires concentration.
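  • The valid/invalid attack rule can be summarized as follows; representing progress toward success as a hit counter with a threshold is an assumption for illustration only.

        // Only attacks on the first character object advance the game toward
        // success; attacks on the second character object are ignored for that
        // purpose.
        enum class Target { FirstCharacter, SecondCharacter };

        struct GameState {
            int validHits = 0;            // assumed progress toward succeeding in the game
            int hitsNeededForSuccess = 5; // assumed threshold
        };

        // Returns true once a success in the game is determined.
        bool applyAttack(GameState& state, Target target) {
            if (target == Target::FirstCharacter) {
                state.validHits++;
            }
            return state.validHits >= state.hitsNeededForSuccess;
        }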
  • the game program may further cause the computer to execute a step of creating a third character object.
  • the step of creating a third character object creates a third character object, the third character object being a character object including a face image different from the face image included in the second character object.
  • the second game processing step may include a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player.
  • an attack on the second character object may be a valid attack for succeeding in the game related to the second character object
  • an attack on the third character object may be an invalid attack for succeeding in the game related to the second character object.
  • the player needs to refrain from attacking the third character object in the game of the second game processing step, and therefore selects and attacks the second character object.
  • the player needs to correctly recognize the second character object, and requires concentration.
  • the game program may further cause the computer to execute a step of creating a third character object and a step of creating a fourth character object.
  • the step of creating a third character object creates a third character object, the third character object being a character object including the face image stored in the first storage area and being smaller in dimensions than the first character object.
  • the step of creating a fourth character object creates a fourth character object, the fourth character object being a character object including a face image different from the face image stored in the first storage area and being smaller in dimensions than the first character object.
  • the first game processing step may include: a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the first character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the first character object approaches the original face image stored in the first storage area.
  • the player needs to correctly recognize the first character object, the third character object, and the fourth character object, and requires concentration.
  • when the acquired face image is a face image of a person in a close relationship with the player, the player can tackle the game of the first game processing step all the more enthusiastically.
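  • A sketch of the deformation mechanic described above, under the assumption that the deformation is tracked as a single clamped level that is later applied to the face image (for example as a morph amount); restoring the original image on success, described further below, simply resets the level.

        #include <algorithm>

        struct FirstCharacter {
            int deformationLevel = 0;        // 0 = original face image
            static constexpr int kMaxLevel = 10;
        };

        enum class SmallTarget { Third, Fourth };

        void onSmallCharacterAttacked(FirstCharacter& hero, SmallTarget target) {
            if (target == SmallTarget::Fourth) {
                // Attacking the fourth character object advances the deformation.
                hero.deformationLevel = std::min(hero.deformationLevel + 1,
                                                 FirstCharacter::kMaxLevel);
            } else {
                // Attacking the third character object reverses the deformation so
                // the face image approaches the original stored in the first
                // storage area.
                hero.deformationLevel = std::max(hero.deformationLevel - 1, 0);
            }
        }

        // When the game related to the first character object succeeds, the face
        // image is restored to the original (deformation level back to zero).
        void onGameSuccess(FirstCharacter& hero) { hero.deformationLevel = 0; }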
  • the game program may further cause the computer to execute a step of creating a third character object and a step of creating a fourth character object.
  • the step of creating a third character object creates a third character object, the third character object being a character object including the same face image as the face image included in the second character object and being smaller in dimensions than the second character object.
  • the step of creating a fourth character object creates a fourth character object, the fourth character object being a character object including a face image different from the face image included in the second character object and being smaller in dimensions than the second character object.
  • the second game processing step may include: a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the second character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the second character object approaches the original face image saved in the second storage area.
  • the player needs to correctly recognize the second character object, the third character object, and the fourth character object, and requires concentration.
  • the first game processing step may include a step of, when the game related to the first character object has been successful, restoring the deformed face image to the original face image stored in the first storage area.
  • when the acquired face image is a face image of a person in a close relationship with the player, the player can tackle the game of the first game processing step all the more enthusiastically in order to restore the deformed face image to the original face image.
  • a character object including a face image obtained by deforming the face image saved in the second storage area may be created as the second character object.
  • the second game processing step may include a step of, when the game related to the second character object has been successful, restoring the deformed face image to the original face image saved in the second storage area.
  • when the acquired face image is a face image of a person in a close relationship with the player, the player can tackle the game of the second game processing step all the more enthusiastically in order to restore the deformed face image to the original face image.
  • the face image may be acquired and temporarily stored in the first storage area during the predetermined game.
  • in accordance with the creation of the first character object based on the acquisition of the face image during the predetermined game, the first character object may be caused to appear in the predetermined game, and the game related to the first character object may be advanced.
  • the game program may further cause the computer to execute a captured image acquisition step, a display image generation step, and a display control step.
  • the captured image acquisition step acquires a captured image captured by a real camera.
  • the display image generation step generates a display image in which a virtual character object that appears in the predetermined game is placed so as to have, as a background, the captured image acquired in the captured image acquisition step.
  • the display control step displays on the display device the display image generated in the display image generation step. In this case, in the image acquisition step, during the predetermined game, at least one face image may be extracted from the captured image displayed on the display device, and may be temporarily stored in the first storage area.
  • a face image included in a captured image of the real world displayed as a background appears as a character object. This makes it possible to save the face image in an accumulating manner by a success in a game related to the character object.
  • the display image may be generated by placing the first character object such that, when displayed on the display device, the first character object overlaps a position of the face image in the captured image, the face image extracted in the image acquisition step.
  • this makes it possible to display the first character object as if appearing from the captured image, and to display an image as if the first character object were present in the real space captured by the real camera.
  • captured images of a real world captured in real time by the real camera may be repeatedly acquired.
  • the captured images repeatedly acquired in the captured image acquisition step may be sequentially set as the background.
  • face images corresponding to the already extracted face image may be repeatedly acquired from the captured images sequentially set as the background.
  • the first character object may be repeatedly created so as to include the face images repeatedly acquired in the image acquisition step.
  • the display image may be generated by placing the repeatedly created first character object such that, when displayed on the display device, the repeatedly created first character object overlaps positions of the face images in the respective captured images, the face images repeatedly acquired in the image acquisition step.
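  • One way to picture the per-frame processing described above is the following sketch; the camera capture, face extraction, and drawing routines are stand-ins (stubbed out here so the unit compiles), and their signatures are assumptions rather than part of this disclosure.

        #include <optional>

        // Hypothetical rectangle marking where a face appears in the camera image.
        struct Rect { int x = 0, y = 0, w = 0, h = 0; };
        struct CameraFrame { /* pixel data omitted */ };

        // Stand-ins for capture, face extraction, and drawing.
        CameraFrame captureFrame() { return CameraFrame{}; }
        std::optional<Rect> extractFace(const CameraFrame&, const std::optional<Rect>& previous) {
            return previous;  // placeholder: pretend the previously found face is re-found
        }
        void drawBackground(const CameraFrame&) {}
        void drawFirstCharacterOver(const Rect&) {}

        // One display update: the newest captured image becomes the background, the
        // already extracted face image is re-acquired from it, and the first
        // character object is placed so that it overlaps the face position on screen.
        void updateDisplay(std::optional<Rect>& trackedFace) {
            CameraFrame frame = captureFrame();             // captured image acquisition step
            drawBackground(frame);                          // captured image set as background
            trackedFace = extractFace(frame, trackedFace);  // repeated face image acquisition
            if (trackedFace) {
                drawFirstCharacterOver(*trackedFace);       // character overlaps the face position
            }
        }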
  • the game apparatus may be capable of using image data stored in storage means for storing data not temporarily.
  • at least one face image may be extracted from the image data stored in the storage means, and may be temporarily stored in the first storage area.
  • a face image is acquired from image data stored in advance in the game apparatus.
  • a face image acquired in advance by another application or the like (e.g., a face image included in an image photographed by a camera capturing application, or included in an image received from another device by a communication application) serves as an acquisition target.
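  • A sketch of acquiring face images from image data already stored on the apparatus; the face detector is a placeholder, and assuming one extracted face per stored image is purely for illustration.

        #include <string>
        #include <vector>

        struct FaceImage { std::string sourcePath; };

        // Placeholder for a face detector over a stored image; whether an image
        // actually contains a face is outside the scope of this sketch.
        std::vector<FaceImage> detectFaces(const std::string& imagePath) {
            return {FaceImage{imagePath}};  // assume one face per stored image
        }

        // Image acquisition step over non-temporary storage: scan stored image data
        // (e.g., photos from a camera application or images received by
        // communication) and temporarily store any extracted face images in the
        // first storage area.
        std::vector<FaceImage> acquireFromStoredImages(const std::vector<std::string>& storedImagePaths) {
            std::vector<FaceImage> firstStorageArea;
            for (const std::string& path : storedImagePaths) {
                std::vector<FaceImage> faces = detectFaces(path);
                firstStorageArea.insert(firstStorageArea.end(), faces.begin(), faces.end());
            }
            return firstStorageArea;
        }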
  • FIG. 1 is a front view showing an example of a game apparatus 10 in an open state
  • FIG. 2 is a side view showing an example of the game apparatus 10 in the open state
  • FIG. 3A is a left side view showing an example of the game apparatus 10 in a closed state
  • FIG. 3B is a front view showing an example of the game apparatus 10 in the closed state
  • FIG. 3C is a right side view showing an example of the game apparatus 10 in the closed state
  • FIG. 3D is a rear view showing an example of the game apparatus 10 in the closed state
  • FIG. 4 is a diagram showing an example of a user holding the game apparatus 10 with both hands;
  • FIG. 5 is a diagram showing an example of a user holding the game apparatus 10 with one hand
  • FIG. 6 is a block diagram showing an example of the internal configuration of the game apparatus 10 ;
  • FIG. 7 is an example of face images displayed as a list
  • FIG. 8 is another example of face images displayed as a list
  • FIG. 9 is an example of a screen for attaching a face image to an enemy object
  • FIG. 10 is a diagram illustrating a face image selection screen
  • FIG. 11 is a diagram showing an example of various data stored in a main memory in accordance with the execution of an image processing program according to the present invention by the game apparatus of FIG. 1 ;
  • FIG. 12 is an example of the data structure of face image management information
  • FIG. 13 is an example of an aggregate table where already acquired face images are classified by attribute
  • FIG. 14 is a flow chart showing an example of the operation of the game apparatus 10 according to a first embodiment
  • FIG. 15 is a flow chart showing an example of a detailed process of a face image acquisition process 1 ;
  • FIG. 16 is a flow chart showing an example of a detailed process of a face image acquisition process 2 ;
  • FIG. 17 is a flow chart showing an example of a detailed process of a list display process
  • FIG. 18 is a flow chart showing an example of a detailed process of a cast determination process
  • FIG. 19A is a flow chart showing an example of a detailed process of a face image management assistance process 1 ;
  • FIG. 19B is a flow chart showing an example of a detailed process of a face image management assistance process 2 ;
  • FIG. 19C is a flow chart showing an example of a detailed process of a face image management assistance process 3 ;
  • FIG. 20A is a diagram showing an overview of a virtual space, which is an example of the image processing program
  • FIG. 20B is a diagram showing the relationship between a screen model and an α-texture
  • FIG. 21 is a diagram showing an example of the virtual space
  • FIG. 22 is a diagram showing a virtual three-dimensional space (game world) defined in a game program, which is an example of the image processing program;
  • FIG. 23 is an example of process steps of examples of the forms of display performed on an upper LCD of a game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 24 is an example of process steps of examples of the forms of display performed on the upper LCD of the game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 25 is an example of process steps of examples of the forms of display performed on the upper LCD of the game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 26 is an example of process steps of examples of the forms of display performed on the upper LCD of the game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 27A is a diagram showing an example of silhouette models of a shadow object as viewed from above;
  • FIG. 27B is a diagram showing an example of silhouette models of the shadow object
  • FIG. 28 is a diagram showing an example of the non-transparencies of objects
  • FIG. 29 is a flow chart showing an example of the operation of image processing performed by the game apparatus executing the image processing program
  • FIG. 30 is a subroutine flow chart showing an example of a detailed operation of an enemy-object-related process
  • FIG. 31 is a subroutine flow chart showing an example of a detailed operation of a bullet-object-related process
  • FIG. 32A is a subroutine flow chart showing an example of a detailed operation of a display image updating process (a first drawing method) of the image processing program;
  • FIG. 32B is a subroutine flow chart showing an example of a detailed operation of a display image updating process (a second drawing method) of the image processing program according to the present invention.
  • FIG. 33 is a diagram illustrating an example of a rendering process in the first drawing method
  • FIG. 34 is a diagram illustrating the positional relationships between objects of FIG. 33 ;
  • FIG. 35 is a diagram illustrating an example of a process of rendering a camera image
  • FIG. 36 is a diagram illustrating an example of a coordinate system used when the camera image is rendered
  • FIG. 37 is a diagram illustrating an example of a process of rendering a virtual space
  • FIG. 38 is a diagram illustrating the positional relationship between objects of FIG. 37 ;
  • FIG. 39 is a diagram illustrating an example of a coordinate system of a boundary surface, used when the virtual space is rendered.
  • FIG. 40 is a diagram showing an example of a display image generated by the image processing program
  • FIG. 41 is a diagram showing an example of a screen of a game apparatus according to a second embodiment
  • FIG. 42 is a flow chart showing an example of the operation of the game apparatus according to the second embodiment.
  • FIG. 43 is a diagram showing an example of a screen of a game apparatus according to a third embodiment.
  • FIG. 44 is a flow chart showing an example of the operation of the game apparatus according to the third embodiment.
  • FIG. 45 is a diagram showing an example of a screen according to a fourth embodiment.
  • FIG. 46 is a flow chart showing an example of the operation of a game apparatus according to the fourth embodiment.
  • FIG. 47 is a diagram showing an example of a screen displayed on an upper LCD of a game apparatus according to a fifth embodiment
  • FIG. 48 is a diagram showing an example of the screen displayed on the upper LCD of the game apparatus according to the fifth embodiment.
  • FIG. 49 is a subroutine flow chart showing an example of a detailed operation of a during-game face image acquisition process performed by executing the image processing program according to the fifth embodiment
  • FIG. 50 is a subroutine flow chart showing an example of a detailed operation of a yet-to-appear process performed in step 202 of FIG. 49 ;
  • FIG. 51 is a subroutine flow chart showing an example of a detailed operation of an already-appeared process performed in step 208 of FIG. 49 .
  • data processed by a computer is illustrated using graphs and natural language. More specifically, however, the data is specified by computer-recognizable pseudo-language, commands, parameters, machine language, arrays, and the like. The present invention does not limit the method of representing the data.
  • the image processing apparatus according to the present invention is not limited to a game apparatus.
  • the image processing apparatus according to the present invention may be a given computer system, such as a general-purpose computer.
  • the image processing program according to the present embodiment is a game program.
  • the image processing program according to the present invention is not limited to a game program.
  • the image processing program according to the present invention can be applied by being executed by a given computer system.
  • the processes of the present embodiment may be subjected to distributed processing by a plurality of networked devices, or may be performed by a network system where, after main processes are performed by a server, the process results are distributed to terminals, or may be performed by a so-called cloud network.
  • FIGS. 1, 2, 3A, 3B, 3C, and 3D are each a plan view showing an example of the appearance of the game apparatus 10 .
  • the game apparatus 10 shown in FIGS. 1 through 3D includes a capturing section (camera), and therefore is capable of capturing an image with the capturing section, displaying the captured image on a screen, and storing data of the captured image. Further, the game apparatus 10 is capable of executing a game program stored in an exchangeable memory card, or a game program received from a server or another game apparatus via a network.
  • the game apparatus 10 is also capable of displaying on the screen an image generated by computer graphics processing, such as an image captured by a virtual camera set in a virtual space. It should be noted that in the present specification, the act of obtaining image data with the camera is described as “capturing”, and the act of storing the image data of the captured image is described as “photographing”.
  • the game apparatus 10 shown in FIGS. 1 through 3D includes a lower housing 11 and an upper housing 21 .
  • the lower housing 11 and the upper housing 21 are joined together by a hinge structure so as to be openable and closable in a folding manner (foldable). That is, the upper housing 21 is attached to the lower housing 11 so as to be rotatable (pivotable) relative to the lower housing 11 .
  • the game apparatus 10 has the following two forms: a closed state where the upper housing 21 is in firm contact with the lower housing 11 ( FIGS. 3A and 3C ); and a state where the upper housing 21 has rotated relative to the lower housing 11 such that the state of firm contact is released (an open state).
  • the rotation of the upper housing 21 is allowed to the position where, as shown in FIG. 2 , the upper housing 21 and the lower housing 11 are approximately parallel to each other in the open state (see FIG. 2 ).
  • FIG. 1 is a front view showing an example of the game apparatus 10 being open (in the open state).
  • a planar shape of each of the lower housing 11 and the upper housing 21 is a wider-than-high rectangular plate-like shape having a longitudinal direction (horizontal direction (left-right direction): an x-direction in FIG. 1 ) and a transverse direction ((up-down direction): a y-direction in FIG. 1 ).
  • the lower housing 11 and the upper housing 21 are joined together at the longitudinal upper outer edge of the lower housing 11 and the longitudinal lower outer edge of the upper housing 21 by a hinge structure so as to be rotatable relative to each other.
  • a user uses the game apparatus 10 in the open state.
  • the user stores away the game apparatus 10 in the closed state.
  • the upper housing 21 can maintain the state of being stationary at a desired angle formed between the lower housing 11 and the upper housing 21 due, for example, to a frictional force generated at the connecting part between the lower housing 11 and the upper housing 21 . That is, the game apparatus 10 can maintain the upper housing 21 stationary at a desired angle with respect to the lower housing 11 .
  • the upper housing 21 is open at a right angle or an obtuse angle with the lower housing 11 .
  • the respective opposing surfaces of the upper housing 21 and the lower housing 11 are referred to as “inner surfaces” or “main surfaces”. Further, the surfaces opposite to the respective inner surfaces (main surfaces) of the upper housing 21 and the lower housing 11 are referred to as “outer surfaces”.
  • Projections 11 A are provided at the upper long side portion of the lower housing 11 , each projection 11 A projecting perpendicularly (in a z-direction in FIG. 1 ) to an inner surface (main surface) 11 B of the lower housing 11 .
  • a projection (bearing) 21 A is provided at the lower long side portion of the upper housing 21 , the projection 21 A projecting perpendicularly to the lower side surface of the upper housing 21 from the lower side surface of the upper housing 21 .
  • a rotating shaft (not shown) is accommodated so as to extend in the x-direction from one of the projections 11 A through the projection 21 A to the other projection 11 A.
  • the upper housing 21 is freely rotatable about the rotating shaft, relative to the lower housing 11 .
  • the lower housing 11 and the upper housing 21 are connected together in a foldable manner.
  • the inner surface 11 B of the lower housing 11 shown in FIG. 1 includes a lower liquid crystal display (LCD) 12 , a touch panel 13 , operation buttons 14 A through 14 L, an analog stick 15 , a first LED 16 A, and a microphone hole 18 .
  • the lower LCD 12 is accommodated in the lower housing 11 .
  • a planar shape of the lower LCD 12 is a wider-than-high rectangle, and is placed such that the long side direction of the lower LCD 12 coincides with the longitudinal direction of the lower housing 11 (the x-direction in FIG. 1 ).
  • the lower LCD 12 is provided in the center of the inner surface (main surface) of the lower housing 11 .
  • the screen of the lower LCD 12 is exposed through an opening of the inner surface of the lower housing 11 .
  • the game apparatus 10 is in the closed state when not used, so that the screen of the lower LCD 12 is prevented from being soiled or damaged.
  • the number of pixels of the lower LCD 12 is 320 dots × 240 dots (horizontal × vertical).
  • the lower LCD 12 is a display device that displays an image in a planar manner (not in a stereoscopically visible manner). It should be noted that although an LCD is used as a display device in the first embodiment, any other display device may be used, such as a display device using electroluminescence (EL). Further, a display device having a desired resolution may be used as the lower LCD 12 .
  • the touch panel 13 is one of input devices of the game apparatus 10 .
  • the touch panel 13 is mounted so as to cover the screen of the lower LCD 12 .
  • the touch panel 13 may be, but is not limited to, a resistive touch panel.
  • the touch panel may also be a touch panel of any type, such as an electrostatic capacitance type.
  • the touch panel 13 has the same resolution (detection accuracy) as that of the lower LCD 12 .
  • the resolutions of the touch panel 13 and the lower LCD 12 may not necessarily need to be the same.
  • the operation buttons 14 A through 14 L are each an input device for providing a predetermined input.
  • the cross button 14 A (direction input button 14 A), the button 14 B, the button 14 C, the button 14 D, the button 14 E, the power button 14 F, the select button 14 J, the home button 14 K, and the start button 14 L are provided on the inner surface (main surface) of the lower housing 11 .
  • the cross button 14 A is cross-shaped, and includes buttons for indicating at least up, down, left, and right directions, respectively.
  • the cross button 14 A is provided in a lower area of the area to the left of the lower LCD 12 .
  • the cross button 14 A is placed so as to be operated by the thumb of a left hand holding the lower housing 11 .
  • the button 14 B, the button 14 C, the button 14 D, and the button 14 E are placed in a cross formation in an upper portion of the area to the right of the lower LCD 12 .
  • the button 14 B, the button 14 C, the button 14 D, and the button 14 E, are placed where the thumb of a right hand holding the lower housing 11 is naturally placed.
  • the power button 14 F is placed in a lower portion of the area to the right of the lower LCD 12 .
  • the select button 14 J, the home button 14 K, and the start button 14 L are provided in a lower area of the lower LCD 12 .
  • buttons 14 A through 14 E, the select button 14 J, the home button 14 K, and the start button 14 L are appropriately assigned functions, respectively, in accordance with the program executed by the game apparatus 10 .
  • the cross button 14 A is used for, for example, a selection operation and a moving operation of a character during a game.
  • the operation buttons 14 B through 14 E are used for, for example, a determination operation or a cancellation operation.
  • the power button 14 F is used to power on/off the game apparatus 10 .
  • the analog stick 15 is a device for indicating a direction.
  • the analog stick 15 is provided to an upper portion of the area to the left of the lower LCD 12 of the inner surface (main surface) of the lower housing 11 . That is, the analog stick 15 is provided above the cross button 14 A.
  • the analog stick 15 is placed so as to be operated by the thumb of a left hand holding the lower housing 11 .
  • the provision of the analog stick 15 in the upper area places the analog stick 15 at the position where the thumb of the left hand of the user holding the lower housing 11 is naturally placed.
  • the cross button 14 A is placed at the position where the thumb of the left hand holding the lower housing 11 is moved slightly downward.
  • the analog stick 15 functions in accordance with the program executed by the game apparatus 10 .
  • the game apparatus 10 executes a game where a predetermined object appears in a three-dimensional virtual space
  • the analog stick 15 functions as an input device for moving the predetermined object in the three-dimensional virtual space.
  • the predetermined object is moved in the direction in which the key top of the analog stick 15 has slid.
  • the analog stick 15 may be a component capable of providing an analog input by being tilted by a predetermined amount in any one of up, down, right, left, and diagonal directions.
  • the button 14 B, the button 14 C, the button 14 D, and the button 14 E, and the analog stick 15 are placed symmetrically to each other with respect to the lower LCD 12 .
  • This also enables, for example, a left-handed person to provide a direction indication input using these four buttons, namely the button 14 B, the button 14 C, the button 14 D, and the button 14 E, depending on the game program.
  • the first LED 16 A ( FIG. 1 ) notifies the user of the on/off state of the power supply of the game apparatus 10 .
  • the first LED 16 A is provided on the right of an end portion shared by the inner surface (main surface) of the lower housing 11 and the lower side surface of the lower housing 11 . This enables the user to view whether or not the first LED 16 A is lit on, regardless of the open/closed state of the game apparatus 10 .
  • the microphone hole 18 is a hole for a microphone built into the game apparatus 10 as a sound input device.
  • the built-in microphone detects a sound from outside the game apparatus 10 through the microphone hole 18 .
  • the microphone and the microphone hole 18 are provided below the power button 14 F on the inner surface (main surface) of the lower housing 11 .
  • the upper side surface of the lower housing 11 includes an opening 17 (a dashed line shown in FIGS. 1 and 3D ) for a stylus 28 .
  • the opening 17 can accommodate the stylus 28 that is used to perform an operation on the touch panel 13 . It should be noted that, normally, an input is provided to the touch panel 13 using the stylus 28 .
  • the touch panel 13 can be operated not only by the stylus 28 but also by a finger of the user.
  • the upper side surface of the lower housing 11 includes an insertion slot 11 D (a dashed line shown in FIGS. 1 and 3D ), into which an external memory 45 having a game program stored thereon is to be inserted.
  • a connector (not shown) is provided for electrically connecting the game apparatus 10 and the external memory 45 in a detachable manner.
  • the connection of the external memory 45 to the game apparatus 10 causes a processor included in internal circuitry to execute a predetermined game program.
  • the connector and the insertion slot 11 D may be provided on another side surface (e.g., the right side surface) of the lower housing 11 .
  • the inner surface 21 B of the upper housing 21 shown in FIG. 1 includes loudspeaker holes 21 E, an upper LCD 22 , an inner capturing section 24 , a 3D adjustment switch 25 , and a 3D indicator 26 .
  • the inner capturing section 24 is an example of a first capturing device.
  • the upper LCD 22 is a display device capable of displaying a stereoscopically visible image.
  • the upper LCD 22 is capable of displaying a left-eye image and a right-eye image, using substantially the same display area.
  • the upper LCD 22 is a display device using a method in which the left-eye image and the right-eye image are displayed alternately in the horizontal direction in predetermined units (e.g., in every other line).
  • the upper LCD 22 may be a display device using a method in which the left-eye image and the right-eye image are displayed alternately for a predetermined time.
  • the upper LCD 22 is a display device capable of displaying an image stereoscopically visible with the naked eye.
  • the upper LCD 22 is a parallax-barrier-type display device.
  • the upper LCD 22 displays an image stereoscopically visible with the naked eye (a stereoscopic image), using the right-eye image and the left-eye image. That is, the upper LCD 22 allows the user to view the left-eye image with their left eye, and the right-eye image with their right eye, using the parallax barrier.
  • the upper LCD 22 is capable of disabling the parallax barrier.
  • the upper LCD 22 is capable of displaying an image in a planar manner (the upper LCD 22 is capable of displaying a planar view image, as opposed to the stereoscopically visible image described above. This is a display mode in which the same displayed image can be viewed with both the left and right eyes).
  • the upper LCD 22 is a display device capable of switching between: the stereoscopic display mode for displaying a stereoscopically visible image; and the planar display mode for displaying an image in a planar manner (displaying a planar view image).
  • the switching of the display modes is performed by the 3D adjustment switch 25 described later.
  • the upper LCD 22 is accommodated in the upper housing 21 .
  • a planar shape of the upper LCD 22 is a wider-than-high rectangle, and is placed at the center of the upper housing 21 such that the long side direction of the upper LCD 22 coincides with the long side direction of the upper housing 21 .
  • the area of the screen of the upper LCD 22 is set greater than that of the lower LCD 12 .
  • the screen of the upper LCD 22 is set horizontally longer than the screen of the lower LCD 12 . That is, the proportion of the width in the aspect ratio of the screen of the upper LCD 22 is set greater than that of the lower LCD 12 .
  • the screen of the upper LCD 22 is provided on the inner surface (main surface) 21 B of the upper housing 21 , and is exposed through an opening of the inner surface of the upper housing 21 . Further, the inner surface of the upper housing 21 is covered by a transparent screen cover 27 .
  • the screen cover 27 protects the screen of the upper LCD 22 , and integrates the upper LCD 22 and the inner surface of the upper housing 21 , and thereby provides unity.
  • the number of pixels of the upper LCD 22 is 800 dots × 240 dots (horizontal × vertical). It should be noted that an LCD is used as the upper LCD 22 in the first embodiment.
  • the upper LCD 22 is not limited to this, and a display device using EL or the like may be used. Furthermore, a display device having any resolution may be used as the upper LCD 22 .
  • the loudspeaker holes 21 E are holes through which sounds from loudspeakers 44 that serve as a sound output device of the game apparatus 10 are output.
  • the loudspeaker holes 21 E are placed symmetrically with respect to the upper LCD 22 . Sounds from the loudspeakers 44 described later are output through the loudspeaker holes 21 E.
  • the inner capturing section 24 functions as a capturing section having an imaging direction that is the same as the inward normal direction of the inner surface 21 B of the upper housing 21 .
  • the inner capturing section 24 includes an imaging device having a predetermined resolution, and a lens.
  • the lens may have a zoom mechanism.
  • the inner capturing section 24 is placed: on the inner surface 21 B of the upper housing 21 ; above the upper edge of the screen of the upper LCD 22 ; and in the center of the upper housing 21 in the left-right direction (on the line dividing the upper housing 21 (the screen of the upper LCD 22 ) into two equal left and right portions).
  • Such a placement of the inner capturing section 24 makes it possible that when the user views the upper LCD 22 from the front thereof, the inner capturing section 24 captures the user's face from the front thereof.
  • a left outer capturing section 23 a and a right outer capturing section 23 b will be described later.
  • the 3D adjustment switch 25 is a slide switch, and is used to switch the display modes of the upper LCD 22 as described above.
  • the 3D adjustment switch 25 is also used to adjust the stereoscopic effect of a stereoscopically visible image (stereoscopic image) displayed on the upper LCD 22 .
  • the 3D adjustment switch 25 is provided at an end portion shared by the inner surface and the right side surface of the upper housing 21 , so as to be visible to the user, regardless of the open/closed state of the game apparatus 10 .
  • the 3D adjustment switch 25 includes a slider that is slidable to any position in a predetermined direction (e.g., the up-down direction), and the display mode of the upper LCD 22 is set in accordance with the position of the slider.
  • when, for example, the slider of the 3D adjustment switch 25 is placed at the lowermost position, the upper LCD 22 is set to the planar display mode, and a planar image is displayed on the screen of the upper LCD 22.
  • the same image may be used as the left-eye image and the right-eye image, while the upper LCD 22 remains set to the stereoscopic display mode, and thereby performs planar display.
  • when the slider is placed above the lowermost position, the upper LCD 22 is set to the stereoscopic display mode. In this case, a stereoscopically visible image is displayed on the screen of the upper LCD 22.
  • the visibility of the stereoscopic image is adjusted in accordance with the position of the slider. Specifically, the amount of deviation in the horizontal direction between the position of the right-eye image and the position of the left-eye image is adjusted in accordance with the position of the slider.
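As a rough illustration of the relationship described above between the slider position and the stereoscopic display, the following C++ sketch maps a normalized slider value to a display mode and a horizontal deviation between the right-eye image and the left-eye image. The maximum deviation kMaxParallaxPx and the linear mapping are assumptions for illustration; the actual values and mapping used by the 3D adjustment switch 25 are not specified here.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical mapping from the 3D adjustment switch slider position
// (0.0 = lowermost, 1.0 = uppermost) to the horizontal deviation, in
// pixels, between the right-eye image and the left-eye image.
// kMaxParallaxPx is an assumed value, not taken from the document.
constexpr float kMaxParallaxPx = 10.0f;

struct StereoSetting {
    bool  stereoscopic;   // false: planar display mode
    float parallaxPx;     // horizontal deviation between the two images
};

StereoSetting SettingFromSlider(float slider) {
    slider = std::clamp(slider, 0.0f, 1.0f);
    if (slider == 0.0f) {
        // Slider at the lowermost position: planar display mode.
        return {false, 0.0f};
    }
    // Above the lowermost position: stereoscopic mode; the deviation
    // grows with the slider position.
    return {true, slider * kMaxParallaxPx};
}

int main() {
    for (float s : {0.0f, 0.25f, 1.0f}) {
        StereoSetting st = SettingFromSlider(s);
        std::printf("slider=%.2f stereo=%d parallax=%.1f px\n",
                    s, st.stereoscopic, st.parallaxPx);
    }
}
```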
  • the 3D indicator 26 indicates whether or not the upper LCD 22 is in the stereoscopic display mode.
  • the 3D indicator 26 is an LED, and is lit on when the stereoscopic display mode of the upper LCD 22 is enabled.
  • the 3D indicator 26 is placed on the inner surface 21 B of the upper housing 21 near the screen of the upper LCD 22 . Accordingly, when the user views the screen of the upper LCD 22 from the front thereof, the user can easily view the 3D indicator 26 . This enables the user to easily recognize the display mode of the upper LCD 22 even when viewing the screen of the upper LCD 22 .
  • FIG. 2 is a right side view showing an example of the game apparatus 10 in the open state.
  • the right side surface of the lower housing 11 includes a second LED 16 B, a wireless switch 19 , and the R button 14 H.
  • the second LED 16 B notifies the user of the establishment state of the wireless communication of the game apparatus 10 .
  • the game apparatus 10 is capable of wirelessly communicating with other devices, and the second LED 16 B is lit on when wireless communication is established between the game apparatus 10 and other devices.
  • the game apparatus 10 has the function of establishing connection with a wireless LAN by, for example, a method based on the IEEE 802.11b/g standard.
  • the wireless switch 19 enables/disables the function of the wireless communication.
  • the R button 14 H will be described later.
  • FIG. 3A is a left side view showing an example of the game apparatus 10 being closed (in the closed state).
  • the left side surface of the lower housing 11 shown in FIG. 3A includes an openable and closable cover section 11 C, the L button 14 G, and the sound volume button 14 I.
  • the sound volume button 14 I is used to adjust the sound volume of the loudspeakers of the game apparatus 10.
  • a connector (not shown) is provided for electrically connecting the game apparatus 10 and a data storage external memory 46 (see FIG. 1 ).
  • the data storage external memory 46 is detachably attached to the connector.
  • the data storage external memory 46 is used to, for example, store (save) data of an image captured by the game apparatus 10 .
  • the connector and the cover section 11 C may be provided on the right side surface of the lower housing 11 .
  • the L button 14 G will be described later.
  • FIG. 3B is a front view showing an example of the game apparatus 10 in the closed state.
  • the outer surface of the upper housing 21 shown in FIG. 3B includes a left outer capturing section 23 a , a right outer capturing section 23 b , and a third LED 29 .
  • the left outer capturing section 23 a and the right outer capturing section 23 b each includes an imaging device (e.g., a CCD image sensor or a CMOS image sensor) having a predetermined common resolution, and a lens.
  • the lens may have a zoom mechanism.
  • the imaging directions of the left outer capturing section 23 a and the right outer capturing section 23 b are each the same as the outward normal direction of the outer surface 21 D. That is, the imaging direction of the left outer capturing section 23 a and the imaging direction of the right outer capturing section 23 b are parallel to each other.
  • the left outer capturing section 23 a and the right outer capturing section 23 b are collectively referred to as an “outer capturing section 23 ”.
  • the outer capturing section 23 is an example of a second capturing device.
  • the left outer capturing section 23 a and the right outer capturing section 23 b included in the outer capturing section 23 are placed along the horizontal direction of the screen of the upper LCD 22 . That is, the left outer capturing section 23 a and the right outer capturing section 23 b are placed such that a straight line connecting between the left outer capturing section 23 a and the right outer capturing section 23 b is placed along the horizontal direction of the screen of the upper LCD 22 .
  • the left outer capturing section 23 a is placed on the left side of the user viewing the screen, and the right outer capturing section 23 b is placed on the right side of the user (see FIG. 1 ).
  • the distance between the left outer capturing section 23 a and the right outer capturing section 23 b is set to correspond to the distance between both eyes of a person, and may be set, for example, in the range from 30 mm to 70 mm. It should be noted, however, that the distance between the left outer capturing section 23 a and the right outer capturing section 23 b is not limited to this range. It should be noted that in the first embodiment, the left outer capturing section 23 a and the right outer capturing section 23 b are fixed to the housing 21 , and therefore, the imaging directions cannot be changed.
  • the left outer capturing section 23 a and the right outer capturing section 23 b are placed symmetrically with respect to the line dividing the upper LCD 22 (the upper housing 21 ) into two equal left and right portions. Further, the left outer capturing section 23 a and the right outer capturing section 23 b are placed in the upper portion of the upper housing 21 and in the back of the portion above the upper edge of the screen of the upper LCD 22 , in the state where the upper housing 21 is in the open state (see FIG. 1 ).
  • the left outer capturing section 23 a and the right outer capturing section 23 b are placed on the outer surface of the upper housing 21 , and, if the upper LCD 22 is projected onto the outer surface of the upper housing 21 , are placed above the upper edge of the screen of the projected upper LCD 22.
  • the left outer capturing section 23 a and the right outer capturing section 23 b of the outer capturing section 23 are placed symmetrically with respect to the center line of the upper LCD 22 extending in the transverse direction. This makes it possible that when the user views the upper LCD 22 from the front thereof, the imaging directions of the outer capturing section 23 coincide with the directions of the respective lines of sight of the user's right and left eyes. Further, the outer capturing section 23 is placed in the back of the portion above the upper edge of the screen of the upper LCD 22 , and therefore, the outer capturing section 23 and the upper LCD 22 do not interfere with each other inside the upper housing 21. Further, when the inner capturing section 24 provided on the inner surface of the upper housing 21 (shown by a dashed line in FIG. 3B ) is projected onto the outer surface of the upper housing 21 , the left outer capturing section 23 a and the right outer capturing section 23 b are placed symmetrically with respect to the projected inner capturing section 24.
  • This makes it possible to reduce the upper housing 21 in thickness as compared to the case where the outer capturing section 23 is placed in the back of the screen of the upper LCD 22 , or the case where the outer capturing section 23 is placed in the back of the inner capturing section 24 .
  • the left outer capturing section 23 a and the right outer capturing section 23 b can be used as a stereo camera, depending on the program executed by the game apparatus 10 .
  • either one of the two outer capturing sections may be used solely, so that the outer capturing section 23 can also be used as a non-stereo camera, depending on the program.
  • the left outer capturing section 23 a captures a left-eye image, which is to be viewed with the user's left eye
  • the right outer capturing section 23 b captures a right-eye image, which is to be viewed with the user's right eye.
  • images captured by the two outer capturing sections may be combined together, or may be used to compensate for each other, so that imaging can be performed with an extended imaging range.
  • a left-eye image and a right-eye image that have a parallax may be generated from a single image captured using one of the outer capturing sections 23 a and 23 b , and a pseudo-stereo image as if captured by two cameras can be generated.
  • the third LED 29 is lit on when the outer capturing section 23 is operating, and thereby notifies the user that the outer capturing section 23 is operating.
  • the third LED 29 is provided near the outer capturing section 23 on the outer surface of the upper housing 21 .
  • FIG. 3C is a right side view showing an example of the game apparatus 10 in the closed state.
  • FIG. 3D is a rear view showing an example of the game apparatus 10 in the closed state.
  • the L button 14 G and the R button 14 H are provided on the upper side surface of the lower housing 11 shown in FIG. 3D .
  • the L button 14 G is provided at the left end portion of the upper side surface of the lower housing 11
  • the R button 14 H is provided at the right end portion of the upper side surface of the lower housing 11 .
  • the L button 14 G and the R button 14 H are appropriately assigned functions, respectively, in accordance with the program executed by the game apparatus 10 .
  • the L button 14 G and the R button 14 H function as shutter buttons (capturing instruction buttons) of the capturing sections described above.
  • a rechargeable battery that serves as the power supply of the game apparatus 10 is accommodated in the lower housing 11 , and the battery can be charged through a terminal provided on the side surface (e.g., the upper side surface) of the lower housing 11 .
  • FIGS. 4 and 5 each show an example of the state of the use of the game apparatus 10 .
  • FIG. 4 is a diagram showing an example of a user holding the game apparatus 10 with both hands.
  • the user holds the side surfaces and the outer surface (the surface opposite to the inner surface) of the lower housing 11 with both palms, middle fingers, ring fingers, and little fingers, such that the lower LCD 12 and the upper LCD 22 face the user.
  • Such holding enables the user to perform operations on the operation buttons 14 A through 14 E and the analog stick 15 with their thumbs, and to perform operations on the L button 14 G and the R button 14 H with their index fingers, while holding the lower housing 11 .
  • FIG. 5 is a diagram showing an example of a user holding the game apparatus 10 with one hand.
  • when providing an input on the touch panel 13 , the user releases one of the hands having held the lower housing 11 therefrom, and holds the lower housing 11 only with the other hand. This makes it possible to provide an input to the touch panel 13 with the one hand.
  • FIG. 6 is a block diagram showing an example of the internal configuration of the game apparatus 10 .
  • the game apparatus 10 includes, as well as the components described above, electronic components, such as an information processing section 31 , a main memory 32 , an external memory interface (external memory I/F) 33 , a data storage external memory I/F 34 , a data storage internal memory 35 , a wireless communication module 36 , a local communication module 37 , a real-time clock (RTC) 38 , an acceleration sensor 39 , an angular velocity sensor 40 , a power circuit 41 , and an interface circuit (I/F circuit) 42 .
  • These electronic components are mounted on electronic circuit boards, and are accommodated in the lower housing 11 (or may be accommodated in the upper housing 21 ).
  • the information processing section 31 is information processing means including a central processing unit (CPU) 311 that executes a predetermined program, a graphics processing unit (GPU) 312 that performs image processing, and the like.
  • a predetermined program is stored in a memory (e.g., the external memory 45 connected to the external memory I/F 33 , or the data storage internal memory 35 ) included in the game apparatus 10 .
  • the CPU 311 of the information processing section 31 executes the predetermined program, and thereby performs the image processing described later or game processing. It should be noted that the program executed by the CPU 311 of the information processing section 31 may be acquired from another device by communication with said another device.
  • the information processing section 31 further includes a video RAM (VRAM) 313 .
  • the GPU 312 of the information processing section 31 generates an image in accordance with an instruction from the CPU 311 of the information processing section 31 , and draws the image in the VRAM 313 .
  • the GPU 312 of the information processing section 31 outputs the image drawn in the VRAM 313 to the upper LCD 22 and/or the lower LCD 12 , and the image is displayed on the upper LCD 22 and/or the lower LCD 12 .
  • the external memory I/F 33 is an interface for establishing a detachable connection with the external memory 45 .
  • the data storage external memory I/F 34 is an interface for establishing a detachable connection with the data storage external memory 46 .
  • the main memory 32 is volatile storage means used as a work area or a buffer area of the information processing section 31 (the CPU 311 ). That is, the main memory 32 temporarily stores various types of data used for image processing or game processing, and also temporarily stores a program acquired from outside (the external memory 45 , another device, or the like) the game apparatus 10 .
  • the main memory 32 is, for example, a pseudo SRAM (PSRAM).
  • the external memory 45 is nonvolatile storage means for storing the program executed by the information processing section 31 .
  • the external memory 45 is composed of, for example, a read-only semiconductor memory.
  • the information processing section 31 can load a program stored in the external memory 45 , and by executing the loaded program, a predetermined process is performed.
  • the data storage external memory 46 is composed of a readable/writable non-volatile memory (e.g., a NAND flash memory), and is used to store predetermined data.
  • the data storage external memory 46 stores images captured by the outer capturing section 23 and/or images captured by another device.
  • the information processing section 31 loads an image stored in the data storage external memory 46 , and the image can be displayed on the upper LCD 22 and/or the lower LCD 12 .
  • the data storage internal memory 35 is composed of a readable/writable non-volatile memory (e.g., a NAND flash memory), and is used to store predetermined data.
  • the data storage internal memory 35 stores data and/or programs downloaded by wireless communication through the wireless communication module 36 .
  • the wireless communication module 36 has the function of establishing connection with a wireless LAN by, for example, a method based on the IEEE 802.11b/g standard. Further, the local communication module 37 has the function of wirelessly communicating with another game apparatus of the same type by a predetermined communication method (e.g., infrared communication).
  • the wireless communication module 36 and the local communication module 37 are connected to the information processing section 31 .
  • the information processing section 31 is capable of transmitting and receiving data to and from another device via the Internet, using the wireless communication module 36 , and is capable of transmitting and receiving data to and from another game apparatus of the same type, using the local communication module 37 .
  • the acceleration sensor 39 is connected to the information processing section 31 .
  • the acceleration sensor 39 detects the magnitudes of accelerations (linear accelerations) in the directions of straight lines along three axial (x, y, and z axes in the present embodiment) directions, respectively.
  • the acceleration sensor 39 is provided, for example, within the lower housing 11 .
  • the long side direction of the lower housing 11 is defined as an x-axis direction
  • the short side direction of the lower housing 11 is defined as a y-axis direction
  • the direction perpendicular to the inner surface (main surface) of the lower housing 11 is defined as a z-axis direction.
  • the acceleration sensor 39 thus detects the magnitudes of the linear accelerations produced in the respective axial directions.
  • the acceleration sensor 39 is, for example, an electrostatic capacitance type acceleration sensor, but may be an acceleration sensor of another type. Further, the acceleration sensor 39 may be an acceleration sensor for detecting an acceleration in one axial direction, or accelerations in two axial directions.
  • the information processing section 31 receives data indicating the accelerations detected by the acceleration sensor 39 (acceleration data), and calculates the orientation and the motion of the game apparatus 10 .
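The following is a minimal sketch of how an orientation (tilt) might be estimated from the three-axis acceleration data, assuming the apparatus is held nearly still so that gravity dominates the measured accelerations. The axis names follow the x/y/z definition above, but the formulas are a generic illustration, not the actual computation performed by the information processing section 31.

```cpp
#include <cmath>
#include <cstdio>

constexpr float kPi = 3.14159265f;

// Acceleration sample along the x, y, z axes defined for the lower
// housing 11 (in units of g).
struct Accel { float x, y, z; };

// Estimate pitch and roll from gravity alone (valid only when the
// apparatus is not being accelerated strongly by the user).
void TiltFromGravity(const Accel& a, float* pitchDeg, float* rollDeg) {
    *pitchDeg = std::atan2(a.y, std::sqrt(a.x * a.x + a.z * a.z)) * 180.0f / kPi;
    *rollDeg  = std::atan2(-a.x, a.z) * 180.0f / kPi;
}

int main() {
    Accel a{0.0f, 0.5f, 0.87f};  // example: apparatus tilted about the x-axis
    float pitch, roll;
    TiltFromGravity(a, &pitch, &roll);
    std::printf("pitch=%.1f deg roll=%.1f deg\n", pitch, roll);
}
```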
  • the angular velocity sensor 40 is connected to the information processing section 31 .
  • the angular velocity sensor 40 detects angular velocities generated about three axes (x, y, and z axes in the present embodiment) of the game apparatus 10 , respectively, and outputs data indicating the detected angular velocities (angular velocity data) to the information processing section 31 .
  • the angular velocity sensor 40 is provided, for example, within the lower housing 11 .
  • the information processing section 31 receives the angular velocity data output from the angular velocity sensor 40 , and calculates the orientation and the motion of the game apparatus 10 .
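Similarly, the sketch below derives an orientation change from the angular velocity data by integrating the per-axis angular velocities over each processing cycle. A real implementation would typically maintain a quaternion or rotation matrix, so this per-axis integration is only illustrative.

```cpp
#include <cstdio>

// Angular velocities about the x, y, z axes (degrees per second), as
// output by the angular velocity sensor 40.
struct AngularVelocity { float x, y, z; };

// Accumulated rotation angles about the three axes (degrees).
struct Orientation { float x = 0, y = 0, z = 0; };

// Integrate one sample of angular velocity over a time step dt.
void Integrate(Orientation& o, const AngularVelocity& w, float dt) {
    o.x += w.x * dt;
    o.y += w.y * dt;
    o.z += w.z * dt;
}

int main() {
    Orientation o;
    AngularVelocity w{0.0f, 90.0f, 0.0f};   // turning about the y-axis
    const float dt = 1.0f / 60.0f;          // one processing cycle
    for (int frame = 0; frame < 60; ++frame) Integrate(o, w, dt);
    std::printf("after 1 s: y rotation = %.1f deg\n", o.y);  // about 90 deg
}
```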
  • the RTC 38 and the power circuit 41 are connected to the information processing section 31 .
  • the RTC 38 counts time, and outputs the counted time to the information processing section 31 .
  • the information processing section 31 calculates the current time (date) based on the time counted by the RTC 38 .
  • the power circuit 41 controls the power from the power supply (the rechargeable battery accommodated in the lower housing 11 , which is described above) of the game apparatus 10 , and supplies power to each component of the game apparatus 10 .
  • the I/F circuit 42 is connected to the information processing section 31 .
  • a microphone 43 , a loudspeaker 44 , and the touch panel 13 are connected to the I/F circuit 42 .
  • the loudspeaker 44 is connected to the I/F circuit 42 through an amplifier not shown in the figures.
  • the microphone 43 detects a sound from the user, and outputs a sound signal to the I/F circuit 42 .
  • the amplifier amplifies the sound signal from the I/F circuit 42 , and outputs the sound from the loudspeaker 44 .
  • the I/F circuit 42 includes: a sound control circuit that controls the microphone 43 and the loudspeaker 44 (amplifier); and a touch panel control circuit that controls the touch panel 13 .
  • the sound control circuit performs A/D conversion and D/A conversion on the sound signal, and converts the sound signal to sound data in a predetermined format.
  • the touch panel control circuit generates touch position data in a predetermined format, based on a signal from the touch panel 13 , and outputs the touch position data to the information processing section 31 .
  • the touch position data indicates the coordinates of the position (touch position), on the input surface of the touch panel 13 , at which an input has been provided. It should be noted that the touch panel control circuit reads a signal from the touch panel 13 , and generates the touch position data, once in a predetermined time.
  • the information processing section 31 acquires the touch position data, and thereby recognizes the touch position, at which the input has been provided on the touch panel 13 .
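A minimal sketch of how the touch position data might be represented and consumed is shown below. The structure layout, the ReadTouchPanel helper, and the use of std::optional to model "no touch" are assumptions for illustration, not the actual format generated by the touch panel control circuit.

```cpp
#include <cstdio>
#include <optional>

// Touch position data in a hypothetical format: coordinates on the
// input surface of the touch panel 13, produced once per sampling
// interval by the touch panel control circuit.
struct TouchPosition { int x; int y; };

// Returns the most recent touch position, or no value when the panel
// is not being pressed.
std::optional<TouchPosition> ReadTouchPanel() {
    // Placeholder: a real implementation would read the I/F circuit 42.
    return TouchPosition{120, 80};
}

int main() {
    if (auto touch = ReadTouchPanel()) {
        std::printf("touch at (%d, %d)\n", touch->x, touch->y);
    } else {
        std::printf("no touch input\n");
    }
}
```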
  • An operation button 14 includes the operation buttons 14 A through 14 L described above, and is connected to the information processing section 31 .
  • Operation data is output from the operation button 14 to the information processing section 31 , the operation data indicating the states of inputs provided to the respective operation buttons 14 A through 14 L (indicating whether or not the operation buttons 14 A through 14 L have been pressed).
  • the information processing section 31 acquires the operation data from the operation button 14 , and thereby performs processes in accordance with the inputs provided to the operation button 14 .
  • the lower LCD 12 and the upper LCD 22 are connected to the information processing section 31 .
  • the lower LCD 12 and the upper LCD 22 each display an image in accordance with an instruction from the information processing section 31 (the GPU 312 ).
  • the information processing section 31 causes the lower LCD 12 to display an image for a hand-drawn image input operation, and causes the upper LCD 22 to display an image acquired from either one of the outer capturing section 23 and the inner capturing section 24 .
  • the information processing section 31 causes the upper LCD 22 to display a stereoscopic image (stereoscopically visible image) using a right-eye image and a left-eye image that are captured by the outer capturing section 23 , or causes the upper LCD 22 to display a planar image using one of a right-eye image and a left-eye image that are captured by the outer capturing section 23.
  • the information processing section 31 is connected to an LCD controller (not shown) of the upper LCD 22 , and causes the LCD controller to set the parallax barrier to on/off.
  • when the parallax barrier is on in the upper LCD 22 , a right-eye image and a left-eye image that are stored in the VRAM 313 of the information processing section 31 (that are captured by the outer capturing section 23 ) are output to the upper LCD 22.
  • the LCD controller repeatedly alternates the reading of pixel data of the right-eye image for one line in the vertical direction, and the reading of pixel data of the left-eye image for one line in the vertical direction, and thereby reads the right-eye image and the left-eye image from the VRAM 313 .
  • the right-eye image and the left-eye image are each divided into strip images, each of which has one line of pixels placed in the vertical direction, and an image including the divided left-eye strip images and the divided right-eye strip images alternately placed is displayed on the screen of the upper LCD 22 .
  • the user views the images through the parallax barrier of the upper LCD 22 , whereby the right-eye image is viewed with the user's right eye, and the left-eye image is viewed with the user's left eye. This causes the stereoscopically visible image to be displayed on the screen of the upper LCD 22 .
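The following sketch illustrates the column interleaving described above: one-pixel-wide vertical strips are taken alternately from the left-eye image and the right-eye image to build the frame shown behind the parallax barrier. The pixel format and the assignment of even/odd columns to a particular eye are assumptions for illustration.

```cpp
#include <cstdint>
#include <vector>

// Interleave a left-eye image and a right-eye image into a single
// frame by alternating one-pixel-wide vertical strips (columns).
// Images are row-major, one value per pixel, and the same size.
std::vector<uint32_t> InterleaveColumns(const std::vector<uint32_t>& left,
                                        const std::vector<uint32_t>& right,
                                        int width, int height) {
    std::vector<uint32_t> out(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Even columns from one image, odd columns from the other;
            // which eye maps to which column depends on the barrier.
            const auto& src = (x % 2 == 0) ? left : right;
            out[y * width + x] = src[y * width + x];
        }
    }
    return out;
}

int main() {
    const int w = 8, h = 2;
    std::vector<uint32_t> left(w * h, 0xAAAAAAAA), right(w * h, 0xBBBBBBBB);
    auto frame = InterleaveColumns(left, right, w, h);
    return frame[0] == 0xAAAAAAAA ? 0 : 1;
}
```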
  • the outer capturing section 23 and the inner capturing section 24 are connected to the information processing section 31 .
  • the outer capturing section 23 and the inner capturing section 24 each capture an image in accordance with an instruction from the information processing section 31 , and output data of the captured image to the information processing section 31 .
  • the information processing section 31 gives either one of the outer capturing section 23 and the inner capturing section 24 an instruction to capture an image, and the capturing section that has received the instruction captures an image, and transmits data of the captured image to the information processing section 31 .
  • the user selects the capturing section to be used, through an operation using the touch panel 13 and the operation button 14 .
  • the information processing section 31 (the CPU 311 ) detects that a capturing section has been selected, and the information processing section 31 gives the selected one of the outer capturing section 23 and the inner capturing section 24 an instruction to capture an image.
  • when started by an instruction from the information processing section 31 (CPU 311 ), the outer capturing section 23 and the inner capturing section 24 perform capturing at, for example, a speed of 60 images per second.
  • the captured images captured by the outer capturing section 23 and the inner capturing section 24 are sequentially transmitted to the information processing section 31 , and displayed on the upper LCD 22 or the lower LCD 12 by the information processing section 31 (GPU 312 ).
  • the captured images are stored in the VRAM 313 , are output to the upper LCD 22 or the lower LCD 12 , and are deleted at predetermined times.
  • images are captured at, for example, a speed of 60 images per second, and the captured images are displayed, whereby the game apparatus 10 can display views in the imaging ranges of the outer capturing section 23 and the inner capturing section 24 , on the upper LCD 22 or the lower LCD 12 in real time.
  • the 3D adjustment switch 25 is connected to the information processing section 31 .
  • the 3D adjustment switch 25 transmits to the information processing section 31 an electrical signal in accordance with the position of the slider.
  • the 3D indicator 26 is connected to the information processing section 31 .
  • the information processing section 31 controls whether or not the 3D indicator 26 is to be lit on. When, for example, the upper LCD 22 is in the stereoscopic display mode, the information processing section 31 lights on the 3D indicator 26 .
  • the game apparatus 10 provides the function of collecting face images by acquiring and saving, for example, face images of people through the inner capturing section 24 , the outer capturing section 23 , or the like in accordance with an operation of a user (hereinafter also referred to as a “player”).
  • the user executes a game (first game) using an acquired face image, and when the result of the game has been successful, the user can save the acquired image.
  • the user can acquire a face image, which is a target to be saved, from: an image captured by the inner capturing section 24 , the outer capturing section 23 , or the like before executing the first game; an image acquired by an application different from the first game before executing the first game; an image captured by the inner capturing section 24 , the outer capturing section 23 , or the like during the execution of the first game; or the like.
  • the game apparatus 10 saves the face image acquired before the first game or during the first game, in a saved data storage area Do (see FIG. 11 ), which is accessible during the execution of the game processing.
  • the user repeats a similar operation on the game apparatus 10 , and thereby can collect a plurality of face images, add the face images to the saved data storage area Do, and accumulate the face images.
  • the saved data storage area Do is an area accessible to the game apparatus 10 that is executing the game. Accordingly, the acquired face image is saved in the saved data storage area Do, and thereby is available in the subsequent processes. Further, the game apparatus 10 reads data accumulated in the saved data storage area Do, and thereby displays a list of face images collected as a result of the first game. Then, the game apparatus 10 executes a game (second game) using a face image selected by the user, or a face image automatically selected by the game apparatus 10 , from among the displayed list of face images.
  • the games executed by the game apparatus 10 are, for example, each a game where the user makes an attack on enemy objects EO by aiming at them, and destroys the enemy objects EO.
  • a face image acquired by the user and yet to be saved in the saved data storage area Do is mapped as a texture onto a character object, such as an enemy object EO.
  • the user can execute the first game by acquiring a desired face image through a capturing section, such as a camera. Then, when having succeeded in the first game, the user can save the acquired face image in the saved data storage area Do in an accumulating manner, cause a list of the face images to be displayed, and use the face images in the second game.
  • “in an accumulating manner” means that when the user has acquired a new face image and further succeeded in the first game, the new face image is added.
  • the user can select a desired face image from among the collected face images, and create an enemy object EO. Then, the user can execute a game using the enemy object EO created using the desired face image, for example, a game where the user destroys the created enemy object EO. Without an operation of the user, however, the game apparatus 10 may automatically select a face image, for example, randomly from the saved data storage area Do, and may create an enemy object EO or the like.
  • a character object may be created using a face image already collected in the saved data storage area Do, and may be caused to appear in the game together with a character object created using a face image yet to be saved in the saved data storage area Do.
  • when a plurality of enemy objects need to be distinguished from one another, the enemy objects are referred to as, for example, “enemy objects EO 1 , EO 2 . . . ”, and when the enemy objects EO 1 , EO 2 , and the like are collectively referred to, or a plurality of enemy objects do not need to be distinguished from one another, the enemy objects are referred to as “enemy objects EO”.
  • FIG. 7 is an example of face images displayed as a list on the upper LCD 22 of the game apparatus 10 .
  • the face images displayed as a list are each obtained by texture-mapping image data acquired by, for example, the inner capturing section 24 , the left outer capturing section 23 a , or the right outer capturing section 23 b , onto a three-dimensional model of a human facial surface.
  • the image data is attached as a texture to, for example, the surface of a three-dimensional model formed by combining a plurality of polygons.
  • the face images are not limited to those obtained by performing texture-mapping on three-dimensional models.
  • the game apparatus 10 may display, as a face image, image data held in a simple two-dimensional pixel array.
  • among the face images displayed as a list, at least one face image may be held in a simple two-dimensional pixel array, and the other face images may be obtained by performing texture-mapping on three-dimensional models.
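As a rough sketch of the texture mapping described above, the following code samples a two-dimensional face image at the texture (UV) coordinates of a vertex of a three-dimensional face model. The data layout, nearest-pixel sampling, and the SampleTexture helper are illustrative assumptions; a real renderer would interpolate UVs across each polygon and filter the texture.

```cpp
#include <cstdint>
#include <vector>

// A 2D face image as acquired from one of the capturing sections:
// a simple row-major pixel array.
struct FaceImage {
    int width, height;
    std::vector<uint32_t> pixels;  // RGBA
};

// One vertex of the three-dimensional face model, carrying texture
// (UV) coordinates into the face image.
struct Vertex {
    float x, y, z;   // position on the facial surface model
    float u, v;      // texture coordinates in [0, 1]
};

// Sample the face image at a vertex's UV coordinates (nearest pixel).
uint32_t SampleTexture(const FaceImage& img, const Vertex& v) {
    int px = static_cast<int>(v.u * (img.width - 1));
    int py = static_cast<int>(v.v * (img.height - 1));
    return img.pixels[py * img.width + px];
}

int main() {
    FaceImage img{4, 4, std::vector<uint32_t>(16, 0xFFCC9966)};
    Vertex noseTip{0.0f, 0.0f, 1.0f, 0.5f, 0.5f};
    return SampleTexture(img, noseTip) == 0xFFCC9966 ? 0 : 1;
}
```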
  • a face image G 1 is surrounded by a heavy line L 1 .
  • the heavy line L 1 indicates that the face image G 1 is in the state of being selected.
  • “The state of being selected” means the state of being selected as a processing target by, for example, the user operating the operation buttons 14 or the like.
  • the state of being selected is also referred to as “the state of being focused on”. For example, each time the user presses the operation buttons 14 , the face image in the state of being selected is switched from left to right, or top to bottom. For example, when the user has pressed the right direction of the cross button 14 A in the state of FIG. 7 , the face image in the state of being selected transfers to the right, such as from the face image G 1 to a face image G 2 and from the face image G 2 to a face image G 3.
  • “The face image in the state of being selected transfers” means that the face image surrounded by the heavy line L 1 is switched on the screen of the upper LCD 22 .
  • a horizontal row of face images is referred to as a “tier”.
  • when a face image G 4 at the right end of one of the tiers is in the state of being selected, if the user further presses the right direction of the cross button 14 A, the face image in the state of being selected is switched to a face image G 5 at the left end of the next lower tier.
  • conversely, when the user presses the left direction of the cross button 14 A, the face image in the state of being selected transfers upward and to the left, such as from the face image G 5 to the face image G 4 and from the face image G 4 to the face image G 3.
  • the switching of the state of being selected is not limited to the pressing of the left and right directions of the cross button 14 A, and the state of being selected may be switched by pressing the up and down directions. Further, the switching of the state of being selected is not limited to the pressing of the cross button 14 A, and the image in the state of being selected may be switched by pressing other operation buttons 14 , such as the operation button 14 B (A button). Alternatively, the face image in the state of being selected may be switched by performing an operation on the touch panel 13 of the lower LCD 12 . For example, the game apparatus 10 displays in advance on the lower LCD 12 a list of the face images similar to the list of the face images displayed on the upper LCD 22 .
  • the game apparatus 10 may detect an operation on the touch panel 13 , and thereby detect which face image has entered the state of being selected. Then, the game apparatus 10 may display the face image having entered the state of being selected, e.g., the face image G 1 , by surrounding the face image G 1 by the heavy line L 1 .
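A hypothetical sketch of the selection switching described above follows. It assumes the face images are stored tier by tier in a single array, so that incrementing the selected index moves the selection to the right and, at the right end of a tier, naturally wraps to the left end of the next lower tier; the number of face images per tier (kColumns) is an assumption.

```cpp
#include <cstdio>

// Assumed number of face images per tier (row) in the list.
constexpr int kColumns = 4;

// Move the selection one image to the right; stays put at the last image.
int MoveRight(int selected, int faceCount) {
    return (selected + 1 < faceCount) ? selected + 1 : selected;
}

// Move the selection one image to the left; stays put at the first image.
int MoveLeft(int selected) {
    return (selected > 0) ? selected - 1 : selected;
}

int main() {
    int selected = 3;                    // right end of the first tier
    selected = MoveRight(selected, 12);  // wraps to the next lower tier
    std::printf("tier=%d column=%d\n", selected / kColumns, selected % kColumns);
    selected = MoveLeft(selected);       // back to the previous tier's right end
    std::printf("tier=%d column=%d\n", selected / kColumns, selected % kColumns);
}
```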
  • the user can browse a list of currently already acquired face images, using the screen shown in FIG. 7 displayed on the upper LCD 22 . Further, the user can cause a desired face image, from among the list of currently already acquired images, to enter the state of being selected. Then, for example, the user can fix the state of being selected by selecting a predetermined determination button, such as a determination button displayed on the lower LCD 12 , using the touch panel 13 , the operation buttons 14 , or the like. Furthermore, for example, the user may press the button 14 C (also referred to as a “B button”), whereby the screen of the list of the face images on the upper LCD 22 is closed, and a screen that waits for a menu selection (not shown) is displayed.
  • FIG. 8 is another example of the face images displayed as a list.
  • the face image G 2 is in the state of being selected, and is surrounded by a heavy line L 2 .
  • in FIG. 8 , face images related to the face image G 2 are reacting. For example, a heart mark is displayed near a face image G 0 , and the face image G 0 is giving a look with one eye closed to the face image G 2 in the state of being selected.
  • the face image G 3 and a face image G 7 are also turning their faces toward and giving looks to the face image G 2 in the state of being selected.
  • the reactions of the face images related to the face image in the state of being selected are not limited to actions such as: turning its face; giving a look with one eye closed while a heart mark is displayed near the face image; and turning its face and giving a look.
  • a related face image may show reactions such as smiling and producing a voice.
  • a face image unrelated to the face image in the state of being selected may change its expression from a smiling expression to a straight expression.
  • a face image unrelated to the face image in the state of being selected may turn in the direction opposite to the direction of the face image in the state of being selected.
  • a related face image may be defined at the stage of, for example, the acquisition of a face image.
  • groups for classifying face images may be set in advance, and when a face image is acquired, the user may input the group to which the face image to be acquired belongs.
  • a group of face images may be defined in accordance with the progression of the game, and the face images may be classified. For example, when the face image G 1 has been newly acquired using the face image G 0 during the progression of the game, it may be determined that the face image G 0 and the face image G 1 are face images that are related to each other and belong to the same group.
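The following sketch shows one possible data layout for such grouping: each face image record carries a group identifier, and the face images related to the selected one are simply those in the same group. The field names and the way groups are assigned are assumptions for illustration.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical record associating a collected face image with a group.
// The group could be entered by the user at acquisition time or
// assigned as the game progresses.
struct FaceRecord {
    std::string id;     // e.g., "G0", "G1", ...
    int group;          // face images in the same group are "related"
};

// Collect the indices of the face images related to the selected one,
// i.e., belonging to the same group, excluding the selected image itself.
std::vector<int> RelatedTo(const std::vector<FaceRecord>& faces, int selected) {
    std::vector<int> related;
    for (int i = 0; i < static_cast<int>(faces.size()); ++i) {
        if (i != selected && faces[i].group == faces[selected].group)
            related.push_back(i);
    }
    return related;
}

int main() {
    std::vector<FaceRecord> faces{{"G0", 1}, {"G1", 1}, {"G2", 1}, {"G3", 2}};
    for (int i : RelatedTo(faces, 2))        // G2 is selected
        std::printf("%s reacts\n", faces[i].id.c_str());
}
```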
  • FIGS. 9 and 10 are examples of screens for attaching a face image to an enemy object EO, which is one of game characters.
  • FIG. 9 is an example of an enemy object EO's head selection screen.
  • when the user has performed an operation on the touch panel 13 to select a menu “select boss” from menus displayed on the lower LCD 12 of the game apparatus 10 , a list of the head shapes of enemy objects EO prepared in the game apparatus 10 as shown in FIG. 9 is displayed.
  • Such an operation is not limited to an operation on the touch panel 13 , and the list of the head shapes as shown in FIG. 9 may be displayed by, for example, an operation on the operation buttons 14 or the like.
  • the head shape H 1 includes: a facial surface portion H 12 formed of a three-dimensional model; and a peripheral portion H 13 surrounding the facial surface portion H 12 .
  • An enemy object EO to appear in the game is formed as shown in FIGS. 7 and 8 by texture-mapping a face image onto the facial surface portion H 12 , which is a three-dimensional model.
  • the peripheral portion H 13 may have a shape suggesting the feature of the enemy object EO to appear in the game.
  • when the peripheral portion H 13 has a shape representing a helmet, it is possible to represent an aggressive mental picture of the enemy object EO.
  • in FIG. 9 , three types of head shapes, namely the head shapes H 1 , H 2 , and H 3 , are illustrated on the list of the head shapes, which has two rows and four columns, and undefined marks H 0 are displayed in addition to these head shapes.
  • the head shapes H 1 through H 3 are merely illustrative, and the types of head shapes are not limited to three types.
  • a new head shape may be added from a medium of an upgraded game program, a website where parts in the game are provided on the Internet, or the like.
  • a label LB with “boss” indicates that the head shape H 1 has been caused by the user to enter the state of being selected.
  • the label LB can be moved by, for example, the cross button 14 A.
  • the user performs an operation on the cross button 14 A or the like to move the label LB, and thereby can cause, for example, the head shape H 2 or H 3 to enter the state of being selected.
  • the user may press the operation button 14 B (A button) or the like to determine the state of being selected.
  • the determination of the state of being selected places the label LB, and fixes the selection of the head shape in the state of being selected, e.g., the head shape H 1 in the example of FIG. 9 .
  • the user may press the operation button 14 C (B button) when the screen shown in FIG. 9 is displayed on the upper LCD 22 , whereby the enemy object EO's head selection screen is closed, and display returns to the previous operation screen.
  • FIG. 10 is a diagram illustrating a face image selection screen.
  • when the user determines the state of being selected by pressing the operation button 14 B (A button) or the like, the screen shown in FIG. 10 is displayed.
  • the screen shown in FIG. 10 is similar to those of FIGS. 7 and 8 , but is different from those of FIGS. 7 and 8 in that the peripheral portion H 13 of the head shape selected in FIG. 9 is added to the face image in the state of being selected.
  • the user can operate the cross button 14 A or the like to switch the face image in the state of being selected. That is, the user can switch the face image in the state of being selected, such as from the face image G 0 to G 1 and from the face image G 1 to G 2 . Then, display is performed such that the face image caused to enter the state of being selected is texture-mapped onto the facial surface portion of the head shape. For example, in the example of FIG. 10 , the face image G 2 is texture-mapped onto the facial surface portion H 12 of the head shape H 1 selected on the screen shown in FIG. 9 , and is displayed in combination with the peripheral portion H 13 .
  • Such a combination of the peripheral portion H 13 and the face image G 2 is displayed, whereby an enemy object EO is temporarily created.
  • the user imagines a mental picture of the enemy to confront in the game, by the temporarily displayed enemy object EO.
  • the user can operate the cross button 14 A or the like to switch the face image in the state of being selected, and thereby can switch the face image of the enemy object EO. That is, the user can switch the faces of the enemy objects EO, one after another, and thereby can create an enemy object EO that fits a mental picture of the enemy to fight with in the game.
  • in the game apparatus 10 , for example, the face images collected by succeeding in the first game and accumulated in the saved data storage area Do as described above are used in the subsequent second game. That is, the game apparatus 10 performs game processing using enemy objects EO created using the collected face images. For example, in accordance with an operation of the user, the game apparatus 10 performs a process termed a “cast determination process” before the execution of the game, and generates enemy objects EO that fit mental pictures formed by the user, or the game apparatus 10 automatically generates enemy objects EO before the execution of the game.
  • “Automatically” means that for example, the game apparatus 10 can generate enemy objects EO by selecting a required number of face images, i.e., generate enemy objects EO to appear in the game, randomly from among the collected face images. Further, for example, in accordance with the history of the game processing performed by the user in the past, the game apparatus 10 may create enemy objects EO by selecting face images expected to be desired next by the user, based on the properties, the taste, and the like of the user.
  • the game apparatus 10 may select a face image to be used next, based on face images, together with the attributes of the subjects of the face images, such as age, gender, friendship (family, friends, and relationships in work, school, and community), or, if a subject is a living thing such as a pet, the ownership relationship of the subject. Further, for example, the game apparatus 10 may select a face image to be used next, based on the performances of the user in the game executed in the past.
  • the game apparatus 10 performs game processing (the second game) using the enemy objects EO created by the specification made by such operations of the user, or created by the processing of the game apparatus 10 .
  • a character object that appears in the game, described above using the term “enemy object EO” as an example, is not limited to an object having an adversarial relationship with the user, and may be a friend character object.
  • the present invention is not limited to a game where there are relationships such as enemies and friends, and may be a game where a player object representing the user themselves appears.
  • the present invention may be, for example, a game where an object termed an “agent” appears, the object assisting the user in executing the game.
  • the game apparatus 10 executes a game where various character objects, such as the enemy objects EO described above, appear.
  • to these character objects, the face images collected by the user succeeding in the first game are attached by texture mapping or the like.
  • that is, character objects including the face images collected by the user themselves appear in the game.
  • the user can execute a game where the real-world relationships with the people represented by the face images or with the living things of the face images are reflected on the various character objects, that is, a game including emotions, such as affection, friendliness, favorable impression, and criticism.
  • in FIG. 10 , the face image G 2 is in the state of being selected in combination with the peripheral portion H 13 of the head shape H 1 , and a related face image, e.g., the face image G 4 , is smiling with its face turned toward the face image G 2 and giving a look to the face image G 2.
  • the face image G 5 is giving an envious look to the face image G 2 with its face turned upward.
  • face images G 8 and G 9 are also giving looks to the face image G 2 .
  • the face images other than the face images G 4 , G 5 , G 8 , and G 9 are not showing any reactions to the fact that the face image G 2 has entered the state of being selected. Such differences in reaction make it possible to represent the relationships of affinity between a plurality of face images.
  • the game apparatus 10 can perform drawing by introducing intimacy relationships between people in the real world, into a virtual world represented by the game apparatus 10 .
  • FIG. 11 is a diagram showing an example of various data stored in the main memory 32 by executing the image processing program.
  • programs for performing the processing of the game apparatus 10 are included in a memory built into the game apparatus 10 (e.g., the data storage internal memory 35 ), or included in the external memory 45 or the data storage external memory 46 , and the programs are: loaded from the built-in memory, or loaded from the external memory 45 through the external memory I/F 33 or from the data storage external memory 46 through the data storage external memory I/F 34 , into the main memory 32 when the game apparatus 10 is turned on; and executed by the CPU 311 .
  • the main memory 32 stores the programs loaded from the built-in memory, the external memory 45 , or the data storage external memory 46 , and temporary data generated in the image processing.
  • the data stored in the main memory 32 includes: operation data Da; real camera image data Db; real world image data Dc; boundary surface data Dd; back wall image data De; enemy object data Df; bullet object data Dg; score data Dh; motion data Di; virtual camera data Dj; rendered image data Dk; display image data Dl; aiming cursor image data Dm; management data Dn; a saved data storage area Do; and the like.
  • in addition, a group of various programs Pa that constitute the image processing program is stored in the main memory 32.
  • the operation data Da indicates operation information of an operation of the user on the game apparatus 10 .
  • the operation data Da includes controller data Da 1 and angular velocity data Da 2 .
  • the controller data Da 1 indicates that the user has operated a controller, such as the operation buttons 14 or the analog stick 15 , of the game apparatus 10.
  • the angular velocity data Da 2 indicates the angular velocities detected by the angular velocity sensor 40 .
  • the angular velocity data Da 2 includes x-axis angular velocity data indicating an angular velocity about the x-axis, y-axis angular velocity data indicating an angular velocity about the y-axis, and z-axis angular velocity data indicating an angular velocity about the z-axis, the angular velocities detected by the angular velocity sensor 40 .
  • the operation data from the operation buttons 14 or the analog stick 15 and the angular velocity data from the angular velocity sensor 40 are acquired per unit of time in which the game apparatus 10 performs processing (e.g., 1/60 seconds), and are stored in the controller data Da 1 and the angular velocity data Da 2 , respectively, in accordance with the acquisition, to thereby be updated.
  • controller data Da 1 and the angular velocity data Da 2 are each updated every one-frame period, which corresponds to the processing cycle.
  • the controller data Da 1 and the angular velocity data Da 2 may be updated in another processing cycle.
  • the controller data Da 1 may be updated in each cycle of detecting the operation of the user on a controller, such as the operation buttons 14 or the analog stick 15 , and the updated controller data Da 1 may be used in each processing cycle.
  • in this case, the cycles of updating the controller data Da 1 and the angular velocity data Da 2 differ from the processing cycle.
  • the real camera image data Db indicates a real camera image captured by either one of the outer capturing section 23 and the inner capturing section 24 .
  • the real camera image data Db is updated using a real camera image captured by either one of the outer capturing section 23 and the inner capturing section 24 .
  • the cycle of updating the real camera image data Db using the real camera image captured by the outer capturing section 23 or the inner capturing section 24 may be the same as the unit of time of the processing of the game apparatus 10 (e.g., 1/60 seconds), or may be shorter than this unit of time.
  • the real camera image data Db may be updated as necessary, independently of the processing described later.
  • the process may be performed invariably using the most recent real camera image indicated by the real camera image data Db.
  • the real camera image data Db is data indicating a real camera image captured by the outer capturing section 23 (e.g., the left outer capturing section 23 a ).
  • a boundary surface 3 is introduced that is obtained by texture-mapping a real camera image captured by a real camera of the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24 ).
  • the real world image data Dc is data for generating a real world image that seems to be present on the boundary surface 3 , using the real camera image captured by the real camera of the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24 ).
  • the real world image data Dc includes texture data of the real camera image for attaching the real world image to the boundary surface (a screen object in the display range of a virtual camera). Further, in a second drawing method described later, for example, the real world image data Dc includes: data of a planar polygon for generating the real world image; texture data of the real camera image to be mapped onto the planar polygon; and data indicating the position of the planar polygon in a virtual space (the position from a real world drawing camera described later).
  • the boundary surface data Dd is data for, in combination with the real world image data Dc described above, generating the real world image that seems to be present on the boundary surface 3 .
  • in the first drawing method, the boundary surface data Dd is data concerning the screen object, and includes: opening determination data (corresponding to data of an α-texture described later) indicating the state (e.g., the presence or absence of an opening) of each point included in the boundary surface 3 ; data indicating the placement position of the boundary surface 3 in the virtual space (the coordinates of the boundary surface 3 in the virtual space); and the like.
  • in the second drawing method, the boundary surface data Dd is data for representing an opening in a planar polygon of the real world image, and includes: opening determination data (corresponding to data of an α-texture described later) indicating the state (e.g., the presence or absence of an opening) of each point included in the boundary surface 3 ; data indicating the placement position of the boundary surface 3 in the virtual space (the coordinates of the boundary surface 3 in the virtual space); and the like.
  • the data indicating the placement position of the boundary surface 3 in the virtual space is, for example, conditional equations for a spherical surface (relational expressions for defining a spherical surface in the virtual space), and indicates the existence range of the boundary surface 3 in the virtual space.
  • the opening determination data indicating the state of being open is, for example, two-dimensional (e.g., a rectangular shape having 2048 pixels × 384 pixels) texture data in which the alpha value (non-transparency) of each point can be set.
  • the alpha value is a value of from “0” to “1”, with “0” being minimum and “1” being maximum.
  • the alpha value indicates transparent by “0”, and indicates non-transparent by “1”.
  • the opening determination data can indicate that a position where “0” is stored in the opening determination data is in the state of being open, and a position where “1” is stored is not in the state of being open.
  • the alpha value can be set in, for example, an image of a game world generated in the game apparatus 10 , or a pixel block unit including a pixel or a plurality of pixels in the upper LCD 22 .
  • it should be noted that predetermined values over 0 but less than 1 (0.2 in the present embodiment) are stored in an unopen area of the opening determination data. These values are not used as they are when applied to the real world image; that is, alpha values of “0.2” stored in the opening determination data are handled as “1”. The alpha value of “0.2” is instead used to draw a shadow ES of each of the enemy objects EO described above.
  • the setting of the alpha value and the range of the alpha value described above do not limit the image processing program according to the present invention.
  • in the first drawing method, it is possible to generate the real world image having an opening by multiplying: the opening determination data corresponding to an area of the range of the visual space of the virtual camera; by color information (pixel values) of a texture of the real world image to be attached to the boundary surface 3.
  • in the second drawing method, it is possible to generate the real world image having an opening by multiplying: the opening determination data corresponding to an area of the range of the visual space of a virtual world drawing camera; by color information (pixel values) of the real world image (specifically, rendered image data of the real camera image rendered with a parallel projection described later using the real world image data Dc). This is because when alpha values of “0” stored at the position of the opening are multiplied by the color information of the real world image at the position, the values of the color information of the real world image are “0” (the state of being completely transparent).
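A minimal sketch of this multiplication is shown below: where the opening determination data stores an alpha value of 0, the color information of the real world image becomes 0 (completely transparent), exposing the second space behind it, while the value 0.2 stored in unopen areas is handled as 1 when applied to the real world image, as described above. The pixel structure and function names are assumptions for illustration.

```cpp
#include <cstdint>

// One pixel of the real world image, with color and non-transparency.
struct RGBA { float r, g, b, a; };

// Apply the opening determination data (an alpha value per point) to a
// real world image pixel. Values of 0.2 mark unopen areas (used
// elsewhere to draw enemy shadows) and are handled as fully opaque here;
// a value of 0 marks an opening and makes the pixel fully transparent.
RGBA ApplyOpening(const RGBA& realWorld, float openingAlpha) {
    float a = (openingAlpha > 0.0f) ? 1.0f : 0.0f;
    return {realWorld.r * a, realWorld.g * a, realWorld.b * a, a};
}

int main() {
    RGBA cameraPixel{0.8f, 0.6f, 0.4f, 1.0f};
    RGBA open   = ApplyOpening(cameraPixel, 0.0f);  // inside an opening
    RGBA unopen = ApplyOpening(cameraPixel, 0.2f);  // outside an opening
    return (open.a == 0.0f && unopen.a == 1.0f) ? 0 : 1;
}
```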
  • an image to be displayed on the upper LCD 22 is generated by rendering a virtual space image in which virtual objects are placed so as to include an object of the real world image to which the opening determination data is applied.
  • the virtual space image is rendered, taking into account the opening determination data. That is, the priority of each virtual object relative to the boundary surface (the priority relative to the real world image) is determined based on the opening determination data, and the virtual space image is generated by rendering each virtual object. Then, an image to be displayed on the upper LCD 22 is generated by combining the real world image with the virtual space image generated as described above.
  • the shape of the boundary surface 3 is a spherical surface (see FIGS. 20A and 20B ).
  • the shape of the opening determination data may be defined as rectangular. The opening determination data of this rectangular shape is mapped onto a central portion of the spherical surface as shown in FIGS. 20A and 20B , whereby it is possible to cause the points of the opening determination data to correspond to the points of the boundary surface.
  • the opening determination data is only data corresponding to the central portion of the spherical surface shown in FIG. 20A . Accordingly, the opening determination data may not be present depending on the orientation of the virtual camera (the virtual world drawing camera in the second drawing method).
  • the real world image is drawn as it is. That is, the real world image is drawn on the condition that alpha values of “1” are set.
  • the back wall image data De is data concerning a back wall BW, which is present in a second space 2 .
  • the back wall image data De includes: image data for generating an image of the back wall BW; data indicating the position of a polygon model defining the back wall BW in the virtual space; and the like.
  • the polygon model defining the back wall BW is typically a model that has a radius greater than that of the sphere shown in FIG. 20A , about a vertical axis extending through the position of the virtual camera (the virtual world drawing camera in the second drawing method), and has the same shape as that of the central portion of the sphere shown in FIG. 20A . That is, the model defining the back wall BW includes the boundary surface 3 . Further, the polygon model may be a planar polygon placed behind the position of an opening to be formed in the boundary surface 3 . Furthermore, each time an opening is formed in the boundary surface 3 , a planar polygon defining the projection surface of the opening may be placed in the second space 2 .
  • Image data (texture) to be attached to the polygon model of the back wall BW may be given data.
  • This image data represents another space (second space 2 ) existing behind the real world image, and therefore, the image data is preferably an image representing unreality, such as an image representing outer space, the sky, or an area in water, because it is possible to give the player a strange feeling as if an unreal space exists behind real space.
  • a texture of the back wall may represent landscapes that are not normally seen, such as a desert or a wilderness. As described above, the selection of a texture of the back wall BW allows the player to form a desired mental picture of another world hidden behind the real image represented as a background of the game world.
  • it is preferable that the image data is an image that can use repeated representations, such as an image of outer space.
  • when the image data is such an image, it is possible to draw an image of the back wall BW without specifying the position where the back wall BW is to be drawn in the virtual space. This is because if an image can use repeated representations, the image is drawn without depending on the position (the repeated pattern can be represented on the entire polygon model), as illustrated by the sketch below.
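  • the following is a small sketch of why a repeatable (tileable) texture, such as a starfield, needs no placement data: texture coordinates are simply wrapped with a modulo, so any region of the back wall polygon shows the same repeating pattern. The texture values and function name are illustrative.
```python
# Sample a repeating (tileable) texture: wrapping the coordinates with a
# modulo makes the drawn pattern independent of where on the back wall
# polygon it is sampled.
def sample_repeating(texture, u, v):
    h, w = len(texture), len(texture[0])
    return texture[v % h][u % w]

star_tile = [[0, 0, 1],
             [0, 1, 0],
             [1, 0, 0]]
# The same tile answers for any coordinate, however large.
print(sample_repeating(star_tile, 7, 11), sample_repeating(star_tile, 1007, 2011))
```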
  • the enemy object data Df is data concerning an enemy object EO, and includes substance data Df 1 , silhouette data Df 2 , and opening shape data Df 3 .
  • the substance data Df 1 is data for drawing the substance of the enemy object EO, and includes, for example, a polygon model defining a three-dimensional shape of the substance of the enemy object EO, and texture data to be mapped onto the polygon model.
  • the texture data may be, for example, a photograph of the face of the user or the like captured by each capturing section of the game apparatus 10 .
  • the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the texture data. In the present embodiment, it is assumed that an alpha value of “1” is defined for the texture data.
  • the silhouette data Df 2 is data for semi-transparently drawing in the real world image the shadow of the enemy object EO present in the second space 2 , and includes a polygon model and texture data to be attached to the polygon model.
  • this silhouette model includes eight planar polygons, and is placed at the same position as that of the enemy object EO present in the second space 2 .
  • the silhouette model to which a texture is attached is drawn, for example, semi-transparently, in the real world image as viewed from the virtual world drawing camera, whereby it is possible to represent the shadow of the enemy object EO present in the second space 2 .
  • the texture data of the silhouette data Df 2 may be, for example, images of the enemy object EO as viewed from all directions as shown in FIGS.
  • these images may each be an image obtained by simplifying the silhouette model of the enemy object EO.
  • the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the texture data to be attached to the silhouette model.
  • an alpha value of “1” is defined for the texture data in the shadow image portion, and an alpha value of “0” is defined in the portion where there is no shadow image (the peripheral portion).
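  • as a hedged illustration of how the silhouette texture and the shadow alpha value of “0.2” mentioned above might combine, the following sketch blends a shadow texel into the real world image; the blend formula and the shadow color are assumptions for this example only.
```python
# Blend an enemy silhouette into the real world image: texels whose texture
# alpha is 1 (the shadow image portion) are blended at the shadow strength
# 0.2; texels with alpha 0 (the peripheral portion) leave the image unchanged.
SHADOW_ALPHA = 0.2   # value used for drawing shadows in this embodiment

def blend_silhouette(dst_pixel, silhouette_texel_alpha, shadow_color=(0, 0, 0)):
    if silhouette_texel_alpha == 0:        # outside the shadow image
        return dst_pixel
    a = SHADOW_ALPHA
    return tuple(int((1 - a) * d + a * s) for d, s in zip(dst_pixel, shadow_color))

print(blend_silhouette((200, 150, 100), 1))   # darkened (shadowed) pixel
print(blend_silhouette((200, 150, 100), 0))   # untouched pixel
```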
  • the opening shape data Df 3 is data concerning the shape of an opening generated in the boundary surface 3 when the enemy object EO moves between a first space 1 and the second space 2 .
  • the opening shape data Df 3 is data for setting alpha values of “0” at the position in the opening determination data corresponding to the position in the boundary surface 3 where the opening is generated.
  • the opening shape data Df 3 is texture data that corresponds to the shape of the opening to be generated and has alpha values of “0”. It should be noted that in the present embodiment alpha values of “0” are set in the opening determination data for the shape indicated by the opening shape data Df 3 , the shape formed around the portion corresponding to the position through which the enemy object EO has passed in the boundary surface 3 . The image processing performed when the enemy object EO generates an opening in the boundary surface 3 will be described later.
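  • a minimal sketch of “punching” such an opening into the opening determination data follows; the circular shape stands in for the opening shape data Df 3 , and the mask size is illustrative.
```python
# Write alpha values of 0 into the opening determination data around the
# position where the enemy object passed through the boundary surface.
def punch_opening(mask, cx, cy, radius):
    """mask: 2-D list of alpha values; (cx, cy): crossing position."""
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                mask[y][x] = 0.0
    return mask

mask = [[1.0] * 8 for _ in range(8)]
punch_opening(mask, 3, 3, 2)
for row in mask:
    print(row)
```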
  • the bullet object data Dg is data concerning a bullet object BO, which is fired in accordance with an attack operation of the player.
  • the bullet object data Dg includes: a polygon model and bullet image (texture) data for drawing the bullet object BO; data indicating the placement direction and the placement position of the bullet object BO; and data indicating the moving velocity and the moving direction (e.g., a moving velocity vector) of the bullet object BO.
  • the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the bullet image data. In the present embodiment, an alpha value of “1” is defined for the bullet image data.
  • the score data Dh indicates the score of a game where the enemy object EO appears. For example, as described above, points are added to the score of the game when the user has vanquished the enemy object EO by an attack operation, and points are deducted from the score of the game when the enemy object EO has reached the position of the user (i.e., the placement position of the virtual camera in the virtual space).
  • the motion data Di indicates the motion of the game apparatus 10 in real space.
  • the motion of the game apparatus 10 is calculated from the angular velocities detected by the angular velocity sensor 40 .
  • the virtual camera data Dj is data concerning a virtual camera set in the virtual space.
  • the virtual camera data Dj includes data indicating the placement direction and the placement position of a virtual camera in the virtual space.
  • the virtual camera data Dj includes: data indicating the placement direction and the placement position of a real world drawing camera in the virtual space; and data indicating the placement direction and the placement position of a virtual world drawing camera in the virtual space.
  • the data indicating the placement direction and the placement position of the virtual camera in the virtual space in the first drawing method, and the data indicating the placement direction and the placement position of the virtual world drawing camera in the virtual space in the second drawing method change in accordance with the motion of the game apparatus 10 (angular velocities) indicated by the motion data Di.
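  • the following sketch shows one way the virtual camera orientation could be updated from the angular velocities of the motion data Di; the axis names, the frame time, and the simple Euler integration are assumptions for illustration only.
```python
# Integrate angular velocities over one processing cycle so that the
# virtual (world drawing) camera turns in the virtual space as the game
# apparatus turns in real space.
FRAME_TIME = 1.0 / 60.0   # one processing cycle

def update_camera(orientation, angular_velocity):
    """orientation in degrees, angular_velocity in degrees per second."""
    return {axis: orientation[axis] + angular_velocity[axis] * FRAME_TIME
            for axis in orientation}

cam = {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}
gyro = {"yaw": 90.0, "pitch": 0.0, "roll": 0.0}   # turning at 90 deg/s
for _ in range(60):                               # one second of frames
    cam = update_camera(cam, gyro)
print(cam)   # yaw is now approximately 90 degrees
```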
  • the virtual camera data Dj includes angle-of-view (drawing range) data of the virtual camera.
  • the rendered image data Dk is data concerning an image rendered by processing described later.
  • in the first drawing method, the real world image is rendered as an object in the virtual space
  • the rendered image data Dk includes rendered image data of the virtual space.
  • the rendered image data of the virtual space is data indicating a virtual world image obtained by rendering with a perspective projection from the virtual camera the virtual space where the enemy object EO, the bullet object BO, the boundary surface 3 (screen object) to which the real world image is applied as a texture, and the back wall BW are placed.
  • in the second drawing method, the real world image and the virtual world image are rendered by virtual cameras different from each other, and therefore, the rendered image data Dk includes rendered image data of the real camera image and rendered image data of the virtual space.
  • the rendered image data of the real camera image indicates the real world image obtained by rendering with a parallel projection from the real world image drawing camera a planar polygon on which a texture of the real camera image is mapped.
  • the rendered image data of the virtual space indicates the virtual world image obtained by rendering with a perspective projection from the virtual world drawing camera the virtual space where the enemy object EO, the bullet object BO, the boundary surface 3 , and the back wall BW are placed.
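  • to illustrate the two projections used by the second drawing method (a parallel projection for the real camera image and a perspective projection for the virtual space), the following sketch builds standard orthographic and perspective matrices; the parameter values are illustrative and are not taken from the patent.
```python
import math

def orthographic(width, height, near, far):
    # Parallel projection used for the rendered real camera image.
    return [[2 / width, 0, 0, 0],
            [0, 2 / height, 0, 0],
            [0, 0, -2 / (far - near), -(far + near) / (far - near)],
            [0, 0, 0, 1]]

def perspective(fov_y_deg, aspect, near, far):
    # Perspective projection used for the virtual space (enemy objects,
    # bullet objects, boundary surface, back wall).
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

real_pass_proj = orthographic(width=2.0, height=1.5, near=0.1, far=10.0)
virtual_pass_proj = perspective(fov_y_deg=60.0, aspect=400 / 240, near=0.1, far=100.0)
```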
  • the display image data Dl indicates a display image to be displayed on the upper LCD 22 .
  • in the first drawing method, a display image to be displayed on the upper LCD 22 is generated by a process of rendering the virtual space.
  • in the second drawing method, a display image to be displayed on the upper LCD 22 is generated by combining the rendered image data of the camera image with the rendered image data of the virtual space by a method described later.
  • the aiming cursor image data Dm is image data of an aiming cursor AL that is displayed on the upper LCD 22 .
  • the image data may be given data.
  • the data concerning each object includes information about the priority, which defines the priority of drawing.
  • the information about the priority uses alpha values. The relationship between the alpha values and the image processing will be described later.
  • the data concerning each object used for drawing includes data indicating whether or not a depth determination is to be made between the object and another.
  • the data is set such that a depth determination is valid between each pair of: the enemy object EO; the bullet object BO; a semi-transparent enemy object; an effect object; and the screen object (boundary surface 3 ).
  • the data is set such that a depth determination is valid “between the shadow planar polygon (silhouette data Df 2 ) and the enemy object EO (substance data Df 1 )”, “between the shadow planar polygon (silhouette data Df 2 ) and the bullet object BO”, “between the shadow planar polygon (silhouette data Df 2 ) and the semi-transparent enemy object”, and “between the shadow planar polygon (silhouette data Df 2 ) and the effect object”. Furthermore, the data is set such that a depth determination is invalid between the shadow planar polygon (silhouette data Df 2 ) and the screen object (boundary surface data Dd).
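  • the per-pair depth-determination settings described above can be pictured as a small lookup table; the object names and the default are assumptions made for this sketch.
```python
# Depth determination is valid for most pairs, but invalid between the
# shadow planar polygon and the screen object (boundary surface), so the
# shadow is always drawn over the boundary surface regardless of depth.
DEPTH_TEST = {
    frozenset({"enemy", "bullet"}): True,
    frozenset({"enemy", "screen"}): True,
    frozenset({"shadow", "enemy"}): True,
    frozenset({"shadow", "bullet"}): True,
    frozenset({"shadow", "screen"}): False,   # depth determination invalid
}

def depth_test_enabled(a, b):
    return DEPTH_TEST.get(frozenset({a, b}), True)

print(depth_test_enabled("shadow", "screen"))  # False
print(depth_test_enabled("shadow", "enemy"))   # True
```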
  • the management data Dn is data for managing: data to be processed by the game apparatus 10 , such as collected face images; data accumulated by the game apparatus 10 ; and the like.
  • the management data Dn includes face image management information Dn 1 , a face image attribute aggregate table Dn 2 , and the like.
  • the face image management information Dn 1 stores: the destination for storing the data of each face image (e.g., the address in the main memory 32 or the like); the source of acquiring the face image (e.g., the inner capturing section 24 or the outer capturing section 23 ); the attributes of the face image (e.g., the gender, the age, and the like of the subject of the face image); information of other face images related to the face image; and the like.
  • the face image attribute aggregate table Dn 2 stores, by attribute, the numbers of face images that the user has collected so far. For example, when the subjects of the collected face images are classified by gender, age, and the like, the collection achievement value of each category is stored. Examples of the data structures of the face image management information Dn 1 and the face image attribute aggregate table Dn 2 will be described later.
  • the saved data storage area Do is an area where, when the information processing section 31 executes the image processing program such as a game program, data to be processed by the information processing section 31 , the resulting data of the process of the information processing section 31 , and the like are saved.
  • data of a face image acquired by the game apparatus 10 through the inner capturing section 24 , the outer capturing section 23 , the wireless communication module 36 , the local communication module 37 , and the like is saved.
  • the information processing section 31 executes the first game in the state where a face image acquired by the game apparatus 10 is temporarily stored in the main memory 32 .
  • the information processing section 31 saves in the saved data storage area Do the face image temporarily stored in the main memory 32 .
  • the face image saved in the saved data storage area Do is available in the subsequent game processing or the like.
  • the structure of the saved data storage area Do is not particularly limited.
  • the saved data storage area Do may be placed in the same physical address space as that of a regular memory, so as to be accessible to the information processing section 31 .
  • the saved data storage area Do may allow in advance the information processing section 31 to secure (or allocate) a predetermined block unit or a predetermined page unit at a necessary time.
  • the saved data storage area Do may have a structure where connections are made by management information, such as pointers connecting blocks, as in the file system of a computer.
  • the saved data storage area Do may, for example, secure an individual area for each program executed by the game apparatus 10 . Accordingly, when a game program has been loaded into the main memory 32 , the information processing section 31 may access the saved data storage area Do (input and output data) based on management information or the like of the game program.
  • the saved data storage area Do of a program may be accessible to the information processing section 31 that is executing another program. With this, data processed in the program may be delivered to said another program.
  • the information processing section 31 that is executing the second game may create a character object by reading data of a face image saved in the saved data storage area Do as a result of the execution of the first game described later.
  • the saved data storage area Do is an example of a second storage area.
  • FIG. 12 is an example of the data structure of the face image management information Dn 1 for managing face images saved in the game apparatus 10 .
  • the game apparatus 10 stores data of saved face images in the face image management information Dn 1 , and thereby can display a list of the face images on the screen of the upper LCD 22 in the form of, for example, FIGS. 7 and 8 .
  • the face image management information Dn 1 is, for example, created as information in which a record is prepared for each face image.
  • the face image management information Dn 1 is, for example, saved in the data storage internal memory 35 or the data storage external memory 46 .
  • the elements of a record are illustrated by record 1 . Further, in FIG. 12 , details of record 2 and thereafter are not shown.
  • the information processing section 31 may, for example, save the total number of records of the face image management information Dn 1 , i.e., the total number of acquired face images, in the data storage internal memory 35 , the data storage external memory 46 , or the like.
  • the face image management information Dn 1 includes, for example, face image identification information, the address of face image data, the source of acquiring the face image, the estimation of gender, the estimation of age, and pieces of related face image information 1 through N.
  • FIG. 12 is an example of the face image management information Dn 1 , and this does not mean that face image management information is limited to the elements shown in FIG. 12 .
  • the face image identification information is information uniquely identifying the saved face image.
  • the face image identification information may be, for example, a serial number.
  • the address of face image data is, for example, the address where data of the face image is stored in the data storage internal memory 35 or the data storage external memory 46 .
  • a path name, a file name, and the like in the file system may be set as the address of face image data.
  • the source of acquiring the face image is, for example, information identifying the capturing device that has acquired the face image.
  • information identifying the inner capturing section 24 , the left outer capturing section 23 a , or the right outer capturing section 23 b is set.
  • when both the left outer capturing section 23 a and the right outer capturing section 23 b are used, information indicating both capturing sections is set.
  • when the face image has been acquired by a capturing device other than the inner capturing section 24 , the left outer capturing section 23 a , and the right outer capturing section 23 b , e.g., by a capturing device provided outside the game apparatus 10 , information indicating such a state (e.g., “other”) is set.
  • the case where the face image has been acquired by a capturing device provided outside the game apparatus 10 is, for example, the case where an image captured by another game apparatus 10 similar to the game apparatus 10 has been acquired through the external memory interface 33 , the wireless communication module 36 , the local communication module 37 , or the like.
  • examples of such a case also include the cases: where an image obtained by a camera not included in the game apparatus 10 has been acquired; where an image obtained by a scanner has been acquired; and where an image such as a video image obtained from a video device has been acquired, each image obtained through the external memory interface 33 , the wireless communication module 36 , or the like.
  • the estimation of gender is information indicating whether the face image is male or female.
  • the estimation of gender may be, for example, made by a process shown in another embodiment described later.
  • the estimation of age is information indicating the age of a person represented by the face image.
  • the estimation of age may be, for example, made by a process shown in another embodiment described later.
  • Each of the pieces of related image identification information 1 through N is information indicating another face image related to the face image.
  • pieces of face image identification information of up to N related other face images may be set.
  • the related other face images may be, for example, specified by an operation of the user through a GUI.
  • the information processing section 31 may detect, in the state where the user has operated the operation buttons 14 or the like to cause one or more face images related to the acquired face image to enter the state of being selected, an operation on the GUI of giving an instruction to set related images.
  • the acquired face image may be classified by categories prepared by the game apparatus 10 , such as themselves, friends, colleagues, and strangers.
  • face images belonging to the same category may be linked together using the pieces of related image identification information 1 through N.
  • an element “classification of face images” may simply be prepared, instead of preparing the entries of the pieces of related face image identification information 1 through N, so that categories such as the user themselves, friends, colleagues, and strangers may be set.
  • a fixed number is used for the pieces of related image identification information 1 through N.
  • the number N may be a variable number. In this case, the number N that is already set may be held in the face image management information Dn 1 .
  • the records of face images related to each other may be connected together by chains of pointers.
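  • as a hedged illustration of the record layout of the face image management information Dn 1 (FIG. 12), a record could be modeled as follows; the field names and types are assumptions, not the actual storage format.
```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FaceImageRecord:
    face_image_id: int                  # uniquely identifies the saved face image
    data_address: str                   # address (or path/file name) of the image data
    source: str                         # e.g. "inner", "outer_left", "outer_right", "other"
    gender_estimate: Optional[str] = None
    age_estimate: Optional[int] = None
    related_ids: List[int] = field(default_factory=list)   # up to N related face images

record1 = FaceImageRecord(face_image_id=1, data_address="saved/face_0001",
                          source="inner", gender_estimate="male",
                          age_estimate=28, related_ids=[2, 5])
print(record1)
```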
  • FIG. 13 shows the data structure of the face image attribute aggregate table Dn 2 .
  • the face image attribute aggregate table Dn 2 is a table where already acquired face images are classified by attribute, and the numbers of the classified images are aggregated.
  • the face image attribute aggregate table Dn 2 will also be referred to simply as an “aggregate table”.
  • the information processing section 31 saves the aggregate table shown in FIG. 13 in, for example, the data storage internal memory 35 or the data storage external memory 46 .
  • the aggregate table stores the number of acquired face images in each row defined by performing classification with the combination of gender (male or female) and an age bracket (under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over).
  • each row of the table shown in FIG. 13 includes elements such as gender, age, and the number of acquired face images.
  • that is, already acquired face images are classified, and the numbers of the classified face images are aggregated.
  • the categories and the attributes of face images are not limited to genders or age brackets that are shown in FIG. 13 .
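  • a small sketch of how the aggregate table Dn 2 could be keyed and updated follows; the age bracketing mirrors FIG. 13, while the data structure itself is an assumption made for illustration.
```python
from collections import Counter

def age_bracket(age):
    if age < 10:
        return "under 10"
    if age >= 70:
        return "70 or over"
    return f"{(age // 10) * 10}'s"

aggregate = Counter()   # rows keyed by (gender, age bracket)

def register_face(gender, age):
    # Increment the number of acquired face images in the corresponding row.
    aggregate[(gender, age_bracket(age))] += 1

register_face("male", 28)
register_face("female", 8)
register_face("male", 27)
print(aggregate)   # Counter({('male', "20's"): 2, ('female', 'under 10'): 1})
```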
  • the CPU 311 executes a boot program (not shown). This causes the programs stored in the built-in memory, the external memory 45 , or the data storage external memory 46 , to be expanded in the main memory 32 into the form of being executable by the CPU 311 .
  • the form of being executable by the CPU 311 means, for example, the form where machine instructions for the CPU 311 are written in a predetermined order and placed at appropriate addresses in the main memory 32 , so as to be readable by a control section that processes the machine instructions for the CPU 311 .
  • the expansion into the form of being executable is also referred to simply as “loading”. It should be noted that in FIGS. 14 through 19 , processes not directly related to the first embodiment are not described.
  • FIG. 14 is a flow chart showing an example of the operation of the information processing section 31 .
  • the information processing section 31 performs the process of FIG. 14 .
  • a GUI (graphical user interface) is provided on the lower LCD 12 .
  • an operation of the user on the GUI through the touch panel 13 or the operation buttons 14 is referred to simply as an “operation on the GUI”.
  • the information processing section 31 waits for an operation of the user (step 8 ).
  • steps are abbreviated as “S” in the drawings.
  • the information processing section 31 performs the processes of step 9 and thereafter.
  • when the operation of the user on the GUI is an instruction to “acquire a face image with the inner capturing section 24 ” (“Yes” in step 9 ),
  • the information processing section 31 performs a face image acquisition process 1 (step 10 ).
  • the instruction “to acquire a face image with the inner capturing section 24 ” is, for example, an instruction for acquisition using the inner capturing section 24 , in accordance with an operation of the user on the GUI or the like.
  • the information processing section 31 proceeds to step 19 .
  • the face image acquisition process 1 will be described later with reference to FIG. 15 .
  • when the operation of the user on the GUI is not an instruction “to acquire a face image with the inner capturing section 24 ” (“No” in step 9 ),
  • the information processing section 31 proceeds to step 11 .
  • when the operation of the user on the GUI is an instruction “to acquire a face image with the outer capturing section 23 ” (“Yes” in step 11 ), the information processing section 31 performs a face image acquisition process 2 (step 12 ). Subsequently, the information processing section 31 proceeds to step 19 .
  • the instruction “to acquire a face image with the outer capturing section 23 ” is, for example, an instruction for acquisition using the outer capturing section 23 by an operation of the user on the GUI or the like.
  • the face image acquisition process 2 will be described later with reference to FIG. 16 .
  • the information processing section 31 proceeds to step 13 .
  • when the operation of the user on the GUI is an instruction to display a list of collected face images (“Yes” in step 13 ), the information processing section 31 performs a list display process (step 14 ). Subsequently, the information processing section 31 proceeds to step 19 .
  • the list display process will be described later with reference to FIG. 17 .
  • the information processing section 31 proceeds to step 15 .
  • when the operation of the user on the GUI is an instruction to determine a cast (“Yes” in step 15 ), the information processing section 31 performs a cast determination process (step 16 ). Subsequently, the information processing section 31 proceeds to step 19 .
  • the cast determination process will be described later with reference to FIG. 18 .
  • the information processing section 31 proceeds to step 17 .
  • when the operation of the user is an instruction to execute a game (“Yes” in step 17 ), the information processing section 31 executes the game (step 18 ).
  • the process of step 18 is an example of a second game processing step.
  • the game apparatus 10 performs the game processing of the game where various character objects, such as enemy objects EO, created in the cast determination process in step 16 appear.
  • the type of the game is not limited.
  • the game executed in step 18 may be a game where the user fights with enemy objects EO created in the cast determination process. In this case, for example, the user fights with enemy objects EO having face images collected in the face image acquisition process 1 in step 10 and in the face image acquisition process 2 in step 12 , and displayed in the list display process in step 14 .
  • this game may be an adventure game where a player object representing the user moves forward by overcoming various hurdles, obstacles, and the like.
  • examples of the game may include: a war simulation where historical characters appear; a management simulation where a player object appears; and a driving simulation of a vehicle or the like, where a player object appears.
  • the game may be a novel game modeled on the original of a novel, where character objects appear.
  • the game may be one termed a role-playing game (RPG) where the user controls a main character and characters that appear in a story, to play their roles.
  • the game may be one where the user simply has some training with the assistance of an agent that appears.
  • face images collected by the user by succeeding in the first game in step 10 are attached to the character objects by texture mapping or the like. Accordingly, in the game executed in step 18 , the character objects including the face images collected by the user themselves appear.
  • the user can execute a game where the real-world relationships with the people (or the living things) of the face images are reflected on the various character objects. For example, it is possible to perform game processing including emotions, such as affection, friendliness, favorable impression, and criticism.
  • when the operation of the user is not an instruction to execute a game (“No” in step 17 ), the information processing section 31 proceeds to step 19 .
  • the information processing section 31 determines whether or not the process is to be ended. When having detected through the GUI an instruction to end the process, the information processing section 31 ends the process of FIG. 14 . On the other hand, when having detected through the GUI an instruction not to end the process (e.g., an instruction to retry the process), the information processing section 31 performs a face image management assistance process 1 (step 1 A).
  • the face image management assistance process 1 is, for example, a process of, based on already acquired face images, providing the user with information about the attributes and the like of a face image to be acquired next, so as to assist the user in acquiring a face image. A detailed process of the face image management assistance process 1 will be described later with reference to FIG. 19A . Subsequently, the information processing section 31 returns to step 8 .
  • FIG. 15 is a flow chart showing an example of a detailed process of the face image acquisition process 1 (step 10 of FIG. 14 ).
  • the information processing section 31 first performs a face image management assistance process 2 (step 100 ).
  • the face image management assistance process 2 is, for example, a process of, based on already acquired face images, providing the user with information about the attributes and the like of a face image to be acquired next, so as to assist the user in acquiring a face image.
  • a detailed process of the face image management assistance process 2 will be described later with reference to FIG. 19B .
  • the information processing section 31 performs a face image acquisition process (step 101 ).
  • the CPU 311 of the information processing section 31 performs the process of step 101 as an example of image acquisition means.
  • the information processing section 31 obtains images captured by, for example, the inner capturing section 24 , the left outer capturing section 23 a , and/or the right outer capturing section 23 b in predetermined cycles, and displays the obtained images on the upper LCD 22 .
  • the display cycle may be the same as the unit of time of the processing of the game apparatus 10 (e.g., 1/60 seconds), or may be shorter than this unit of time.
  • the information processing section 31 displays, for example, an image from the inner capturing section 24 on the upper LCD 22 .
  • a capturing section selection GUI is prepared so as to select at least one of the inner capturing section 24 , the left outer capturing section 23 a , and the right outer capturing section 23 b (including the case where both the left outer capturing section 23 a and the right outer capturing section 23 b are used).
  • the user can operate the capturing section selection GUI to freely switch capturing sections to be used.
  • the inner capturing section 24 , the left outer capturing section 23 a , and/or the right outer capturing section 23 b that are used for capturing, due to the initial state or the operation on the capturing section selection GUI, are referred to simply as a “capturing section”.
  • the information processing section 31 acquires, as data, an image from the inner capturing section 24 that is displayed on the upper LCD 22 , and temporarily stores the acquired data in the main memory 32 .
  • the data of the image is only present in the main memory 32 , and is not saved in the saved data storage area Do described later.
  • the data present in the main memory 32 is only used in the game in step 106 described later, and as will be described later, is discarded when the game has not been successful and has been ended.
  • the main memory 32 is an example of a first data storage area.
  • the face image acquired in step 101 is texture-mapped onto the facial surface portion or the like of an enemy object EO, and the game is executed. Accordingly, in the process of step 101 , it is preferable that the face image should be acquired by clipping particularly the face portion from the image acquired from the capturing section.
  • the following processing is performed.
  • the information processing section 31 detects the contour of the face in the acquired image. The contour of the face is estimated from the distance between the eyes, and the positional relationships between the eyes and the mouth. That is, the information processing section 31 recognizes the boundary line between the contour of the face and the background, based on the arrangement of the eyes and the mouth, using the dimensions of a standard face.
  • the boundary line can be acquired by combining, for example, differential processing (contour enhancement) and average processing (smoothing calculation), which are normal image processing. It should be noted that the method of detecting the contour of the face may be another known method.
  • the information processing section 31 fits the obtained face image with the dimensions of the facial surface portion of the head shape of the enemy object EO by enlarging or reducing the obtained face image. This process enables the game apparatus 10 to even acquire face images varying to some extent in dimensions and attach the acquired face images to enemy objects EO.
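  • the enlargement or reduction mentioned above can be pictured as a simple nearest-neighbour resize of the clipped face image to the dimensions of the facial surface portion; the sketch below is illustrative and is not the actual fitting procedure.
```python
# Fit a clipped face image to target dimensions so that faces captured at
# somewhat different sizes can all be used as textures of an enemy object.
def resize_nearest(image, target_w, target_h):
    src_h, src_w = len(image), len(image[0])
    return [[image[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)]
            for y in range(target_h)]

face = [[1, 2],
        [3, 4]]
print(resize_nearest(face, 4, 4))   # 2x2 image enlarged to 4x4
```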
  • the process of acquiring a face image is not limited to the procedure described above.
  • a face image having target dimensions may be acquired from the capturing section, instead of the acquisition of an image from a given distance and in given dimensions.
  • a face image may be acquired on the condition that a distance from a subject is established such that the distance between the eyes of the face image obtained from the subject approximates a predetermined number of pixels.
  • the information processing section 31 may derive the distance from the subject.
  • the information processing section 31 may, for example, lead a person who is the subject, or the user who is the capturer, to adjust the angle of the subject's face with respect to the direction of the optical axis of the capturing section. Further, instead of the user pressing, for example, the R button 14 H (or the L button 14 G) to save the image, when it is determined that the adjustment of the distance from the subject and the adjustment of the angle of the face with respect to the direction of the optical axis of the capturing section are completed, the information processing section 31 may save the image. For example, the information processing section 31 may display marks representing target positions for positioning the eyes and the mouth, in superposition with the face image of the subject on the upper LCD 22 . Then, when the positions of the eyes and the mouth of the subject that have been acquired from the capturing section have fallen within predetermined tolerance ranges from the marks of the target positions corresponding to the eyes and the mouth, the information processing section 31 may save the image in a memory.
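  • the automatic-save condition described above (the detected eyes and mouth falling within predetermined tolerance ranges from the target marks) might look like the following; the coordinates and the tolerance are illustrative values only.
```python
import math

TOLERANCE_PX = 8
TARGET = {"left_eye": (100, 80), "right_eye": (140, 80), "mouth": (120, 120)}

def aligned(detected):
    # True when every detected feature lies within the tolerance of its mark.
    return all(math.dist(detected[k], TARGET[k]) <= TOLERANCE_PX for k in TARGET)

detected = {"left_eye": (103, 77), "right_eye": (138, 82), "mouth": (119, 125)}
if aligned(detected):
    print("save the face image")   # stands in for saving the image in a memory
```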
  • the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn 2 shown in FIG. 13 .
  • the corresponding row means, for example, the row of the attributes corresponding to the gender and the age that are estimated in step 1002 of FIG. 19B described later.
  • the information processing section 31 displays the image acquired in the process of step 101 on, for example, the upper LCD 22 (step 102 ).
  • the information processing section 31 performs a process of selecting an enemy object EO (step 103 ).
  • the information processing section 31 prompts the user to select the head shape of an enemy object EO.
  • the information processing section 31 may display the list of head shapes as shown in FIG. 9 , and may receive the selection of the user through the GUI.
  • the information processing section 31 sets the acquired face image as a texture of the enemy object EO (step 104 ), and generates the enemy object EO (step 105 ).
  • the enemy object generated in step 105 is an example of a first character object.
  • the information processing section 31 performs the process of step 105 as an example of means for creating a character object.
  • the information processing section 31 executes a game using the generated enemy object EO (step 106 ).
  • the CPU 311 of the information processing section 31 performs the process of step 106 as an example of first game processing means.
  • the type of the game is not limited.
  • the game is, for example, a game simulating a battle with the enemy object EO.
  • the game may be, for example, a game where the user competes with the enemy object EO in score.
  • the information processing section 31 determines whether or not the user has succeeded in the game (step 107 ).
  • the information processing section 31 performs the process of step 107 as an example of means for determining a success or a failure.
  • a “success” is, for example, the case where the user has defeated the enemy object EO in the game where the user fights with the enemy object EO.
  • a “success” is, for example, the case where the user has scored more points than the enemy object EO in the game where the user competes with the enemy object EO in score.
  • a “success” may be, for example, the case where the user has reached a goal in a game where the user overcomes obstacles and the like set by the enemy object EO.
  • a character object using a face image already collected in the past may be caused to appear.
  • a face image already collected in the past is attached to an enemy object EO or a friend object and appears, the user can play a game on which human relationships in the real world and the like are reflected.
  • the information processing section 31 saves, in the saved data storage area Do of the game, data of the face image present in the main memory 32 that has been acquired in step 101 described above, in addition to data of face images that have been saved up to the current time (step 109 ).
  • the CPU 311 of the information processing section 31 performs the process of step 109 as an example of means for saving.
  • the saved data storage area Do of the game is a storage area where the information processing section 31 that executes the game can perform writing and reading, the storage area constructed in, for example, the main memory 32 , the data storage internal memory 35 , or the data storage external memory 46 .
  • Data of a new face image is stored in the saved data storage area Do of the game, whereby the information processing section 31 that executes the game can display on the screen of the upper LCD 22 the data of the new face image by adding the data to, for example, the list of the face images described with reference to FIGS. 7 and 8 .
  • the user executes the game (first game) in order to save the face image acquired in step 101 in the saved data storage area Do of the game.
  • a character object using a face image that has been saved in the saved data storage area Do by the user up to the current time is caused to appear, whereby the user who executes the game with the game apparatus 10 can collect a new face image, and add the new face image to the saved data storage area Do, while reflecting human relationships in the real world and the like.
  • to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates the face image management information Dn 1 described with reference to FIG. 12 , and saves the face image management information Dn 1 in the data storage internal memory 35 or the data storage external memory 46 . That is, the information processing section 31 newly generates face image identification information, and sets the face image identification information as a record of the face image management information Dn 1 . Further, the information processing section 31 sets the address and the like of the face image newly saved in the saved data storage area Do of the game, as the address of face image data. Furthermore, the information processing section 31 sets the source of acquiring the face image, the estimation of gender, the estimation of age, pieces of related face image identification information 1 through N, and the like.
  • the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn 2 described with reference to FIG. 13 . That is, the information processing section 31 may newly estimate the gender, the age, and the like of the face image added to the saved data storage area Do, and may reflect the estimations on the aggregate result of the face image attribute aggregate table Dn 2 .
  • the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do of the game, or transfer the data through the wireless communication module 36 . Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14 .
  • the information processing section 31 inquires of the user as to whether or not to retry the game (step 108 ). For example, the information processing section 31 displays on the upper LCD 22 a message indicating an inquiry about whether or not to retry the game, and receives the selection of the user in accordance with an operation on the GUI provided on the lower LCD 12 (e.g., a positive icon, a negative icon, or a menu) through the touch panel 13 , an operation through the operation buttons 14 , or the like. When the user has given an instruction to retry the game, the information processing section 31 returns to step 106 .
  • the information processing section 31 discards the face image acquired in step 101 (step 110 ), and ends the process. It should be noted that when the game has not been successful, the information processing section 31 may discard the face image acquired in step 101 , without waiting for an instruction to retry the game in step 108 .
  • a description is given of an example of the process of, in a face image acquisition process, leading the user to acquire a face image with the inner capturing section 24 prior to the two capturing sections of the outer capturing section 23 (the left outer capturing section 23 a and the right outer capturing section 23 b ).
  • the reason why the game apparatus 10 causes the user to first acquire a face image with the inner capturing section 24 is that the acquisition of a face image with the inner capturing section 24 clarifies, for example, the owner and the like of the game apparatus 10 , so as to increase the possibility of restricting the use of the game apparatus 10 by another person.
  • the information processing section 31 first determines whether or not a face image has already been acquired by the inner capturing section 24 (step 121 ).
  • the determination of whether or not a face image has already been acquired by the inner capturing section 24 may be made, for example, with reference to the face image management information Dn 1 shown in FIG. 12 and based on whether or not there is a record where the inner capturing section 24 is set as the source of acquiring the face image.
  • the determination may be made based on whether or not the face image of the owner has already been registered.
  • the information processing section 31 prompts the user to first perform a face image acquisition process with the inner capturing section 24 (step 124 ), and ends the process of this subroutine. More specifically, for example, the information processing section 31 displays on the upper LCD 22 a message indicating “In the game apparatus 10 , if a face image has not already been acquired by the inner capturing section 24 , a face image acquisition process cannot be performed with the outer capturing section 23 ”. Alternatively, the information processing section 31 may request the user to first register the face image of the owner.
  • the information processing section 31 performs a face image management assistance process 3 (step 122 ).
  • the face image management assistance process 3 will be described later with reference to FIG. 19C .
  • the information processing section 31 performs a face image acquisition process with the outer capturing section 23 (step 123 ). For example, when the outer capturing section 23 is used, if the user directs the outer surface 21 D of the upper housing 21 to another person's face in the state where the upper housing 21 is open, said another person's face is displayed on the upper LCD 22 .
  • the information processing section 31 acquires, as data, an image from the outer capturing section 23 that is displayed on the upper LCD 22 , and temporarily stores the acquired data in the main memory 32 .
  • the data of the image is only present in the main memory 32 , and is not saved in the saved data storage area Do.
  • the data present in the main memory 32 is only used in the game in step 129 described later, and as will be described later, is discarded when the game has not been successful and has been ended.
  • the face image acquired in step 123 can also be texture-mapped onto the facial surface portion or the like of an enemy object EO, and the game can be executed. Accordingly, in the process of step 123 , it is preferable that the face image should be acquired by clipping particularly the face portion from the image acquired from the capturing section, by a process similar to that of step 101 described above. Further, also when a face image is acquired in step 123 , the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn 2 shown in FIG. 13 . “The corresponding row” means, for example, the row of the attributes corresponding to the gender and the age that are estimated in step 1202 of FIG. 19C described later.
  • the information processing section 31 displays the image acquired in the process of step 123 on, for example, the upper LCD 22 (step 125 ).
  • the information processing section 31 performs a process of selecting an enemy object EO (step 126 ).
  • the information processing section 31 prompts the user to select the head shape of an enemy object EO.
  • the information processing section 31 may display the list of head shapes as shown in FIG. 9 , and may receive the selection of the user through the GUI.
  • the information processing section 31 sets the acquired face image as a texture of the enemy object EO (step 127 ), and generates the enemy object EO (step 128 ).
  • the enemy object generated in step 128 is also an example of the first character object.
  • the information processing section 31 performs the process of step 128 as an example of the means for creating a character object.
  • the information processing section 31 executes a game using the generated enemy object EO (step 129 ).
  • the CPU 311 of the information processing section 31 performs the process of step 129 as an example of the first game processing means.
  • the game executed in step 129 is similar to that of step 106 . That is, the type of the game executed in step 129 varies, and possible examples of the game may include: a game simulating a battle with the enemy object EO; and a game where the user competes with the enemy object EO in score.
  • the information processing section 31 determines whether or not the user has succeeded in the game (step 130 ).
  • the information processing section 31 performs the process of step 130 as an example of the means for determining a success or a failure.
  • a “success” is, for example, the case where the user has defeated the enemy object EO in the game where the user fights with the enemy object EO.
  • a “success” is, for example, the case where the user has scored more points than the enemy object EO in the game where the user competes with the enemy object EO in score.
  • a “success” may be, for example, the case where the user has reached a goal in a game where the user overcomes obstacles and the like set by the enemy object EO.
  • a character object using a face image already collected in the past may be caused to appear.
  • a face image already collected in the past is attached to an enemy object EO or a friend object and appears, the user can play a game on which human relationships in the real world and the like are reflected.
  • the information processing section 31 saves, in the saved data storage area Do of the game, data of the face image present in the main memory 32 that has been acquired in step 123 described above, in addition to data of face images that have been saved up to the current time (step 132 ), and ends the process of the subroutine.
  • the CPU 311 of the information processing section 31 performs the process of step 132 as an example of the means for saving.
  • Data of a new face image is stored in the saved data storage area Do of the game, whereby the information processing section 31 that executes the game can display on the screen of the upper LCD 22 the data of the new face image by adding the data to, for example, the list of the face images described with reference to FIGS. 7 and 8 .
  • the user executes the game (first game) in order to save the face image acquired in step 123 in the saved data storage area Do of the game.
  • in the game, for example, a character object using a face image that has been saved in the saved data storage area Do by the user up to the current time is caused to appear, whereby the user who executes the game with the game apparatus 10 can collect a new face image, and add the new face image to the saved data storage area Do, while reflecting human relationships in the real world and the like.
  • to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates the face image management information Dn 1 described with reference to FIG. 12 , and saves the face image management information Dn 1 in the data storage internal memory 35 or the data storage external memory 46 . That is, the information processing section 31 newly generates face image identification information, and sets the face image identification information as a record of the face image management information Dn 1 . Further, the information processing section 31 sets the address and the like of the face image newly saved in the saved data storage area Do of the game, as the address of face image data.
  • the information processing section 31 sets the source of acquiring the face image, the estimation of gender, the estimation of age, pieces of related face image identification information 1 through N, and the like.
  • the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn 2 described with reference to FIG. 13 . That is, the information processing section 31 may newly estimate the gender, the age, and the like of the face image added to the saved data storage area Do, and may reflect the estimations on the aggregate result of the face image attribute aggregate table Dn 2 .
  • the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do of the game, or transfer the data through the wireless communication module 36 . Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14 .
  • the information processing section 31 inquires of the user as to whether or not to retry the game (step 131 ). For example, the information processing section 31 displays on the upper LCD 22 a message indicating an inquiry about whether or not to retry the game, and receives the selection of the user in accordance with an operation on the GUI provided on the lower LCD 12 (e.g., a positive icon, a negative icon, or a menu) through the touch panel 13 , an operation through the operation buttons 14 , or the like. When the user has given an instruction to retry the game, the information processing section 31 returns to step 129 .
  • the information processing section 31 discards the face image acquired in step 123 (step 133 ), and ends the process of the subroutine. It should be noted that when the game has not been successful, the information processing section 31 may discard the face image acquired in step 123 , without waiting for an instruction to retry the game in step 131 .
  • FIG. 17 is a flow chart showing an example of a detailed process of the list display process (step 14 of FIG. 14 ).
  • the information processing section 31 first reads already registered face images from the saved data storage area Do of the data storage internal memory 35 or the data storage external memory 46 , and displays the already registered face images on the upper LCD 22 (step 140 ). More specifically, the information processing section 31 acquires the addresses of face image data of the face images from the face image management information Dn 1 saved in the saved data storage area Do. Then, the information processing section 31 may read the face images from the addresses in the data storage internal memory 35 , the data storage external memory 46 , or the like, and may display the face images on the upper LCD 22 .
  • the information processing section 31 waits for an operation of the user (step 141 ). Then, in accordance with an operation of the user, the information processing section 31 determines whether or not a face image is in the state of being selected (step 142 ). The determination of whether or not a face image is in the state of being selected is made based on, when the list of the face images is displayed on the upper LCD 22 as shown in FIGS. 7 and 8 , the state of the operation after the user has pressed the operation buttons 14 or the like, or the state of the operation through the GUI. It should be noted that the face image in the state of being selected is displayed by being surrounded by, for example, the heavy line L 1 or the heavy line L 2 , as shown in FIGS. 7 and 8 .
  • the information processing section 31 searches for face images related to the face image in the state of being selected, using the face image management information Dn 1 (see FIG. 12 ) (step 143 ).
  • the information processing section 31 performs a process of causing the found face images to react, such as causing the found face images to give looks to the face image in the state of being selected (step 144 ).
  • the process of causing the found face images to react can be performed by, for example, the following procedure. For example, the following are prepared in advance: a plurality of patterns of eyes, in which the orientation of the eyes is directed to another face image as shown in FIG. 8 ; and a plurality of patterns of a face, in which the orientation of the face is directed to another face image as shown in FIG. 8 .
  • then, based on the positional relationships between the face image in the state of being selected and the found face images, the corresponding patterns of the orientations of eyes and the orientations of faces are selected.
  • the corresponding patterns of the orientations of eyes and the orientations of faces of the face images may be displayed so as to switch the patterns of the orientations of eyes and the orientations of faces of the already displayed face images. That is, images of eyes determined based on the relationships between the positions of the found face images and the position of the face image in the state of being selected may replace the eye portions of the original face images.
  • display may be performed by switching the entire face images.
  • patterns of eyes may be prepared in advance, in which the orientation of the eyes is changed at predetermined angles, e.g., in units of 15 degrees over a 360 degree range. Then, based on the positional relationships between the face image in the state of being selected and the found face images, angles may be determined, and the patterns of eyes at the angles closest to the determined angles may be selected.
  • patterns of a face image are prepared in which, on the assumption that the case of being directed in the normal direction of the screen is 0 degrees, the orientation is changed in the left-right directions at angles, e.g., 30 degrees, 60 degrees, and 90 degrees. Further, patterns are also prepared in which the orientation is changed in the up-down directions at, for example, approximately 30 degrees.
  • patterns may be prepared in which, for a face image whose orientation has been changed in the left-right direction at an angle of 90 degrees, the orientation is further changed in the up-down direction, i.e., diagonally upward (e.g., 15 degrees upward, 30 degrees upward, and 45 degrees upward) and diagonally downward (e.g., 15 degrees downward, 30 degrees downward, and 45 degrees downward). Then, based on the positional relationships between the face image in the state of being selected and the found face images, angles may be determined, and the angles of faces closest to the corresponding angles may be selected. Further, to emphasize intimacy, an expression such as an animation of a three-dimensional model closing one eye may be displayed. Further, a heart mark and the like may be prepared in advance, and displayed near the face images related to the face image in the state of being selected.
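As a rough illustration of this angle-based pattern selection, the following minimal C++ sketch quantizes the direction from a found face image toward the selected face image into 15-degree steps and returns the index of the closest prepared eye pattern. The structure and function names (Vec2, pickEyePattern) are hypothetical and do not appear in the embodiment.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical 2D position of a face image in the list display.
struct Vec2 { float x, y; };

// Returns the index (0..23) of the prepared eye pattern whose orientation,
// quantized in units of 15 degrees over 360 degrees, is closest to the
// direction from a found face image toward the selected face image.
int pickEyePattern(Vec2 foundFace, Vec2 selectedFace) {
    float dx = selectedFace.x - foundFace.x;
    float dy = selectedFace.y - foundFace.y;
    float deg = std::atan2(dy, dx) * 180.0f / 3.14159265f; // (-180, 180]
    if (deg < 0.0f) deg += 360.0f;                          // [0, 360)
    return static_cast<int>(deg / 15.0f + 0.5f) % 24;       // nearest 15-degree step
}

int main() {
    Vec2 found = {40.0f, 96.0f}, selected = {200.0f, 32.0f};
    std::printf("eye pattern index: %d\n", pickEyePattern(found, selected));
    return 0;
}
```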
  • When a face image is not in the state of being selected, the information processing section 31 performs another process (step 145 ).
  • Said another process includes, for example, an operation on another GUI provided on the lower LCD 12 , and a process on operation buttons 14 other than the operation buttons 14 used for the selection of face images (buttons 14 a , 14 b , 14 c , and the like).
  • the information processing section 31 determines whether or not the process is to be ended (step 146 ). For example, when having detected that the button 14 c (B button) has been pressed while the screen shown in FIG. 8 is displayed, the information processing section 31 determines that an instruction has been given to “return”, and ends the process of FIG. 17 . When the process is not to be ended, the information processing section 31 returns to step 140 .
  • FIG. 18 is a flow chart showing an example of a detailed process of the cast determination process (step 16 of FIG. 14 ).
  • the information processing section 31 first displays a list of the head shapes of enemy objects EO (step 160 ). It should be noted that here, the description is given, taking enemy objects EO as an example; however, also when a character object other than the enemy objects EO is generated, a process similar to the following process is performed.
  • the information processing section 31 stores the head shapes of the enemy objects EO in the data storage internal memory 35 in advance, for example, before the shipment of the game apparatus 10 , or at the installation or the upgrading of the image processing program.
  • the information processing section 31 reads the head shapes of the enemy objects EO currently stored in the data storage internal memory 35 , and displays the head shapes of the enemy objects EO in the arrangement as shown in FIG. 9 .
  • the information processing section 31 detects a selection operation of the user through the GUI, the operation buttons 14 , or the like, and receives the selection of the head shape of an enemy object EO (step 161 ).
  • the information processing section 31 displays a list of face images (step 162 ).
  • the information processing section 31 detects a selection operation of the user through the GUI, the operation buttons 14 , or the like, and receives the selection of a face image (step 163 ). It should be noted that in the example of the process of FIG. 18 , the information processing section 31 determines a face image by detecting the selection operation in step 163 .
  • the information processing section 31 may automatically determine a face image. For example, the information processing section 31 may select a face image randomly from among the face images accumulated in the saved data storage area Do. Alternatively, the information processing section 31 may save in advance the history of the game of the user using the game apparatus 10 , in the main memory 32 , the external memory 45 , the data storage external memory 46 , the data storage internal memory 35 , or the like, and may select a face image in accordance with the properties, the taste, and the like of the user that are estimated from the history of the user. For example, in accordance with the frequencies of the user selecting face images in the past, the information processing section 31 may determine a face image to be selected next.
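One possible reading of selecting a face image in accordance with past selection frequencies is a frequency-weighted random pick, sketched below in C++. The record layout and the weighting rule are assumptions for illustration; an implementation could equally weight rarely selected faces more heavily.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// One past-selection counter per registered face image (hypothetical layout).
struct FaceRecord { int faceId; int timesSelected; };

// Picks a face image at random, weighting each candidate by how often the
// user selected it in the past.
int pickFaceByHistory(const std::vector<FaceRecord>& records, std::mt19937& rng) {
    std::vector<int> weights;
    for (const FaceRecord& r : records)
        weights.push_back(r.timesSelected + 1);   // +1 so unused faces can still appear
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return records[dist(rng)].faceId;
}

int main() {
    std::vector<FaceRecord> history = {{10, 5}, {11, 1}, {12, 0}};
    std::mt19937 rng(12345);
    std::printf("selected face id: %d\n", pickFaceByHistory(history, rng));
    return 0;
}
```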
  • the information processing section 31 sets the selected face image as a texture of the enemy object EO (step 164 ). Then, the information processing section 31 generates the enemy object EO by texture-mapping the selected face image onto the facial surface portion of the enemy object EO (step 165 ).
  • the enemy object generated in step 165 is an example of a second character object. Then, the information processing section 31 displays the generated enemy object EO on the screen of the upper LCD 22 in the form of, for example, the enemy object EO shown in FIG. 10 .
  • the information processing section 31 performs a process of causing related face images to react (step 166 ). This process is similar to the processes of steps 143 and 144 of FIG. 17 . Further, in accordance with an operation of the user on the GUI, the information processing section 31 determines whether or not the generated enemy object EO is to be fixed (step 167 ). When the enemy object EO is not to be fixed, the information processing section 31 returns to step 162 , and receives the selection of a face image. Alternatively, when the enemy object EO is not to be fixed, the information processing section 31 may return to step 160 , and may receive the selection of the head shape of an enemy object EO. On the other hand, when the enemy object EO is to be fixed, the information processing section 31 ends the process.
  • the information processing section 31 performs the game processing shown in step 18 of FIG. 14 , using the fixed enemy object EO. Although not shown in FIG. 18 , a menu of the GUI or the like may be prepared so as to end the process of FIG. 18 without fixing the enemy object EO.
  • FIG. 19A is a flow chart showing an example of a detailed process of the face image management assistance process 1 (step 1 A of FIG. 14 ).
  • the information processing section 31 reads the attributes of already acquired face images from the face image attribute aggregate table Dn 2 (step 1 A 1 ). Then, the information processing section 31 searches the read face image attribute aggregate table Dn 2 for an unacquired attribute or an attribute including a small number of acquired face images.
  • An “unacquired attribute” means, for example, an attribute whose number of acquired face images is 0 in the table shown in FIG. 13 .
  • An “attribute including a small number of acquired face images” means, for example, an attribute whose number of acquired face images falls within predetermined ranks from the bottom when sorting is performed using the numbers of acquired face images in the table shown in FIG. 13 as sorting keys.
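The search for an unacquired attribute or an attribute with few acquired face images can be pictured as the following C++ sketch, which first returns rows whose count is 0 and otherwise returns the bottom-ranked rows after sorting by the acquired count. The row structure and function name are illustrative, not the embodiment's data definitions.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// One row of the face image attribute aggregate table (a simplified,
// hypothetical counterpart of Dn2): a gender/age-bracket pair and the
// number of face images acquired for that combination.
struct AttributeRow { std::string gender; std::string ageBracket; int acquired; };

// Returns the rows the user should be prompted about: rows whose count is 0
// (unacquired attributes) or, failing that, the bottomRanks rows with the
// smallest counts after sorting by the acquired count.
std::vector<AttributeRow> findScarceAttributes(std::vector<AttributeRow> table,
                                               std::size_t bottomRanks) {
    std::vector<AttributeRow> result;
    for (const AttributeRow& row : table)
        if (row.acquired == 0) result.push_back(row);
    if (!result.empty()) return result;

    std::sort(table.begin(), table.end(),
              [](const AttributeRow& a, const AttributeRow& b) {
                  return a.acquired < b.acquired;   // ascending: fewest first
              });
    table.resize(std::min(bottomRanks, table.size()));
    return table;
}

int main() {
    std::vector<AttributeRow> table = {
        {"male", "10's", 0}, {"female", "10's", 3}, {"male", "20's", 1}};
    for (const AttributeRow& r : findScarceAttributes(table, 2))
        std::printf("prompt: %s / %s (acquired: %d)\n",
                    r.gender.c_str(), r.ageBracket.c_str(), r.acquired);
    return 0;
}
```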
  • the information processing section 31 performs a process of prompting the user to acquire a face image corresponding to an unacquired attribute (step 1 A 2 ).
  • the information processing section 31 may display on the lower LCD 12 or the upper LCD 22 a message combining the attribute “male” and the attribute “10's” with the phrase “the number of acquired images is 0”, based on the table shown in FIG. 13 .
  • the information processing section 31 may display a message combining the attribute “male” and the attribute “10's” with the phrase “the number of acquired images is small”.
  • the number of the combinations of the attributes to be displayed (a row in the table shown in FIG. 13 ), however, is not limited to one, and two or more combinations may be displayed.
  • the information processing section 31 may end the process of FIG. 19A . Subsequently, the information processing section 31 returns to step 8 of FIG. 14 .
  • the description is given, taking the face image management assistance process 1 shown in FIG. 19A , as an example of a detailed process performed at the time of the determination of whether the game is to be ended in FIG. 14 (step 1 A).
  • the face image management assistance process 1 is not limited to the time of the determination of whether the game is to be ended after the execution of the game (step 1 A).
  • the information processing section 31 may perform the face image management assistance process 1 in order to prompt the user to acquire a face image during the list display process (step 14 ), the cast determination process (step 16 ), the execution of the game (step 18 ), or the like.
  • FIG. 19B is a flow chart showing an example of a detailed process of the face image management assistance process 2 (step 100 of FIG. 15 ).
  • the information processing section 31 determines the presence or absence of an acquired image (step 1000 ).
  • the information processing section 31 may store the number of acquired images as, for example, the number of records in the face image management information Dn 1 in the main memory 32 , the external memory 45 , the data storage external memory 46 , the data storage internal memory 35 , or the like.
  • the information processing section 31 ends the process.
  • the information processing section 31 proceeds to step 1001 .
  • the information processing section 31 receives a request to acquire the face image (step 1001 ).
  • the information processing section 31 recognizes the request to acquire the face image, for example, when having received an acquisition instruction through the L button 14 G or the R button 14 H in the state where the face image is displayed on the upper LCD 22 through the inner capturing section 24 or the outer capturing section 23 .
  • the information processing section 31 estimates the attributes, e.g., the gender and the age, of the face image acquired through the inner capturing section 24 or the outer capturing section 23 and displayed on the upper LCD 22 (step 1002 ).
  • the gender can be estimated from the size of the skeleton including the cheekbones and the mandible that are included in the face image, and the dimensions of the face. That is, the information processing section 31 calculates the relative dimensions of the contour of the face relative to the distance between the eyes and the distances between the eyes and the mouth (e.g., the width of the face, and the distance between the eyes and the chin). Then, when the relative dimensions are close to statistically obtained male average values, it may be determined that the face image is male. Further, for example, when the relative dimensions are close to statistically obtained female average values, it may be determined that the face image is female.
  • the information processing section 31 may store, in advance, feature information by gender and by age bracket (e.g., under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over), such as the average positions of parts of faces and the number of wrinkles in the portions of faces. Then, the information processing section may calculate the feature information of the face image, for example, acquired through the outer capturing section 23 and displayed on the upper LCD 22 , and may estimate the age bracket closest to the calculated feature information.
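A minimal sketch of the gender estimation from relative dimensions described above is shown below: the face contour dimensions are normalized by the distance between the eyes and compared with statistically obtained average values. The average figures in the code are placeholders, not values from the embodiment.

```cpp
#include <cmath>
#include <cstdio>

// Landmarks measured from the captured face image (hypothetical units: pixels).
struct FaceMetrics { float eyeDistance; float faceWidth; float eyeToChin; };

// Estimates gender by comparing the contour dimensions, normalized by the
// distance between the eyes, with statistically obtained average values.
// The averages below are placeholders, not figures from the embodiment.
const char* estimateGender(const FaceMetrics& m) {
    float widthRatio = m.faceWidth / m.eyeDistance;
    float chinRatio  = m.eyeToChin / m.eyeDistance;

    const float maleAvg[2]   = {2.30f, 1.90f};   // placeholder averages
    const float femaleAvg[2] = {2.10f, 1.75f};   // placeholder averages

    float dMale   = std::pow(widthRatio - maleAvg[0], 2.0f) +
                    std::pow(chinRatio  - maleAvg[1], 2.0f);
    float dFemale = std::pow(widthRatio - femaleAvg[0], 2.0f) +
                    std::pow(chinRatio  - femaleAvg[1], 2.0f);
    return (dMale < dFemale) ? "male" : "female";
}

int main() {
    FaceMetrics m = {60.0f, 140.0f, 110.0f};
    std::printf("estimated gender: %s\n", estimateGender(m));
    return 0;
}
```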
  • the information processing section 31 prompts the user to acquire a face image having an unacquired attribute (step 1003 ).
  • the information processing section 31 may display on the upper LCD 22 a message prompting the user to acquire a face image having an unacquired attribute. This process is similar to that of FIG. 19A .
  • the information processing section 31 determines whether or not the user has performed an operation of switching acquisition target face images (step 1004 ). For example, when the features, e.g., the distance between the eyes and the distances between the eyes and the mouth, of the face image included in the image acquired through the inner capturing section 24 or the outer capturing section 23 , have changed, the information processing section 31 determines that acquisition target face images have been switched.
  • the information processing section 31 may determine that acquisition target face images have been switched. Then, when acquisition target face images have been switched, the information processing section 31 returns to step 1002 .
  • When acquisition target face images have not been switched, the information processing section 31 ends the process as it is.
  • “when acquisition target face images have not been switched” is, for example, the case where the user has ended the face image management assistance process 2 through the GUI, the operation button 14 C (B button), or the like.
  • the information processing section 31 may determine that acquisition target face images have not been switched. In this case, the information processing section 31 proceeds to step 101 of FIG. 15 , and performs the face image acquisition process. It should be noted that as has already been described in the process of step 101 of FIG. 15 , the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn 2 shown in FIG. 13 .
  • “when acquisition target face images have not been switched” is, for example, the case where the amount of change in the distance between the eyes and the amounts of change in the distances between the eyes and the mouth are within tolerances.
  • the information processing section 31 may determine that the acquisition instruction has not been canceled.
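The switching determination based on tolerances can be sketched as follows: the inter-eye distance and the eye-to-mouth distance of the current frame are compared with those of the previous frame, and the target is judged to have been switched when either change exceeds its tolerance. Names and tolerance values are illustrative.

```cpp
#include <cmath>
#include <cstdio>

// Features tracked between frames for the acquisition target face
// (hypothetical names; units are pixels in the captured image).
struct FaceFeatures { float eyeDistance; float eyeToMouth; };

// Returns true when the tracked features have changed beyond the tolerances,
// i.e., when the acquisition target face image is judged to have been switched.
bool targetSwitched(const FaceFeatures& prev, const FaceFeatures& cur,
                    float eyeTol, float mouthTol) {
    return std::fabs(cur.eyeDistance - prev.eyeDistance) > eyeTol ||
           std::fabs(cur.eyeToMouth  - prev.eyeToMouth)  > mouthTol;
}

int main() {
    FaceFeatures prev = {60.0f, 70.0f}, cur = {82.0f, 95.0f};
    std::printf("switched: %s\n",
                targetSwitched(prev, cur, 10.0f, 10.0f) ? "yes" : "no");
    return 0;
}
```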
  • FIG. 19C is a flow chart showing an example of a detailed process of the face image management assistance process 3 (step 122 of FIG. 16 ).
  • the process of FIG. 19C (steps 1201 through 1204 ) is similar to steps 1001 through 1004 in the process of FIG. 19B , and therefore is not described.
  • the information processing section 31 leads the user to preferentially acquire a face image corresponding to an unacquired attribute. Such a process makes it possible to assist a face image collection process of a user who wishes to acquire face images having as balanced attributes as possible.
  • age brackets are classified as under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over.
  • the present invention is not limited to such classification of age brackets.
  • the age brackets may be further classified into smaller categories.
  • age brackets may be roughly classified, such as children, adults, and the elderly.
  • when having received an acquisition instruction through the L button 14 G or the R button 14 H, the information processing section 31 recognizes a request to acquire a face image.
  • the information processing section 31 may estimate the attributes, e.g., the gender and the age, of the face image.
  • the information processing section 31 may specify the attributes of the face image from the acquired face image.
  • the game according to the present embodiment is a so-called shooting game where the player, as a main character of the game, shoots down enemy characters that appear in a virtual three-dimensional space prepared as a game world.
  • the virtual three-dimensional space forming the game world (a virtual space (also referred to as a “game space”)) is displayed on a display screen of the game apparatus 10 (e.g., the upper LCD 22 ) from the player's point of view (a so-called first-person point of view).
  • display may be performed from a third-person point of view.
  • display is performed by combining an image of the real world acquired by the capturing section included in the game apparatus 10 (hereinafter referred to as a “real world image”), with a virtual world image representing the virtual space.
  • the virtual space is divided into an area closer to the virtual camera (hereinafter referred to as a “front area”) and an area further from the virtual camera (hereinafter referred to as a “back area”).
  • an image representing a virtual object present in the front area is displayed in front of the real world image, and the virtual object present in the back area is displayed behind the real world image. More specifically, as will be described later, combination is made such that the virtual object present in the front area is given preference over the real world image, and the real world image is given preference over the virtual object present in the back area.
  • the method of combining the real world image with the virtual world image is not limited.
  • the real world image may be rendered together with the virtual object by a common virtual camera such that the real world image is present as an object in the same virtual space as the virtual object (more specifically, for example, by being attached as a texture of a virtual object).
  • a first rendered image may be obtained by rendering the real world image from a first virtual camera (hereinafter referred to as a “real world drawing camera”), and a second rendered image may be obtained by rendering the virtual object from a second virtual camera (hereinafter referred to as a “virtual world drawing camera”). Then, the first rendered image may be combined with the second rendered image such that the virtual object present in the front area is given preference over the real world image, and the real world image is given preference over the virtual object present in the back area.
  • the object to which the real world image is applied as a texture may be placed at a position, which is the boundary between the front area and the back area, and may be drawn together with the virtual object, such as an enemy object, as viewed from the common virtual camera.
  • the object to which the real world image is attached is an object having a surface which has a certain distance from the virtual camera and whose normal line coincides with the direction of the line of sight of the virtual camera, and the real world image may be attached to this surface (hereinafter referred to as a “boundary surface”) as a texture.
  • the second rendered image is obtained by rendering the virtual object while making a depth determination (determination by Z-buffering) based on the boundary surface between the front area and the back area (hereinafter referred to simply as a “boundary surface”)
  • the first rendered image is obtained by performing rendering by attaching the real world image as a texture to a surface which has a certain distance from the virtual camera and whose normal line coincides with the direction of the line of sight of the virtual camera. Then, when the second rendered image is combined with the first rendered image such that the second rendered image is given preference over the first rendered image, a combined image is generated, in which the real world image seems to be present on the boundary surface.
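One way to realize the combination in the second drawing method is a per-pixel merge in which the virtual-world render (already depth-tested against the boundary surface) is given preference wherever it wrote a fragment, and the real world image shows through elsewhere. The following C++ sketch assumes the virtual-world image carries a coverage value in its alpha channel; this is an illustrative simplification, not the embodiment's implementation.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// One RGBA pixel; alpha records whether the virtual-world render wrote a fragment.
struct Pixel { float r, g, b, a; };

// Combines the two renders: wherever the virtual-world image has coverage it is
// given preference; elsewhere the real world image shows through.
std::vector<Pixel> combine(const std::vector<Pixel>& realWorld,
                           const std::vector<Pixel>& virtualWorld) {
    std::vector<Pixel> out(realWorld.size());
    for (std::size_t i = 0; i < realWorld.size(); ++i) {
        const Pixel& v = virtualWorld[i];
        const Pixel& r = realWorld[i];
        out[i].r = v.r * v.a + r.r * (1.0f - v.a);
        out[i].g = v.g * v.a + r.g * (1.0f - v.a);
        out[i].b = v.b * v.a + r.b * (1.0f - v.a);
        out[i].a = 1.0f;
    }
    return out;
}

int main() {
    std::vector<Pixel> real = {{0.2f, 0.6f, 0.9f, 1.0f}};
    std::vector<Pixel> virt = {{1.0f, 0.0f, 0.0f, 1.0f}};   // virtual object covers the pixel
    Pixel p = combine(real, virt)[0];
    std::printf("combined pixel: %.1f %.1f %.1f\n", p.r, p.g, p.b);
    return 0;
}
```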
  • the relationships between the distance from, and the angle of view of, the virtual camera and the size of the object of the real world image are set such that the real world image includes the range of the field of view of the virtual camera.
  • These two approaches are hereinafter referred to as the “first drawing method” and the “second drawing method”, respectively.
  • a part of the real world image is opened, and display is performed such that the virtual space in the back area can be viewed through the opening.
  • an enemy character object is present in the front area, and when predetermined conditions have been satisfied, a special enemy character (a so-called “boss character”) appears in the back area. This stage is completed by shooting down the boss character. Several stages are prepared, and the game is completed by completing all the stages. In contrast, when predetermined game over conditions have been satisfied, the game is over.
  • data indicating the position of the opening may be set on the boundary surface of the screen object. More specifically, the non-transparency of a texture to be applied to the boundary surface (a so-called α-texture) may indicate open or unopen. Further, in the second drawing method, data indicating the position of the opening may be set on the boundary surface.
  • the open/unopen state is set in the real world image.
  • another image processing may be performed on the real world image.
  • given image processing can be performed by common technical knowledge of those skilled in the art, such as attaching dirt to the real world image, or pixelating the real world image.
  • data may be set that indicates the position where image processing is performed on the boundary surface.
  • a game screen is displayed that represents the virtual space with such an improved sense of depth that the existence of the virtual space (the back area) is felt also behind the real world image.
  • the real world image may be a regular image captured by a monocular camera, or may be a stereo image captured by a compound eye camera.
  • an image captured by the outer capturing section 23 is used as the real world image. That is, a real world image in the periphery of the player captured by the outer capturing section 23 (a real-world moving image acquired in real time) is used during the game play. Accordingly, when the user (the player of the game) holding the game apparatus 10 has changed the imaging range of the outer capturing section 23 by changing the orientation of the game apparatus 10 in the left-right direction or the up-down direction during the game play, the real world image displayed on the upper LCD 22 also changes so as to follow the change in the imaging range.
  • the change in the orientation of the game apparatus 10 during the game play is made roughly in accordance with: (1) the player's intention; or (2) the intention (scenario) of the game.
  • the real world image captured by the outer capturing section 23 changes. This makes it possible to intentionally change the real world image displayed on the upper LCD 22 .
  • the angular velocity sensor 40 of the game apparatus 10 detects the change in the orientation of the game apparatus 10 , and the orientation of the virtual camera is changed in accordance with the detected change. More specifically, the current orientation of the virtual camera is changed in the direction of the change in the orientation of the outer capturing section 23 . Further, the current orientation of the virtual camera is changed by the amount of change (angle) in the orientation of the outer capturing section 23 . That is, when the orientation of the game apparatus 10 is changed, the real world image changes, and the displayed range of the virtual space changes. That is, a change in the orientation of the game apparatus 10 changes the real world image in conjunction with the virtual world image. This makes it possible to display a combined image as if the real world is associated with the virtual world. It should be noted that in the present embodiment, the position of the virtual camera is not changed. Alternatively, the position of the virtual camera may be changed by detecting the movement of the game apparatus 10 .
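A minimal sketch of driving the virtual camera from the angular velocity data is shown below: the detected angular velocities are integrated over the frame time and added to the camera's yaw and pitch, so the virtual camera turns in the same direction and by the same amount as the outer capturing section. The parameterization and sign conventions are assumptions for illustration.

```cpp
#include <cstdio>

// Orientation of the virtual camera as yaw/pitch angles in degrees
// (a simplified parameterization; the embodiment's data layout is not shown).
struct CameraOrientation { float yawDeg; float pitchDeg; };

// Integrates the angular velocities reported by the gyro sensor over one frame
// so that the virtual camera turns in the same direction, and by the same
// amount, as the outer capturing section.
void updateCameraOrientation(CameraOrientation& cam,
                             float yawRateDegPerSec,   // rotation about the vertical axis
                             float pitchRateDegPerSec, // rotation about the left-right axis
                             float dtSec) {
    cam.yawDeg   += yawRateDegPerSec   * dtSec;
    cam.pitchDeg += pitchRateDegPerSec * dtSec;
}

int main() {
    CameraOrientation cam = {0.0f, 0.0f};
    // e.g., the apparatus is turned 30 deg/s to the left for one 1/60 s frame.
    updateCameraOrientation(cam, 30.0f, 0.0f, 1.0f / 60.0f);
    std::printf("yaw = %.3f deg, pitch = %.3f deg\n", cam.yawDeg, cam.pitchDeg);
    return 0;
}
```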
  • FIG. 20A shows an example of the virtual space according to the present embodiment.
  • FIG. 20B shows the relationship between a screen model and an α-texture according to the present embodiment.
  • a screen object may be formed by, as shown in FIG. 20A , setting a spherical model (the screen model described above) having its center at the position of the virtual camera in the virtual space, and attaching the real world image to the inner surface of the sphere.
  • the real world image is attached as a texture to the screen model, in the entire portion of the viewing volume of the virtual camera.
  • the remaining portion of the screen model is set to transparent, and therefore is not viewed on the screen.
  • the boundary surface is a spherical surface, that is, as shown in FIG. 20A , the area closer to the virtual camera than the surface of the sphere is the front area (corresponding to a “second area” according to the present invention), and the area further from the virtual camera than the surface of the sphere is the back area (corresponding to a “first area” according to the present invention).
  • a planar polygon to which a texture of the real world image is attached is placed in the virtual space.
  • the relative position of the planar polygon relative to the real world drawing camera is always fixed. That is, the planar polygon is placed so as to have a certain distance from the real world drawing camera, and is placed such that the normal direction of the planar polygon coincides with the point of view (optical axis) of the real world drawing camera.
  • the planar polygon is set to include the range of the field of view of the real world drawing camera.
  • the size of the planar polygon and the distance of the planar polygon from the virtual camera are set such that the planar polygon can include the range of the field of view of the virtual camera.
  • the real world image is attached to the entire surface of the planar polygon on its virtual camera side.
  • display is performed such that the real world image corresponds to the entire area of an image generated by the virtual camera.
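The sizing rule for the planar polygon follows directly from the camera's angle of view: at a distance d, a vertical field of view fovY requires a plane height of 2·d·tan(fovY/2), and the width follows from the aspect ratio. A short C++ sketch (with illustrative parameter names and example values) is given below.

```cpp
#include <cmath>
#include <cstdio>

// Computes the minimum width and height of the planar polygon so that, placed
// at the given distance in front of the real world drawing camera with its
// normal along the line of sight, it fills the camera's entire field of view.
// Parameter names are illustrative; they are not taken from the embodiment.
void planarPolygonSize(float fovYDeg, float aspect, float distance,
                       float* outWidth, float* outHeight) {
    float halfFov = fovYDeg * 0.5f * 3.14159265f / 180.0f;
    *outHeight = 2.0f * distance * std::tan(halfFov);
    *outWidth  = *outHeight * aspect;
}

int main() {
    float w = 0.0f, h = 0.0f;
    // Example field of view, aspect ratio, and distance.
    planarPolygonSize(60.0f, 5.0f / 3.0f, 10.0f, &w, &h);
    std::printf("polygon size: %.2f x %.2f\n", w, h);
    return 0;
}
```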
  • the boundary surface may be cylindrical.
  • FIG. 21 shows another example of the virtual space according to the present embodiment.
  • a virtual cylindrical peripheral surface (boundary surface) is placed, whose center axis is a vertical axis extending through the position of the virtual camera (in the present embodiment, it is assumed that a Y-axis of the virtual space corresponds to the vertical direction, and an X-axis and a Z-axis correspond to the horizontal directions).
  • the cylindrical peripheral surface is not an object to be viewed, but is an object used for an opening process.
  • the outer peripheral surface of the cylinder divides the virtual space into the first space where the virtual camera is placed (corresponding to the “second area” according to the present invention), and the second space existing around the first space (corresponding to the “first area” according to the present invention).
  • an opening is provided in the real world image so that the player recognizes the existence of the back area behind the real world image. More clearly, the portion of the opening included in the real world image is displayed in a transparent or semi-transparent manner, and is combined with the world behind this portion. With this, in the game, the occurrence of a predetermined event triggers the opening (removal) of a part of the real world image, and an image representing another virtual space existing behind the real world image (back area) is displayed through the opening.
  • the boundary surface is a spherical surface, and such a process of displaying the back area by providing an opening in the real world image is achieved in the first drawing method by a texture attached to the inner surface of the spherical screen object described above, as shown in FIGS. 20A and 20B .
  • this texture is referred to as a “screen α-texture” (opening determination data described later).
  • the screen α-texture is attached to a portion that completes a circuit by 360 degrees around the virtual camera at least in a certain direction. More specifically, as shown in FIG. 20A , the screen α-texture is attached to a central portion of the sphere, i.e., a portion that completes a circuit by 360 degrees around the position of the virtual camera in a direction parallel to the XY-plane and has a predetermined width in the Y-direction (hereinafter referred to as an “α-texture-applied portion”).
  • the process as described above can simplify data included in the screen α-texture.
  • the screen α-texture has a rectangular shape.
  • the attachment of the α-texture to the portion shown in FIG. 20A causes pieces of information of the dots on the screen α-texture to correspond to sets of coordinates of the α-texture-applied portion in the screen object.
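The correspondence between the rectangular screen α-texture and the α-texture-applied portion can be sketched as a mapping from a point on the band to (u, v) coordinates: u follows the 360-degree circuit around the virtual camera and v the band's limited vertical extent. The axis naming and function name below are assumptions for illustration.

```cpp
#include <cmath>
#include <cstdio>

// 3D point on the spherical screen model, with the virtual camera at the origin
// and the y component treated as the band's "vertical" extent (an assumption).
struct Vec3 { float x, y, z; };

// Maps a point on the alpha-texture-applied band of the sphere to (u, v)
// coordinates of the rectangular screen alpha-texture: u follows the 360-degree
// circuit around the camera, v the band's limited vertical extent.
bool bandToTexCoord(Vec3 p, float bandHalfHeight, float* u, float* v) {
    float azimuth = std::atan2(p.z, p.x);                 // (-pi, pi]
    *u = (azimuth + 3.14159265f) / (2.0f * 3.14159265f);  // [0, 1)
    if (std::fabs(p.y) > bandHalfHeight) return false;    // outside the alpha-texture-applied portion
    *v = (p.y + bandHalfHeight) / (2.0f * bandHalfHeight);
    return true;
}

int main() {
    float u = 0.0f, v = 0.0f;
    Vec3 p = {3.0f, 0.5f, 4.0f};
    if (bandToTexCoord(p, 2.0f, &u, &v))
        std::printf("u = %.3f, v = %.3f\n", u, v);
    return 0;
}
```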
  • the screen object to which the real world image is attached and on which the α-texture is set is drawn from the virtual camera, and therefore, drawing is performed such that the real world image having an opening is present on the boundary surface (the inner surface of the sphere).
  • the portion corresponding to the real world image is calculated by drawing from the virtual camera.
  • data indicating the position of an opening is set on the boundary surface of the virtual space (here, the inner surface of the sphere).
  • data is set that indicates the presence or absence of an opening at each point of the boundary surface.
  • a spherical object similar to the above is placed in the virtual world where a virtual object is present, and a similar α-texture is set on the spherical object.
  • rendering is performed by applying, to the planar polygon described above, the portion of the α-texture set on the spherical object that corresponds to the portion drawn by the virtual world drawing camera.
  • this spherical object is an object used only to calculate an opening, but is an object not drawn when the virtual world is drawn.
  • data indicating an opening is data having information of each point of the boundary surface.
  • the data may be information defining the position of an opening in the boundary surface by a calculation formula.
  • In the second space, a polygon (object) is placed, to which a background image (texture) of the second space included in the field of view of the virtual camera through an opening is to be attached.
  • the background of the second space is occasionally referred to as a “back wall”.
  • objects are placed so as to represent enemy characters and various characters representing bullets for shooting down the enemy characters.
  • the objects placed in the virtual space move in the virtual space in accordance with logic (algorithm) programmed in advance.
  • predetermined objects (e.g., some of the enemy characters) can move between the first space and the second space through an opening formed in the boundary surface, or can move between the first space and the second space by forming an opening in the boundary surface themselves.
  • a particular event for forming an opening in the game is, for example, an event where an enemy character collides with the boundary surface (a collision event).
  • Another example of the event is one where, in the progression of the game scenario, the boundary surface is destroyed at predetermined timing, and an enemy character present in the second space enters the first space (an enemy character appearance event).
  • an opening may be automatically formed in accordance with the passage of time.
  • an opening may be repaired in accordance with a predetermined game operation of the player. For example, the player may reduce (repair) a formed opening by hitting the opening with a bullet.
  • FIG. 22 shows a virtual three-dimensional space (game world) defined in the game program, which is an example of the image processing program according to the embodiment.
  • the boundary surface is spherical; however, in FIG. 22 , the boundary surface is shown as cylindrical for convenience.
  • display is performed on the upper LCD 22 of the game apparatus 10 such that the virtual world image representing the virtual three-dimensional space and the real world image are combined together.
  • the virtual space in the game according to the present embodiment is divided into the first space 1 and the second space 2 by the boundary surface 3 formed of the spherical surface having its central axis extending through the position of the virtual camera.
  • a camera image CI, which is a real world image captured by a real camera built into the game apparatus 10 ( FIG. 23 ), is combined with the virtual world image as if the camera image CI is present at a position on the boundary surface 3 , by the processes of steps 81 and 82 described later in the first drawing method, or by the processes of steps 83 through 85 described later in the second drawing method.
  • the real world image is a planar view image.
  • the virtual world image is also a planar view image. That is, a planar view image is displayed on the upper LCD 22 .
  • the real world image may be a stereoscopically visible image.
  • the present embodiment is not limited by the type of the real world image.
  • the camera image CI may be a still image, or may be a real-time real world image (moving image). In the game according to the present embodiment, the camera image CI is a real-time real world image. Further, the camera image CI, which is a real world image, is not limited by the type of the camera.
  • the camera image CI may be an image obtained by a camera that can be externally connected to the game apparatus 10 .
  • the camera image CI may be an image acquired from the outer capturing section 23 (compound eye camera) and/or the inner capturing section 24 (monocular camera).
  • the camera image CI is an image acquired using one of the left outer capturing section 23 a and the right outer capturing section 23 b of the outer capturing section 23 as a monocular camera.
  • the first space 1 is a space closer when viewed from the virtual camera than the boundary surface 3 , and is also a space surrounded by the boundary surface 3 .
  • the second space 2 is a space behind the boundary surface 3 as viewed from the virtual camera.
  • a back wall BW surrounding the boundary surface 3 is present. That is, the second space 2 is a space present between the boundary surface 3 and the back wall BW.
  • a given image is attached to the back wall BW.
  • an image representing outer space prepared in advance is attached, and display is performed such that the second space 2 , which is outer space, exists behind the first space 1 . That is, the first space 1 , the boundary surface 3 , the second space 2 , and the back wall BW are placed in the order from the closer area to the further area, as viewed from the virtual camera.
  • the image processing program according to the present invention is not limited to a game program, and these settings and rules do not limit the image processing program according to the present invention.
  • enemy objects EO can move in the virtual three-dimensional space, and can move between the first space 1 and the second space 2 through the boundary surface 3 described above.
  • representation is made such that on an image displayed on the upper LCD 22 , the enemy object EO moves from the further area to the closer area, or moves from the closer area to the further area, by passing through the real world image.
  • FIGS. 22 and 24 show the state where an enemy object EO moves between the first space 1 and the second space 2 by forming an opening in the boundary surface 3 or passing through an opening already present in the boundary surface 3 .
  • objects present in the first space 1 or the second space 2 have three types: enemy objects EO, a bullet object BO, and a back wall BW.
  • the image processing program according to the present invention is not limited to the types of the objects.
  • objects are virtual physical bodies present in the virtual space (the first space 1 and the second space 2 ).
  • given objects such as obstacle objects, may be present.
  • FIGS. 23 through 26 show examples of the game screen displayed on the upper LCD 22 . Descriptions are given below of examples of the forms of display shown in the respective figures.
  • an aiming cursor AL is displayed commonly in FIGS. 23 through 26 .
  • the aiming cursor AL for a bullet object BO, which is fired in accordance with an attack operation using the game apparatus 10 (e.g., pressing the button 14 B (A button)), is displayed commonly on the upper LCD 22 .
  • the aiming cursor AL is set so as to be directed in a predetermined direction in accordance with the program executed by the game apparatus 10 .
  • the aiming cursor AL is set so as to be fixed in the direction of the line of sight of the virtual camera, i.e., at the center of the screen of the upper LCD 22 .
  • the direction of the line of sight of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method) is changed in accordance with the imaging direction of the outer capturing section 23 .
  • the player can change the direction of the aiming cursor AL in the virtual space by changing the orientation of the game apparatus 10 .
  • the player performs an attack operation by, for example, pressing the button 14 B (A button) of the game apparatus 10 with the thumb of the right hand holding the lower housing 11 .
  • the player fires the bullet object BO by the attack operation, to thereby vanquish an enemy object EO and repair an opening present in the boundary surface 3 , in the game according to the present embodiment.
  • an enemy object EO present in the first space 1 and a camera image CI captured by the real camera built into the game apparatus 10 are displayed on the upper LCD 22 .
  • the enemy object EO is arbitrarily set.
  • the enemy object EO is, for example, an object obtained by using, as a texture, an image (e.g., a photograph of a person's face) stored in the data storage external memory 46 or the like of the game apparatus 10 , and attaching the image to a three-dimensional polygon model of a predetermined shape (a polygon model representing a three-dimensional shape of a human head) by a predetermined method.
  • the camera image CI displayed on the upper LCD 22 is, as described above, a real-time real world image captured by the real camera built into the game apparatus 10 .
  • the camera image CI may be an image (e.g., a photograph of a landscape) stored in the data storage external memory 46 or the like of the game apparatus 10 .
  • the enemy object EO can arbitrarily move.
  • the enemy object EO present in the first space 1 can move to the second space 2 .
  • FIG. 24 shows an example of the state where the enemy object EO present in the first space 1 moves from the first space 1 to the second space 2 .
  • the enemy object EO present in the first space 1 moves to the second space 2 by forming an opening in the boundary surface 3 .
  • the enemy object EO having moved to the second space 2 is displayed as a shadow (silhouette model) ES at a position in an unopen area in the boundary surface 3 , as viewed from the virtual camera.
  • the second space 2 is viewed through the opening in the boundary surface 3 . That is, when an opening is present in the boundary surface 3 in the field of view of the virtual camera, a part of an image of the second space 2 is displayed through the opening, on the upper LCD 22 .
  • the image of the second space 2 is specifically objects present in the second space 2 , such as the enemy object EO and a back wall BW that are present in the second space 2 .
  • the shadow ES represents the shadow of the enemy object EO.
  • FIG. 27A shows silhouette models of the shadow of the enemy object EO as viewed from above.
  • FIG. 27B is an example of silhouette models of the shadow of the enemy object EO. As shown in FIGS. 27A and 27B , silhouette models are set to correspond to a plurality of orientations.
  • the silhouette models are, for example, eight planar polygons shown in FIG. 27A .
  • the silhouette models (eight planar polygons) are placed at the same position as that of the enemy object EO, which is a substance model. Further, the planar polygons have sizes included in the substance model (do not protrude beyond the substance model).
  • a texture is attached that is obtained by drawing a shadow image of the enemy object EO as viewed in the normal direction of the surface of the planar polygon.
  • the shadow ES is displayed by drawing the corresponding silhouette model.
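One way to pick the "corresponding" silhouette model among the eight prepared planar polygons is to choose the one whose normal faces the virtual camera most directly, as in the following C++ sketch. The two-component direction type and function name are hypothetical.

```cpp
#include <cmath>
#include <cstdio>

// Direction in the horizontal plane, used both for the eight silhouette planar
// polygons' normals and for the direction from the enemy object toward the
// virtual camera (a simplified, hypothetical setup).
struct Dir2 { float x, z; };

// Among the eight prepared silhouette planar polygons (normals 45 degrees
// apart), returns the index of the one facing the virtual camera most directly,
// i.e., the one whose shadow texture would be drawn as the shadow ES.
int pickSilhouette(const Dir2 normals[8], Dir2 toCamera) {
    int best = 0;
    float bestDot = -1.0e9f;
    for (int i = 0; i < 8; ++i) {
        float dot = normals[i].x * toCamera.x + normals[i].z * toCamera.z;
        if (dot > bestDot) { bestDot = dot; best = i; }
    }
    return best;
}

int main() {
    Dir2 normals[8];
    for (int i = 0; i < 8; ++i) {
        float a = i * 45.0f * 3.14159265f / 180.0f;
        normals[i] = {std::cos(a), std::sin(a)};
    }
    Dir2 toCamera = {0.7f, 0.7f};
    std::printf("silhouette index: %d\n", pickSilhouette(normals, toCamera));
    return 0;
}
```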
  • the silhouette model is present in the back of the substance model, and therefore, the substance model of the enemy object EO is drawn. Accordingly, the silhouette model is not drawn, and therefore, the shadow is not displayed. This is because the silhouette model is set to be included in the substance model of the enemy object EO.
  • That is, the following are displayed: (1) an image of an object present in the first space 1 ; (2) in an unopen area in the real world image, a combined image of a shadow image of an object present in the second space 2 and the real world image (e.g., a semi-transparent shadow image is combined with the real world image); and (3) in an open area in the real world image, an image (substance image) of an object present in the second space 2 is preferentially combined, and a back wall image is combined in the back of the image.
  • there is also a case where an image is present across an open area and an unopen area.
  • FIG. 25 shows such a state where the enemy object EO present in the second space 2 has moved to the edge of an opening set in the boundary surface 3 .
  • the enemy object EO present in the second space 2 is displayed on the upper LCD 22 such that: an image of the enemy object EO is displayed as it is in the region of the second space 2 that can be viewed through the opening, as viewed from the virtual camera; and the shadow ES is displayed in the region of the second space 2 that cannot be viewed through the opening, as viewed from the virtual camera.
  • FIG. 28 shows an example of the non-transparencies (alpha values) set for objects in the present embodiment.
  • for the substance model of the enemy object EO, a non-transparency of 1 is set in the entire model.
  • for a texture of each of the silhouette models (planar polygons) of the enemy object, a non-transparency of 1 is set in the entire shadow image.
  • a bullet model is set in a similar manner.
  • for the semi-transparent enemy object, a non-transparency of 0.6 is set in the entire model.
  • for the screen object, a non-transparency of 0.2 is set as a material, and a non-transparency of 1 or 0 is set on each point of the α-texture, which is a texture of the screen object. “1” indicates an unopen portion, and “0” indicates an open portion. That is, for the screen object, two types of settings are made as non-transparency values: that of the material and that of the texture.
  • a depth determination is valid between each pair of: an enemy object, a bullet object, a semi-transparent enemy object, an effect object, and the screen object.
  • a depth determination is valid “between the shadow planar polygon and the enemy object”, “between the shadow planar polygon and the bullet object”, “between the shadow planar polygon and the semi-transparent enemy object”, and “between the shadow planar polygon and the effect object”.
  • a depth determination is invalid between the shadow planar polygon and the screen object.
  • rendering is performed in accordance with a normal perspective projection.
  • a hidden surface is removed in accordance with the depth direction from the virtual camera.
  • when a depth determination is invalid, an object is rendered even if a target object is present in an area closer to the virtual camera than that of the object.
  • the substance of the enemy object, the bullet object, the semi-transparent enemy object, and the effect object are drawn by the following formula.
  • the screen object is drawn by the following formula.
  • “color of object (color of real world image) × non-transparency of texture of object + color of background × (1 − non-transparency of texture of object)”
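A direct transcription of the screen-object formula quoted above is sketched below in C++; with the α-texture values of the embodiment (1 for unopen, 0 for open), an unopen texel shows the real world image and an open texel shows whatever lies behind the boundary surface.

```cpp
#include <cstdio>

struct Color { float r, g, b; };

// Screen object blend: color of object (real world image) x texture alpha
// + color of background x (1 - texture alpha).
Color blendScreenObject(Color realWorld, Color background, float textureAlpha) {
    Color out;
    out.r = realWorld.r * textureAlpha + background.r * (1.0f - textureAlpha);
    out.g = realWorld.g * textureAlpha + background.g * (1.0f - textureAlpha);
    out.b = realWorld.b * textureAlpha + background.b * (1.0f - textureAlpha);
    return out;
}

int main() {
    Color camera = {0.8f, 0.7f, 0.6f};       // real world image texel
    Color space  = {0.0f, 0.0f, 0.2f};       // color behind the boundary surface
    Color open   = blendScreenObject(camera, space, 0.0f);   // open: background visible
    Color unopen = blendScreenObject(camera, space, 1.0f);   // unopen: real world visible
    std::printf("open:   %.1f %.1f %.1f\n", open.r, open.g, open.b);
    std::printf("unopen: %.1f %.1f %.1f\n", unopen.r, unopen.g, unopen.b);
    return 0;
}
```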
  • the silhouette model of the enemy object is drawn by the following formula.
  • here, the “non-transparency of material of background” is the “non-transparency of material of the screen object (boundary surface 3 )”.
  • FIG. 26 shows the state where an opening present in the boundary surface 3 is closed by hitting it with the bullet object BO.
  • when the bullet object BO has collided with an unopen area present in the boundary surface 3 , data of the unopen state is set for the boundary surface present in a certain range from the collision point.
  • the bullet object BO having collided with the opening disappears (thus, the bullet object BO has disappeared in FIG. 26 ).
  • the bullet object BO moves to the second space by passing through the opening.
  • the real-time real world image captured by the real camera built into the game apparatus 10 is displayed as an image such that the real-time real world image seems to be present on the boundary surface 3 .
  • a change in the direction of the game apparatus 10 in real space also changes the imaging range captured by the game apparatus 10 , and therefore also changes the camera image CI displayed on the upper LCD 22 .
  • the game apparatus 10 changes the position and the direction of the virtual camera (the virtual world drawing camera in the second drawing method) in the virtual space in accordance with the motion of the game apparatus 10 in real space.
  • the enemy object EO displayed as if placed in real space and an opening present in the boundary surface 3 are displayed as if placed at the same positions in real space even when the direction of the game apparatus 10 has changed in real space.
  • For example, suppose that the imaging direction of the real camera of the game apparatus 10 is turned left.
  • In this case, the display positions of the enemy object EO and of the opening present in the boundary surface 3 , displayed on the upper LCD 22 , move in the direction opposite to the turn in the imaging direction of the real camera (i.e., in the right direction). That is, the direction of the virtual camera (the virtual world drawing camera in the second drawing method) in the virtual space, where the enemy object EO and the opening present in the boundary surface 3 are placed, turns to the left as does that of the real camera.
  • the enemy object EO and the opening present in the boundary surface 3 are displayed on the upper LCD 22 as if placed in a real space represented by the camera image CI.
  • FIG. 29 is a flow chart showing an example of the operation of image processing performed by the game apparatus 10 executing the image processing program.
  • FIG. 30 is a subroutine flow chart showing an example of a detailed operation of an enemy-object-related process performed in step 53 of FIG. 29 .
  • FIG. 31 is a subroutine flow chart showing an example of a detailed operation of a bullet-object-related process performed in step 54 of FIG. 29 .
  • FIGS. 32A and 32B are each a subroutine flow chart showing an example of a detailed operation of a display image updating process (the first drawing method and the second drawing method) performed in step 57 of FIG. 29 .
  • With reference to FIG. 29 , a description is given of the operation of the information processing section 31 .
  • the CPU 311 executes a boot program (not shown). This causes the programs stored in the built-in memory, the external memory 45 , or the data storage external memory 46 , to be loaded into the main memory 32 .
  • the steps shown in FIG. 29 are performed. It should be noted that in FIGS. 29 through 32A , processes not directly related to the present invention and peripheral processes are not described.
  • the information processing section 31 performs the initialization of the image processing (step 51 ), and proceeds to the subsequent step. For example, the information processing section 31 sets the initial position and the initial direction of the virtual camera for generating a virtual world image (an image of the virtual space) in the virtual camera data Dj, and sets the coordinate axes (e.g., X, Y, and Z axes) of the virtual space where the virtual camera is placed. Subsequently, the information processing section 31 acquires various data from each component of the game apparatus 10 (step 52 ), and proceeds to the subsequent step 53 . For example, the information processing section 31 updates the real camera image data Db using a camera image captured by the currently selected capturing section (the outer capturing section 23 in the present embodiment).
  • the information processing section 31 acquires data indicating that the operation button 14 or the analog stick 15 has been operated, to thereby update the controller data Da 1 . Further, the information processing section 31 acquires angular velocity data indicating the angular velocities detected by the angular velocity sensor 40 , to thereby update the angular velocity data Da 2 .
  • the information processing section 31 performs an enemy-object-related process (step 53 ), and proceeds to the subsequent step 54 .
  • the enemy-object-related process is described below.
  • the information processing section 31 determines whether or not conditions for the appearance of an enemy object EO have been satisfied (step 61 ).
  • the conditions for the appearance of an enemy object EO may be: that the enemy object EO appears at predetermined time intervals; that in accordance with the disappearance of the enemy object EO from the virtual world, a new enemy object EO appears; or that the enemy object EO appears at a random time.
  • the conditions for the appearance of an enemy object EO are, for example, set by the group of various programs Pa stored in the main memory 32 .
  • When the conditions for the appearance of an enemy object EO have been satisfied, the information processing section 31 proceeds to the subsequent step 62 . When the conditions have not been satisfied, the information processing section 31 proceeds to step 63 .
  • In step 62 , the information processing section 31 generates and initializes the enemy object data Df corresponding to the enemy object EO that has satisfied the conditions for the appearance, and proceeds to the subsequent step 63 .
  • the information processing section 31 acquires the substance data Df 1 , the silhouette data Df 2 , the opening shape data Df 3 , and data of polygons corresponding to the enemy object EO, using the group of various programs Pa stored in the main memory 32 .
  • the information processing section 31 generates the enemy object data Df including the above items of data.
  • the information processing section 31 initializes: data indicating the placement direction and the placement position of the polygons corresponding to the enemy object EO in the virtual space; and data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the generated enemy object data Df.
  • the initialization is made by a known method.
  • the information processing section 31 moves the enemy object EO placed in the virtual space (step 63 ), and proceeds to the subsequent step 64 .
  • the information processing section 31 updates data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, based on the data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the enemy object data Df.
  • the information processing section 31 updates the data indicating the placement direction of the enemy object EO, the data included in the enemy object data Df, based on the data indicating the moving direction.
  • the information processing section 31 may update the data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the enemy object data Df.
  • the update of the data indicating the moving velocity and the moving direction allows the enemy object EO to move in the virtual space at a given velocity in a given direction.
  • the information processing section 31 determines whether or not the enemy object EO has reached a certain distance from the position of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method) (step 64 ). For example, the information processing section 31 compares the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with data indicating the placement position of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method), the data included in the virtual camera data Dj.
  • Then, when the two items of data have satisfied predetermined conditions (e.g., the distance between the placement position of the enemy object EO and the placement position of the virtual camera has fallen below a predetermined value), the information processing section 31 determines that the enemy object EO has reached the certain distance from the position of the virtual camera; when the two items of data have not satisfied the predetermined conditions, the information processing section 31 determines that the enemy object EO has not reached the certain distance from the position of the virtual camera.
  • When the enemy object EO has reached the certain distance from the position of the virtual camera, the information processing section 31 proceeds to the subsequent step 65 . When the enemy object EO has not reached the certain distance, the information processing section 31 proceeds to step 66 .
  • In step 65 , the information processing section 31 performs a point deduction process, and proceeds to the subsequent step 66 .
  • the information processing section 31 deducts a predetermined value from the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the deduction.
  • the information processing section 31 may perform a process of causing the enemy object EO having reached the certain distance from the position of the virtual camera, to disappear from the virtual space (e.g., initializing the enemy object data Df concerning the enemy object EO having reached the certain distance from the position of the virtual camera, such that the enemy object EO is not present in the virtual space).
  • the predetermined value in the point deduction process may be a given value, and for example, may be set by the group of various programs Pa stored in the main memory 32 .
  • the information processing section 31 determines whether or not the enemy object EO is to pass through the boundary surface 3 (the enemy object EO is to move between the first space 1 and the second space 2 ). For example, the information processing section 31 compares the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with the data indicating the placement position of the boundary surface 3 , the data included in the boundary surface data Dd. Then, when the two items of data have satisfied predetermined conditions, the information processing section 31 determines that the enemy object EO is to pass through the boundary surface 3 . When the two items of data have not satisfied the predetermined conditions, the information processing section 31 determines that the enemy object EO is not to pass through the boundary surface 3 .
  • the predetermined conditions are, for example, that the coordinates (placement position) of the enemy object EO in the virtual space satisfy conditional equations for the spherical surface of the boundary surface 3 .
  • the data indicating the placement position of the boundary surface 3 in the virtual space indicates the existence range of the boundary surface 3 in the virtual space, and is, for example, conditional equations for the spherical surface (the shape of the boundary surface 3 according to the present embodiment).
  • the placement position of the enemy object EO satisfies the conditional equations, the enemy object EO is present on the boundary surface 3 in the virtual space. In the present embodiment, for example, in such a case, it is determined that the enemy object EO is to pass through the boundary surface 3 .
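The pass-through determination described above can be pictured as testing whether the enemy object's coordinates satisfy the equation of the spherical boundary surface. This is a minimal sketch under the assumption that the boundary surface 3 is a sphere of radius R centred at the origin; the radius and the tolerance are illustrative values.

```python
import math

R = 10.0    # assumed radius of the boundary surface 3
EPS = 0.05  # tolerance for "satisfies the conditional equation"

def on_boundary_surface(pos, radius=R, eps=EPS):
    """True when `pos` satisfies x^2 + y^2 + z^2 = R^2 within a small tolerance,
    i.e. the enemy object EO lies on the boundary surface 3."""
    x, y, z = pos
    return abs(math.sqrt(x * x + y * y + z * z) - radius) < eps
```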
  • When determining that the enemy object EO is to pass through the boundary surface 3 , the information processing section 31 proceeds to the subsequent step 67. On the other hand, when determining that the enemy object EO is not to pass through the boundary surface 3 , the information processing section 31 ends the process of this subroutine.
  • In step 67, the information processing section 31 performs a process of updating the opening determination data included in the boundary surface data Dd, and ends the process of the subroutine.
  • This process is a process for registering, in the boundary surface data Dd, information of an opening produced in the boundary surface 3 by the enemy object EO passing through the boundary surface 3 .
  • the information processing section 31 multiplies: the alpha values of the opening determination data of an area having its center at a position corresponding to the position where the enemy object EO passes through the boundary surface 3 in the virtual space, the opening determination data included in the boundary surface data Dd; by the alpha values of the opening shape data Df 3 .
  • the opening shape data Df 3 is texture data in which alpha values of “0” are stored and which has its center at the placement position of the enemy object EO. Accordingly, based on the multiplication, the alpha values of the opening determination data of the area where the opening is generated so as to have its center at the placement position of the enemy object EO (the coordinates of the position where the enemy object EO passes through the boundary surface 3 ) are “0”. That is, the information processing section 31 can update the state of the boundary surface (specifically, the opening determination data) without determining whether or not an opening is already present in the boundary surface 3 . It should be noted that it may be determined whether or not an opening is already present at the position of the collision between the enemy object and the boundary surface. Then, when an opening is not present, an effect may be displayed such that a real world image corresponding to the collision position flies as fragments.
  • the information processing section 31 may perform a process of staging the generation of the opening (e.g., causing a wall to collapse at the position where the opening is generated). In this case, the information processing section 31 needs to determine whether or not the position where the enemy object EO passes through the boundary surface 3 (the range where the opening is to be generated) has already been open. The information processing section 31 can determine whether or not the range where the opening is to be generated has already been open, by, for example, multiplying: data obtained by inverting the alpha values of the opening shape data Df 3 from “0” to “1”; by the alpha values of the opening determination data multiplied as described above.
  • When the range where the opening is to be generated is already open, the alpha values of the opening determination data are “0”, and therefore, the multiplication results are “0”. On the other hand, when the range is not yet open, the alpha values of the opening determination data are not “0”, and therefore, the multiplication results are not “0”.
  • the opening shape data Df 3 of the enemy object EO is texture data in which alpha values of “0” are stored so as to correspond to the shape of the enemy object EO.
  • the information processing section 31 may convert the alpha values of the texture data into “1”, based on a predetermined event.
  • When the alpha values of the opening shape data Df 3 have been converted into “1”, the multiplication does not change the alpha values of the opening determination data, and the enemy object EO passes through the boundary surface 3 without forming an opening. That is, this makes it possible to stage the enemy object EO as if it slips through the boundary surface 3 (see FIG. 22 ).
  • the predetermined event may be time intervals defined by random numbers or predetermined intervals, or may be the satisfaction of predetermined conditions in the game. These events may be, for example, set by the group of various programs Pa stored in the main memory 32 .
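The alpha multiplications described above (stamping an opening in step 67 and the optional "already open" check) can be illustrated with small Python lists standing in for the texture data. The 0 = open / non-zero = unopen convention follows the description above; the grid representation, function names, and sizes are assumptions made for illustration.

```python
def stamp_opening(opening_det, opening_shape, cx, cy):
    """Multiply the opening determination alphas around (cx, cy) by the opening
    shape alphas (0 inside the shape), marking the area as open regardless of
    whether it was already open."""
    h, w = len(opening_shape), len(opening_shape[0])
    for j in range(h):
        for i in range(w):
            y, x = cy - h // 2 + j, cx - w // 2 + i
            if 0 <= y < len(opening_det) and 0 <= x < len(opening_det[0]):
                opening_det[y][x] *= opening_shape[j][i]

def already_open(opening_det, opening_shape, cx, cy):
    """Optional check described above: multiply the *inverted* shape alphas by
    the determination alphas; if every product is 0, the area was already open."""
    h, w = len(opening_shape), len(opening_shape[0])
    total = 0.0
    for j in range(h):
        for i in range(w):
            y, x = cy - h // 2 + j, cx - w // 2 + i
            if 0 <= y < len(opening_det) and 0 <= x < len(opening_det[0]):
                total += (1.0 - opening_shape[j][i]) * opening_det[y][x]
    return total == 0.0
```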
  • the information processing section 31 performs a bullet-object-related process (step 54 ), and proceeds to the subsequent step 55 .
  • the bullet-object-related process is described below.
  • the information processing section 31 moves a bullet object BO in the virtual space in accordance with a moving velocity vector that is set (step 71 ), and proceeds to the subsequent step 72 .
  • the information processing section 31 updates data indicating the placement direction and the placement position of the bullet object BO, based on data indicating the moving velocity vector, the data included in the bullet object data Dg.
  • the information processing section 31 may update the data indicating the moving velocity vector, by a known method.
  • the information processing section 31 may change the method of updating the data indicating the moving velocity vector. For example, when the bullet object BO is a ball, the information processing section 31 may update the data indicating the moving velocity vector, taking into account the effect of gravity in the vertical direction in the virtual space.
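A per-frame update such as the one in step 71 can be sketched as follows; the frame time, the gravity constant, and the dictionary-based data layout are assumptions made for illustration, not the program's actual data structures.

```python
GRAVITY = (0.0, -9.8, 0.0)  # assumed gravity in the virtual space
DT = 1.0 / 60.0             # one frame at 60 fps (assumed)

def move_bullet(bullet, ballistic=False, dt=DT):
    """Advance the bullet object BO by its moving velocity vector; when treated
    as a ball, also bend the velocity by gravity in the vertical direction."""
    bullet["pos"] = tuple(p + v * dt for p, v in zip(bullet["pos"], bullet["vel"]))
    if ballistic:
        bullet["vel"] = tuple(v + g * dt for v, g in zip(bullet["vel"], GRAVITY))

bullet = {"pos": (0.0, 0.0, 0.0), "vel": (0.0, 0.0, 5.0)}
move_bullet(bullet)
```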
  • the information processing section 31 determines whether or not the user of the game apparatus 10 has performed a firing operation (step 72 ). For example, with reference to the controller data Da 1 , the information processing section 31 determines whether or not the user has performed a predetermined firing operation (e.g., pressing the button 14 B (A button)). When the firing operation has been performed, the information processing section 31 proceeds to the subsequent step 73 . On the other hand, when the firing operation has not been performed, the information processing section 31 proceeds to the subsequent step 74 .
  • In step 73, in accordance with the firing operation, the information processing section 31 places the bullet object BO at the position of the virtual camera in the virtual space, sets the moving velocity vector of the bullet object BO, and proceeds to the subsequent step 74.
  • the information processing section 31 generates the bullet object data Dg corresponding to the firing operation.
  • the information processing section 31 stores the data indicating the placement position and the placement direction (the direction of the line of sight) of the virtual camera, the data included in the virtual camera data Dj, in the data indicating the placement position and the placement direction of the bullet object BO, the data included in the generated bullet object data Dg.
  • the information processing section 31 stores a given value in the data indicating the moving velocity vector, the data included in the generated bullet object data Dg.
  • the value to be stored in the data indicating the moving velocity vector may be set by the group of various programs Pa stored in the main memory 32 .
  • the information processing section 31 determines whether or not the enemy object EO and the bullet object BO have made contact with each other in the virtual space. For example, by comparing the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, the information processing section 31 determines whether or not the enemy object EO and the bullet object BO have made contact with each other in the virtual space. For example, when the data indicating the placement position of the enemy object EO and the data indicating the placement position of the bullet object BO have satisfied predetermined conditions, the information processing section 31 determines that the enemy object EO and the bullet object BO have made contact with each other.
  • the information processing section 31 determines that the enemy object EO and the bullet object BO have not made contact with each other.
  • the predetermined conditions are, for example, that the distance between the placement position of the enemy object EO and the placement position of the bullet object BO falls below a predetermined value.
  • the predetermined value may be, for example, a value based on the size of the enemy object EO.
  • When determining that the enemy object EO and the bullet object BO have made contact with each other, the information processing section 31 proceeds to the subsequent step 75. On the other hand, when determining that they have not made contact with each other, the information processing section 31 proceeds to the subsequent step 76.
  • In step 75, the information processing section 31 performs a point addition process, and proceeds to the subsequent step 76.
  • the information processing section 31 adds predetermined points to the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the addition.
  • the information processing section 31 performs a process of causing both objects having made contact with each other based on the determination in step 74 described above (i.e., the enemy object EO and the bullet object BO), to disappear from the virtual space (e.g., initializing the enemy object data Df concerning the enemy object EO having made contact with the bullet object BO, and the bullet object data Dg concerning the bullet object BO having made contact with the enemy object EO, such that the enemy object EO and the bullet object BO are not present in the virtual space).
  • the predetermined points in the point addition process may be a given value, and may be, for example, set by the group of various programs Pa stored in the main memory 32 .
  • In step 76, the information processing section 31 determines whether or not the bullet object BO has made contact with an unopen area in the boundary surface 3 . For example, using the placement position of the bullet object BO included in the bullet object data Dg and the opening determination data, the information processing section 31 determines whether or not the bullet object BO has made contact with an unopen area in the boundary surface 3 .
  • the information processing section 31 determines whether or not the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, satisfies conditional equations for the spherical surface of the boundary surface 3 , as in the process of the enemy object EO. Then, when the data indicating the placement position of the bullet object BO does not satisfy the conditional equations for the spherical surface, the information processing section 31 determines that the bullet object BO has not made contact with the boundary surface 3 . On the other hand, when the data indicating the placement position of the bullet object BO satisfies the conditional equations for the spherical surface of the boundary surface 3 , the bullet object BO is present on the boundary surface 3 in the virtual space.
  • the information processing section 31 acquires the alpha values of the opening determination data of a predetermined area having its center at a position corresponding to the position where the bullet object BO is present on the boundary surface 3 .
  • the predetermined area is a predetermined area having its center at the contact point of the bullet object BO and the boundary surface 3 .
  • When the acquired alpha values include alpha values corresponding to an unopen area, the information processing section 31 determines that the bullet object BO has made contact with an unopen area in the boundary surface 3 . In this case, the information processing section 31 proceeds to the subsequent step 77. On the other hand, when determining that the bullet object BO has not made contact with an unopen area in the boundary surface 3 , the information processing section 31 proceeds to the subsequent step 78.
  • In step 77, the information processing section 31 performs a process of updating the opening determination data, and proceeds to the subsequent step 78.
  • the information processing section 31 updates, in the boundary surface 3 , the alpha values of the opening determination data of the predetermined area having its center at the position corresponding to the placement position of the bullet object BO that has made contact with the unopen area in the boundary surface 3 based on the determination, to alpha values of “1”, which correspond to an unopen area.
  • When the bullet object BO has made contact with the unopen area, by this updating process, all the alpha values of the opening determination data in a predetermined area having its center at the contact point are updated to “1”. Even when an opening is already present in a part of this area, the alpha values of the opening determination data of this part are also updated to “1”. That is, when the bullet object BO has made contact with the edge of an opening provided in the boundary surface 3 , the opening included in a predetermined area having its center at the position of the contact is repaired to the state of being unopen.
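The repair in step 77 can be sketched as filling a region of the opening determination data with "1" around the contact point. The circular region, its radius, and the list-of-lists grid are assumptions for illustration.

```python
def repair_area(opening_det, cx, cy, radius=4):
    """Set every opening determination alpha within `radius` of the contact point
    (cx, cy) back to 1.0, which also closes any opening edge inside that area."""
    for y in range(max(0, cy - radius), min(len(opening_det), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(opening_det[0]), cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                opening_det[y][x] = 1.0
```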
  • the information processing section 31 performs a process of causing the bullet object BO having made contact based on the determination in step 76 , to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO having made contact with the unopen area in the boundary surface 3 , such that the bullet object BO is not present in the virtual space).
  • the predetermined area used in the updating process may be a given area, and may be, for example, set by the group of various programs Pa stored in the main memory 32 .
  • In step 78, the information processing section 31 determines whether or not the bullet object BO has reached a predetermined position in the virtual space.
  • the predetermined position may be, for example, the position where a back wall BW is present in the virtual space.
  • the information processing section 31 determines whether or not the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, indicates that the bullet object BO has collided with the back wall BW.
  • When determining that the bullet object BO has reached the predetermined position, the information processing section 31 proceeds to the subsequent step 79.
  • On the other hand, when determining that the bullet object BO has not reached the predetermined position, the information processing section 31 ends the process of this subroutine.
  • In step 79, the information processing section 31 performs a process of causing the bullet object BO having reached the predetermined position based on the determination in step 78 described above, to disappear from the virtual space, and ends the process of the subroutine.
  • Specifically, the information processing section 31 performs a process of causing the bullet object BO having reached the predetermined position based on the determination in step 78 described above, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO such that the bullet object BO is not present in the virtual space).
  • the information processing section 31 calculates the motion of the game apparatus 10 (step 55 ), and proceeds to the subsequent step 56 .
  • the information processing section 31 calculates the motion of the game apparatus 10 (e.g., a change in the imaging direction of the real camera provided in the game apparatus 10 ) using the angular velocities indicated by the angular velocity data Da 2 , to thereby update the motion data Di using the calculated motion.
  • When the imaging direction of the real camera provided in the game apparatus 10 changes, the orientation of the entire game apparatus 10 also changes, and therefore, angular velocities corresponding to the change are generated in the game apparatus 10 .
  • the angular velocity sensor 40 detects the angular velocities generated in the game apparatus 10 , whereby data indicating the angular velocities is stored in the angular velocity data Da 2 .
  • the information processing section 31 can calculate the direction and the amount (angle) that have changed in the imaging direction of the real camera provided in the game apparatus 10 , as the motion of the game apparatus 10 .
  • the information processing section 31 changes the position of the virtual camera in the virtual space (step 56 ), and proceeds to the subsequent step 57 .
  • the information processing section 31 uses the motion data Di, to impart the same changes as those in the imaging direction of the real camera of the game apparatus 10 in real space, to the virtual camera in the virtual space, to thereby update the virtual camera data Dj using the position and the direction of the virtual camera after the changes.
  • For example, when the imaging direction of the real camera of the game apparatus 10 in real space has turned left by A°, the direction of the virtual camera in the virtual space also turns left by A°.
  • the enemy object EO and the bullet object BO displayed as if placed in real space are displayed as if placed at the same positions in real space even when the direction and the position of the game apparatus 10 have changed in real space.
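Steps 55 and 56 can be pictured as integrating the gyro reading over one frame and applying the same turn to the virtual camera's line of sight. The sketch below assumes a single yaw axis, a fixed frame time, and a direction vector projected onto the horizontal plane; all names and values are illustrative.

```python
import math

DT = 1.0 / 60.0  # assumed frame time

def update_camera_yaw(camera_dir_xz, yaw_rate_deg, dt=DT):
    """camera_dir_xz: (x, z) unit vector of the line of sight on the horizontal
    plane; yaw_rate_deg: angular velocity around the vertical axis from the gyro."""
    a = math.radians(yaw_rate_deg * dt)          # angle turned during this frame
    x, z = camera_dir_xz
    return (x * math.cos(a) - z * math.sin(a),   # rotate the virtual camera by the
            x * math.sin(a) + z * math.cos(a))   # same angle as the real camera
```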
  • FIG. 32A shows the display image updating process in the first drawing method, and FIG. 32B shows the display image updating process in the second drawing method.
  • the information processing section 31 performs a process of attaching the real camera image acquired in step 52 to the screen object (boundary surface 3 ) included in the viewing volume of the virtual camera (step 81 ), and proceeds to the subsequent step 82 .
  • the information processing section 31 updates the texture data of the real camera image included in the real world image data Dc, using the real camera image data Db updated in step 52 .
  • the information processing section 31 obtains the point where the direction of the line of sight of the virtual camera overlaps the boundary surface 3 , using the data indicating the placement direction and the placement position of the virtual camera in the virtual space, the data included in the virtual camera data Dj.
  • the information processing section 31 attaches the texture data of the real camera image included in the real world image data Dc, such that the obtained point is the center, to thereby update the boundary surface data Dd. At this time, the information processing section 31 acquires the opening determination data set for the area to which the texture data is attached, such that the opening determination data corresponds to the area corresponding to all the pixels of the texture data. Then, the information processing section 31 applies to the texture data the alpha values (“0” or “0.2”) set in the acquired opening determination data. Specifically, the information processing section 31 multiplies: color information of all the pixels of the texture data of the real camera image to be attached; by the alpha values at the corresponding positions of the opening determination data. By this process, an opening is represented in the real world image as described above.
  • an alpha value of “0.2” (an unopen area) stored in the opening determination data is handled as an alpha value of “1” set as the material described above.
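The alpha application in step 81 multiplies the colour of each texel of the real camera image by the opening determination alpha at the same position, with the "unopen" value of 0.2 handled as 1. A minimal sketch follows, assuming pixels are (r, g, b) tuples in row-major lists; the function name and data layout are illustrative.

```python
def apply_opening(texture, opening_det):
    """Multiply real-camera texel colours by the opening determination alphas,
    treating the 0.2 (unopen) value as fully opaque."""
    out = []
    for row, alpha_row in zip(texture, opening_det):
        out_row = []
        for (r, g, b), a in zip(row, alpha_row):
            a = 1.0 if a == 0.2 else a          # 0.2 (unopen) is handled as 1
            out_row.append((r * a, g * a, b * a))
        out.append(out_row)
    return out
```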
  • the texture data of the real camera image to be attached to the boundary surface 3 is image data of an area that is wider than the field of view of a virtual camera C 0 .
  • the information processing section 31 generates a display image by a process of rendering the virtual space (step 82 ), and ends the process of this subroutine.
  • the information processing section 31 generates an image obtained by rendering the virtual space where the boundary surface 3 (screen object), the enemy object EO, the bullet object BO, and the back wall BW are placed, to thereby update the rendered image data of the virtual space using the generated image, the rendered image data included in the rendered image data Dk. Further, the information processing section 31 updates the display image data Dl using the rendered image data of the virtual space.
  • FIG. 33 shows an example of the placement of the enemy object EO, the bullet object BO, the boundary surface 3 (the screen object in which the opening determination data is set), and the back wall BW in the virtual space.
  • the enemy object EO, the bullet object BO, the boundary surface 3 , and the back wall BW are each placed in accordance with the data indicating the placement position included in the corresponding one of the enemy object data Df, the bullet object data Dg, the boundary surface data Dd, and the back wall image data De.
  • the virtual camera C 0 for rendering the virtual space is placed in accordance with the data indicating the placement direction and the placement position, the data included in the virtual camera data Dj.
  • the information processing section 31 renders with a perspective projection from the virtual camera C 0 the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, so as to include the boundary surface 3 .
  • At this time, the information processing section 31 takes into account the information about the priority of drawing. In a normal perspective projection, rendering is performed such that an object present closer when viewed from the virtual camera in the virtual space is preferentially drawn; accordingly, an object present in the second space 2 is not drawn due to the presence of the boundary surface 3 . In the game according to the present embodiment, however, an opening is provided in the boundary surface 3 (real world image), so that a part of the second space 2 can be viewed through the opening. Further, the shadow of the object present in the second space 2 is drawn in combination with the real world image.
  • the information processing section 31 performs the rendering process using the information about the priority of drawing. It should be noted that in the image processing program according to the present embodiment, alpha values are used as an example of the priority of drawing.
  • the object present in the second space 2 (the enemy object EO or the back wall BW in the present embodiment) is present behind the boundary surface 3 .
  • the boundary surface 3 is the screen object to which the texture data of the real camera image is applied in the direction of the field of view (the range of the field of view) of the virtual camera C 0 in step 81 described above.
  • the opening determination data corresponding to each position is applied to the texture data of the real camera image. Accordingly, in the range of the field of view of the virtual camera C 0 , the real world image to which the opening determination data is applied is present.
  • the information processing section 31 draws (renders) images of a virtual object and the back wall BW that are present in the second space 2 , in an area that can be viewed through the open area. Further, in an area having the opening determination data in which alpha values of “0.2”, which correspond to an unopen area, are stored (an area handled as an area where alpha values of “1” are stored as an unopen area), the information processing section 31 does not draw the virtual object and the back wall BW that are present in the second space 2 . That is, in the image to be displayed, the real world image attached in step 81 described above is drawn in the portion corresponding to this area.
  • That is, in an area having the opening determination data in which alpha values of “0”, which correspond to an open area, are stored, rendering is performed such that image data included in the substance data Df 1 or the back wall image data De is drawn. Then, on the upper LCD 22 , images of the virtual object and the back wall BW are displayed in the portion corresponding to this area. On the other hand, in an area handled as an unopen area, the virtual object and the back wall BW that are present in the second space 2 are not drawn. That is, in the image to be displayed on the upper LCD 22 , the real world image is drawn in the portion corresponding to this area.
  • The shadow ES is drawn using the silhouette model of the enemy object EO present in the second space 2 described above; however, a depth determination is set to invalid between the boundary surface 3 and the shadow ES. Alpha values of “1” of the silhouette model are greater than alpha values of “0.2” of the boundary surface 3 , and therefore, the shadow ES is drawn in an area handled as an unopen area (an area having the opening determination data in which alpha values of “0.2” are stored). With this, an image of the shadow ES is drawn on the real world image. Meanwhile, where the substance model of the enemy object EO is drawn, the silhouette model is hidden by the substance model, and therefore is not drawn.
  • the shape of the boundary surface 3 is a central portion of a spherical surface, and therefore, the opening determination data may not be present depending on the direction of the line of sight of the virtual camera C 0 .
  • the above process is performed on the assumption that the opening determination data is present in which alpha values of “0.2” are stored in a simulated manner. That is, an area where the opening determination data is not present is handled as an area where alpha values of “1”, which indicate an unopen area, are stored.
  • the silhouette data Df 2 included in the enemy object data Df corresponding to the enemy object EO is set such that the normal directions of a plurality of planar polygons correspond to radiation directions as viewed from the enemy object EO, and to each planar polygon, a texture of the silhouette image of the enemy object EO as viewed from the corresponding direction is applied. Accordingly, in the image processing program according to the present embodiment, the shadow of the enemy object EO in the virtual space image is represented as an image on which the orientation of the enemy object EO in the second space 2 is reflected.
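One way to picture how silhouette data organised this way could be used at draw time is to select, among the planar polygons, the one whose outward normal points most directly back at the camera, so that the drawn shadow reflects the enemy object's orientation. The selection rule and data layout below are assumptions for illustration, not the program's stated method.

```python
def pick_silhouette(silhouettes, view_dir):
    """silhouettes: list of (normal, texture_id) pairs, normals in radiation
    directions as viewed from the enemy object; view_dir: unit vector from the
    enemy object toward the camera. Returns the texture facing the camera best."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(silhouettes, key=lambda s: dot(s[0], view_dir))[1]
```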
  • the information processing section 31 performs the rendering process such that the image data included in the aiming cursor image data Dm is preferentially drawn at the center of the field of view of the virtual camera C 0 (the center of the image to be rendered).
  • the information processing section 31 renders with a perspective projection the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, and generates a virtual world image as viewed from the virtual camera C 0 (an image including the aiming cursor AL), to thereby update the rendered image data of the virtual space (step 82 ). Then, the information processing section 31 updates the display image data Dl, using the updated rendered image data of the virtual space.
  • the information processing section 31 performs a process of rendering the real camera image acquired in step 52 described above (step 83 ), and proceeds to the subsequent step 84 .
  • the information processing section 31 updates the texture data of the real camera image included in the real world image data Dc using the real camera image data Db updated in step 52 described above.
  • the information processing section 31 generates an image obtained by rendering the real camera image using the updated real world image data Dc, to thereby update the rendered image data of the real camera image using the generated image, the rendered image data included in the rendered image data Dk.
  • With reference to FIGS. 35 and 36 , a description is given below of an example of the rendering process of the real camera image.
  • the information processing section 31 sets, as a texture, a real camera image obtained from the real camera of the game apparatus 10 , and generates a planar polygon on which the texture is mapped. Then, the information processing section 31 generates, as a real world image, an image obtained by rendering the planar polygon with a parallel projection from a real world image drawing camera C 1 .
  • a description is given of an example of the method of generating a real world image in the case where the entire real camera image obtained from the real camera of the game apparatus 10 is displayed on the entire display screen of the upper LCD 22 .
  • In the present embodiment, the combined image (the combined image of a real world image and a virtual world image) is displayed on the entire display screen of the upper LCD 22 ; however, the combined image may be displayed in a part of the display screen of the upper LCD 22 . In the following description, it is assumed that the entire real camera image is displayed in the entire combined image.
  • A planar polygon is considered, on which a texture having i pixels × i pixels is mapped onto an area of 1 unit × 1 unit of the coordinate system of the virtual space where the planar polygon is placed.
  • It is assumed that the display screen of the upper LCD 22 has horizontal W dots × vertical H dots, and that the entire texture of the real camera image corresponds to the entire display screen having W dots × H dots. That is, it is assumed that the size of the texture data of the camera image is horizontal W pixels × vertical H pixels.
  • the planar polygon only needs to be placed such that 1 dot × 1 dot on the display screen corresponds to a texture of 1 pixel × 1 pixel in the real camera image, and the above coordinate system only needs to be defined as shown in FIG. 36 . That is, an XY coordinate system of the virtual space where the planar polygon is placed is set such that the width of the planar polygon, on the entire main surface of which the texture of the camera image is mapped, corresponds to W/i units of the coordinate system, and the height of the planar polygon corresponds to H/i units of the coordinate system.
  • the planar polygon is placed such that when the center of the main surface of the planar polygon, on which the texture is mapped, coincides with the origin of the XY coordinate system of the virtual space, the horizontal direction of the planar polygon corresponds to the X-axis direction (the right direction is the X-axis positive direction), and the vertical direction of the planar polygon corresponds to the Y-axis direction (the up direction is the Y-axis positive direction).
  • an area of 1 unit × 1 unit in the above coordinate system corresponds to an area of i pixels × i pixels in the texture, and therefore, an area of horizontal (W/i) × vertical (H/i) in the planar polygon corresponds to the size of W pixels × H pixels in the texture.
  • The planar polygon placed in the coordinate system of the virtual space is rendered with a parallel projection such that 1 pixel in the real camera image (texture) corresponds to 1 dot on the display screen. With this, a real world image is generated that corresponds to the camera image obtained from the real camera of the game apparatus 10 .
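The sizing rule above can be checked with a short worked example; the screen size of W × H = 400 × 240 dots and i = 64 texels per coordinate unit are assumed example values, not figures taken from this application.

```python
# With the polygon spanning W/i x H/i coordinate units, a parallel projection maps
# 1 texel of the camera image to exactly 1 dot on the display screen.
W, H, i = 400, 240, 64
poly_width_units = W / i     # 6.25 units
poly_height_units = H / i    # 3.75 units
print(poly_width_units * i, poly_height_units * i)  # 400.0 240.0 -> one texel per dot
```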
  • the texture data of the real camera image included in the real world image data Dc is updated by the real camera image data Db.
  • the information processing section 31 updates the texture data by a given method.
  • the information processing section 31 may update the texture data, using an image obtained by enlarging or reducing the sizes of horizontal A and vertical B of the image in the real camera image data Db so as to coincide with an image having a size of W × H (an image of the texture data). Alternatively, the information processing section 31 may update the texture data by clipping an image having a size of W × H (an image of the texture data) from a predetermined position in the image in the real camera image data Db.
  • Further, when at least one of the sizes of horizontal A and vertical B of the image in the real camera image data Db is smaller than the sizes of horizontal W and vertical H in the texture data, the information processing section 31 may update the texture data by enlarging the image in the real camera image data Db so as to exceed the size of the texture data, and subsequently clipping an image having a size of W × H (an image of the texture data) from a predetermined position in the enlarged image.
  • In the above description, the horizontal × vertical size of the display screen of the upper LCD 22 coincides with the horizontal × vertical size of the texture data in the real camera image; however, these sizes do not need to coincide with each other. In this case, the size of the display screen of the upper LCD 22 and the size of the real world image do not coincide with each other.
  • the information processing section 31 may change the size of the real world image by a known method when the real world image is displayed on the display screen of the upper LCD 22 .
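The two texture-update options mentioned above (clipping a W × H region, or scaling an A × B camera image to W × H) can be sketched as follows; images are represented as row-major lists of pixels, and the nearest-neighbour scaling is an assumed, simplified resizing method.

```python
def clip(image, x0, y0, w, h):
    """Cut a w x h region out of the camera image starting at (x0, y0)."""
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]

def scale(image, w, h):
    """Resize the whole camera image to w x h with a nearest-neighbour lookup."""
    src_h, src_w = len(image), len(image[0])
    return [[image[y * src_h // h][x * src_w // w] for x in range(w)]
            for y in range(h)]
```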
  • the information processing section 31 performs a process of rendering the virtual space (step 84 ), and proceeds to the subsequent step 85 .
  • the information processing section 31 generates, taking the opening determination data into account, an image obtained by rendering the virtual space where the enemy object EO, the bullet object BO, and the back wall BW are placed, to thereby update the rendered image data of the virtual space using the generated image, the rendered image data included in the rendered image data Dk.
  • With reference to FIGS. 37 through 39 , an example of the rendering process is described below.
  • FIG. 37 shows an example of the placement of the enemy object EO, the bullet object BO, the boundary surface 3 (opening determination data), and the back wall BW in the virtual space.
  • the enemy object EO, the bullet object BO, the boundary surface 3 , and the back wall BW are each placed in accordance with the data indicating the placement position included in the corresponding one of the enemy object data Df, the bullet object data Dg, the boundary surface data Dd, and the back wall image data De.
  • a virtual world drawing camera C 2 for rendering the virtual space is placed in accordance with the data indicating the placement direction and the placement position, the data included in the virtual camera data Dj.
  • a real image in which an opening is provided is generated by multiplying the opening determination data by color information of the real world image (the rendered image data of the real camera image).
  • It is assumed that 1 horizontal coordinate unit × 1 vertical coordinate unit in the rendered image data of the real camera image corresponds to 1 horizontal coordinate unit × 1 vertical coordinate unit of the boundary surface 3 (specifically, the opening determination data) in the virtual space. That is, it is assumed that when the boundary surface 3 is viewed from the virtual world drawing camera C 2 shown in FIG. 37 or 38 with a perspective projection, the range of the boundary surface 3 in the field of view of the virtual world drawing camera C 2 corresponds to the horizontal × vertical size of the rendered image data of the real camera image.
  • FIG. 39 shows an example of the positional relationship between the virtual world drawing camera C 2 and the boundary surface 3 .
  • Further, it is assumed that 1 horizontal coordinate unit × 1 vertical coordinate unit in the opening determination data corresponds to 1 horizontal coordinate unit × 1 vertical coordinate unit in the rendered image data of the real camera image. Here, W is the number of horizontal dots on the display screen of the upper LCD 22 , H is the number of vertical dots on the display screen of the upper LCD 22 , and i is the number of pixels in the texture to be mapped onto 1 unit of the coordinate system of the virtual space. The boundary surface 3 is placed such that the range of the boundary surface 3 in the field of view of the virtual world drawing camera C 2 has a size of W × H.
  • the information processing section 31 generates an image obtained by rendering the virtual space such that the boundary surface 3 is present at the position described above.
  • the information processing section 31 performs the rendering process taking into account the combination of the real world image to be made later.
  • An example of the rendering process is specifically described below.
  • the information processing section 31 renders with a perspective projection from the virtual world drawing camera C 2 the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, such that the boundary surface 3 is present as shown in FIG. 37 (or FIG. 38 ). At this time, the information processing section 31 takes into account the information about the priority of drawing. In a normal perspective projection, rendering is performed such that an object present closer when viewed from the virtual camera in the virtual space is preferentially drawn. Accordingly, in the normal perspective projection, an object present in the second space 2 is not drawn due to the presence of the boundary surface 3 . In the game according to the present embodiment, an opening is provided in the boundary surface 3 (real world image), so that a part of the second space 2 can be viewed through the opening.
  • the shadow of the object present in the second space 2 is drawn in combination with the real world image. This makes it possible to give the user a feeling as if the virtual world further exists beyond the real world image.
  • the information processing section 31 performs the rendering process using the information about the priority of drawing. It should be noted that in the image processing program according to the present embodiment, alpha values are used as an example of the information about the priority of drawing.
  • the object present in the second space 2 (the enemy object EO or the back wall BW in the present embodiment) is present behind the boundary surface 3 .
  • the opening determination data is set in the boundary surface 3 .
  • the opening determination data is texture data of a rectangle in which alpha values are stored, and sets of coordinates in the texture data correspond to positions on the boundary surface in the virtual space.
  • the information processing section 31 can specify an area of the opening determination data in the range of the field of view of the virtual world drawing camera C 2 , the area corresponding to the object present in the second space 2 .
  • the information processing section 31 draws (renders) images of a virtual object and the back wall that are present in the second space 2 , in an area that can be viewed through the open area. Further, in an area having the opening determination data in which alpha values of “0.2”, which correspond to an unopen area, are stored (an area handled as an area where alpha values of “1” are stored as an unopen area), the information processing section 31 does not draw the virtual object and the back wall that are present in the second space 2 . That is, in the image to be displayed, a real world image is drawn in the portion corresponding to this area by a combination process in step 85 described later.
  • the virtual object and the back wall that are present in the second space 2 are not drawn. That is, in the image to be displayed on the upper LCD 22 , a real world image is drawn in the portion corresponding to this area by the combination process in step 85 described later.
  • a depth determination is set to invalid between the shadow ES and the boundary surface 3 .
  • alpha values of “1” of the silhouette model are greater than alpha values of “0.2” of the boundary surface 3 , and therefore, the shadow ES is drawn in an area where alpha values of “1”, which indicate an unopen area, are stored.
  • the shadow ES of the enemy object EO is drawn on the real world image.
  • The silhouette model of the enemy object EO has a size included in the substance model and is placed in such a manner; further, a depth determination is set to valid between the substance model of the enemy object EO and the silhouette model. Therefore, the silhouette model is hidden by the substance model, and is not drawn.
  • the shape of the boundary surface 3 is a central portion of a spherical surface, and therefore, the opening determination data may not be present depending on the direction of the field of view of the virtual world drawing camera C 2 .
  • the above process is performed on the assumption that the opening determination data is present in which alpha values of “0.2” are stored in a simulated manner. That is, an area where the opening determination data is not present is handled as an area where alpha values of “1”, which indicate an unopen area, are stored.
  • the silhouette data Df 2 included in the enemy object data Df corresponding to the enemy object EO is set such that the normal directions of a plurality of planar polygons correspond to radiation directions as viewed from the enemy object, and to each planar polygon, a texture of the silhouette image of the enemy object as viewed from the corresponding direction is applied. Accordingly, in the image processing program according to the present embodiment, the shadow ES of the enemy object EO in the virtual space image is represented as an image on which the orientation of the enemy object in the second space 2 is reflected.
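The per-pixel priority rule of the second drawing method described above can be summarised as follows: the opening determination alpha decides whether the second space 2 (and the back wall BW) is visible, the silhouette alpha of 1 beats the boundary value of 0.2 so the shadow ES shows on top of unopen areas, and pixels left undrawn receive the real world image in the later combination step. The inputs below are assumed to be pre-rasterised per-pixel samples; the function is an illustrative sketch, with missing opening data counted as unopen.

```python
def shade_pixel(opening_alpha, second_space_rgb, shadow_rgb):
    a = 0.2 if opening_alpha is None else opening_alpha  # no data -> treated as unopen
    if a == 0.0:                  # open area: the second space 2 is visible
        return second_space_rgb
    if shadow_rgb is not None:    # unopen area crossed by the silhouette model
        return shadow_rgb         # (alpha 1 has priority over 0.2)
    return None                   # leave undrawn; the real world image fills this
                                  # pixel in the combination process of step 85
```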
  • the information processing section 31 renders with a perspective projection the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, and generates a virtual world image as viewed from the virtual world drawing camera C 2 , to thereby update the rendered image data of the virtual space (step 84 of FIG. 32B ).
  • the image generated by this process is an image obtained by excluding the real world image from the display image shown in FIG. 40 .
  • the information processing section 31 generates a display image obtained by combining the real world image with the virtual space image (step 85 ), and ends the process of this subroutine.
  • the information processing section 31 generates a combined image of the real world image and the virtual space image by combining the rendered image data of the real camera image with the rendered image of the virtual space such that the rendered image of the virtual space is given preference. Then, the information processing section 31 generates a display image by preferentially combining the image data included in the aiming cursor image data at the center of the combined image (the center of the field of view of the virtual world drawing camera C 2 ) ( FIG. 40 ).
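The combination of step 85 can be sketched per pixel: where the rendered virtual space image has a pixel, it wins; elsewhere the rendered real camera image shows through; finally the aiming cursor image is stamped at the centre. The row-major lists, the use of None for undrawn pixels, and the cursor handling are simplifications assumed for illustration.

```python
def combine(real_img, virtual_img, cursor_img):
    h, w = len(real_img), len(real_img[0])
    # Give the virtual space image preference over the real world image.
    out = [[virtual_img[y][x] if virtual_img[y][x] is not None else real_img[y][x]
            for x in range(w)] for y in range(h)]
    # Preferentially draw the aiming cursor AL at the centre of the combined image.
    ch, cw = len(cursor_img), len(cursor_img[0])
    oy, ox = h // 2 - ch // 2, w // 2 - cw // 2
    for y in range(ch):
        for x in range(cw):
            if cursor_img[y][x] is not None:   # skip transparent cursor texels
                out[oy + y][ox + x] = cursor_img[y][x]
    return out
```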
  • FIG. 40 shows an example of the display image generated by the first drawing method or the second drawing method. It should be noted that when a virtual space image is not stored in the rendered image data of the virtual space, the information processing section 31 may store the real world image stored in the rendered image data of the camera image as it is in the display image data Dl.
  • the updating process of the display image is completed by the first drawing method or the second drawing method.
  • Next, the information processing section 31 displays the display image on the upper LCD 22 (step 58 ), and proceeds to the subsequent step.
  • For example, the CPU 311 of the information processing section 31 stores the display image data Dl updated in step 57 described above (the display image) in the VRAM 313 .
  • the GPU 312 of the information processing section 31 outputs the display image drawn in the VRAM 313 to the upper LCD 22 , whereby the display image is displayed on the upper LCD 22 .
  • the information processing section 31 determines whether or not the game is to be ended (step 59 ).
  • Conditions for ending the game may be, for example: that the predetermined conditions described above (the game is completed or the game is over) have been satisfied; or that the user has performed an operation for ending the game.
  • When determining that the game is not to be ended, the information processing section 31 proceeds to step 52 described above, and repeats the same process. On the other hand, when determining that the game is to be ended, the information processing section 31 ends the process of the flow chart.
  • the face image acquired in the face image acquisition process is not released for the user until the user succeeds in the game (first game). That is, the user cannot save the acquired face image in the saved data storage area Do of the game until the user succeeds in the game. Further, the user cannot, for example, copy, modify, or transfer the acquired face image. On the other hand, when the user stops retrying the game in the state where the user has failed in the game, the acquired face image is discarded, and the process ends.
  • the face image is immediately discarded, and the process ends. Accordingly, until the acquired face image is handed over, that is, until the face image is saved in the saved data storage area Do, the user is fixated on the game, and pursues a success with enthusiasm. That is, based on the image processing program according to the present embodiment, the user can tackle the game very seriously.
  • When the operation of the user on the GUI at the start of the game is an instruction to “acquire a face image with the inner capturing section 24 ” (“Yes” in step 9 of FIG. 14 ), and the face image acquisition process 1 (step 10 of FIG. 14 ) is performed, the following effects are also expected. That is, when a face image is acquired by performing capturing with the inner capturing section 24 at the start of the game (typically, before the first game is started), face images different from capture to capture are obtained.
  • Accordingly, a desire for a success in the game is increased.
  • a similar effect is expected also in the case of performing the face image acquisition process 2 by the outer capturing section 23 (step 12 of FIG. 14 ). This is because also when a face image is acquired by performing capturing with the outer capturing section 23 at the start of the game (typically, before the first game is started), face images different from capture to capture are obtained.
  • When the user has succeeded in the first game, the user can collect, in the saved data storage area Do, various face images, such as a face image of the user themselves, face images of people around the user, a face image included in an image obtained by a video device, and a face image of a living thing owned by the user.
  • the game apparatus 10 can display the collected face images, for example, on the screen as shown in FIGS. 7 and 8 . Then, the game apparatus 10 represents the state where, for example, on the screen as shown in FIG. 8 , face images related to the face image in the state of being selected show reactions.
  • Examples of the reactions include: giving a look to the face image in the state of being selected with one eye closed; and turning its face to the face image. Accordingly, in a virtual reality world including the face images collected in the game apparatus 10 , the game apparatus 10 can represent relationships based on human relationships, intimacies, and the like in the real world. As a result, it is possible to cause the user having collected the face images to have a sense of affinity for the virtual reality world including the face images, a familiarity with the collected face images, emotions similar to those toward people or living things in the real world, and the like.
  • Further, the user can generate an enemy object EO by texture-mapping a face image selected from among the collected face images onto the facial surface portion of the enemy object EO, and execute the game.
  • the user can freely determine a cast by attaching a face image selected from among the collected face images to the enemy object EO that appears in the game. Accordingly, during the execution of the game, the user can enhance the possibility of becoming increasingly enthusiastic about the game, by an effect obtained from the face of the enemy object EO.
  • In the above description, an enemy object EO is generated by attaching a face image to the enemy object EO; however, such a process is not limited to the generation of an enemy object EO, and can also be applied to the generation of character objects in general that appear in the game.
  • a face image acquired by the user may be attached to an agent who guides an operation on the game apparatus 10 or the progression of the game.
  • a face image acquired by the user may be attached to characters that appear in the game apparatus 10 , such as: a character object representing the user themselves; a character object that appears in the game in a friendly relationship with the user; a character object representing the owner of the game apparatus; and the like.
  • In the above description, a person's face is assumed to be a face image; however, the present invention is not limited to a face image of a person, and can also be applied to a face image of an animal.
  • face images may be collected by performing the face image acquisition process described in the first embodiment, in order to acquire face images of various animals, such as mammals, e.g., dogs, cats, and horses, birds, fish, reptiles, amphibians, and insects.
  • With the game apparatus 10 , it is possible to represent the relationships between people and animals in the real world, in a similar manner to the relationships represented between the people on the screen described above. For example, the game apparatus 10 can reflect emotions, consciousness, and real-world relationships on the virtual world such that when a face image of the pet has entered the state of being selected, face images corresponding to the master and their family smile at, or give looks to, the pet. Then, it is possible to execute the game while making the user conscious of real-world relationships, by attaching collected faces to enemy objects EO and other character objects by the cast determination process.
  • the relationship between pet and master, the relationships between the master and their family, and the like may be defined by the face image management information Dn 1 through a UIF (user interface), so that reference can be made to these relationships for the relationships between face images. It may be set such that emotions such as love and hate, and good and bad emotions toward a pet of a loved person and a pet of a hated person can be defined. Alternatively, for example, setting may be stored in the face image management information Dn 1 such that an animal whose face image has succeeded in being saved in the saved data storage area Do of the game when the result of the game executed with a face image of the master has been successful is in an intimate relationship with the master. With the game apparatus 10 , the user can execute a game in which a character object is generated and on which consciousness in the real world is reflected, based on the various face images collected as described above.
  • the user is led to acquire a face image by performing capturing with the inner capturing section 24 prior to the two capturing sections of the outer capturing section 23 (the left outer capturing section 23 a and the right outer capturing section 23 b ).
  • The inner capturing section 24 is used mainly to capture the user who operates the game apparatus 10 , and therefore is difficult to use to capture a person other than the user. Further, since the inner capturing section 24 is used mainly to capture the user who operates the game apparatus 10 , it is suitable for capturing the owner.
  • this process has an effect in prohibiting the use of the outer capturing section 23 in the state where neither the user nor the owner of the game apparatus 10 has a face image saved in the game apparatus 10 .
  • This makes it possible to increase the possibility of, for example, prohibiting a third person from capturing an image, using the game apparatus 10 whose owner is not specified or the game apparatus 10 whose user is not specified.
  • the information processing section 31 leads the user to preferentially capture a face image corresponding to an unacquired attribute. Such a process makes it possible to assist a face image collection process of a user who wishes to acquire face images having as balanced attributes as possible.
  • display is performed such that a real world image obtained from a real camera and a virtual space image including an object present behind the real world image are combined.
  • Based on the image processing program according to the present embodiment, it is possible to generate an image capable of attracting the user's interest, by performing drawing so as to represent unreality in a background in which a real world image is used.
  • a substance image of the object is displayed in the real world image (boundary surface 3 ), in an area where an opening is present.
  • a shadow image of the object is displayed in the real world image, in an area where an opening is not present (see FIG. 24 ).
  • the substance image and the shadow image are each an image corresponding to the orientation based on the placement direction or the moving direction of the object in the virtual space.
  • Based on the image processing program according to the present embodiment, it is possible to generate an image in which the user can recognize the activities, such as the number and the moving directions, of objects present behind the real world image.
  • An image of an unreal space, such as an image of outer space, can be used as the image data of the back wall BW. In this case, the image of the unreal space can be viewed through an opening in the real world image.
  • the opening is specified at a position in the virtual space. Then, the orientation of the real camera and the orientation of the virtual camera are associated together.
  • Based on the image processing program according to the present embodiment, it is possible to provide an opening at a position corresponding to the orientation of the real camera, and represent the opening at the same position in the real world image. That is, in the image processing program according to the present embodiment, even when the orientation of the real camera has changed, the opening is represented at the same position in real space. This makes it possible to generate an image that can be recognized by the user as if real space is linked with the unreal space.
  • the real world image in which an opening is represented is generated by the multiplication of the real world image obtained from the real camera and alpha values.
  • an opening in the real world image that is generated by the enemy object EO passing through the boundary surface 3 is generated by multiplying: the opening shape data Df 3 included in the enemy object data Df; by the opening determination data corresponding to a predetermined position.
  • the first game is executed in the face image acquisition process 1 (step 10 of FIG. 14 ) and the face image acquisition process 2 (step 12 of FIG. 14 ).
  • the game apparatus 10 permits the image acquired in the face image acquisition process 1 and the face image acquisition process 2 to be stored in the saved data storage area Do. Then, if the user succeeds in the first game, the user can sequentially add face images acquired by a similar process to the saved data storage area Do.
  • the game apparatus 10 creates character objects, such as enemy objects EO, for example, in accordance with an operation of the user or automatically. Then, the game apparatus 10 causes the character objects created based on the face images collected by the user, to appear for the user in the first game (step 106 of FIG. 15 and step 129 of FIG. 16 ), the second game (step 18 of FIG. 14 ), and the like, and provides the user with a virtual world on which human relationships and the like in the real world are reflected. Accordingly, in such a virtual world on which the real world is reflected, the user can enjoy executing, for example, the game as shown in FIGS. 20A through 26 .
  • the game according to the present embodiment is also provided to the user by the information processing section 31 of the game apparatus 10 executing the image processing program expanded in the main memory 32 .
  • the game according to the present embodiment may be, for example, executed as the first game (step 106 of FIG. 15 and step 129 of FIG. 16 ) for the face image acquisition process 1 in step 10 and the face image acquisition process 2 in step 12 , the processes shown in FIG. 14 .
  • the game according to the present embodiment may be executed on the assumption that face images are collected and accumulated in the saved data storage area Do of the game.
  • FIG. 41 shows an example of a screen displayed on the upper LCD 22 of the game apparatus 10 according to the present embodiment.
  • the procedure of the creation of this screen is similar to that of the first embodiment. That is, the case is assumed where, for example, the user holds the lower housing 11 with both hands as shown in FIG. 4 , such that the lower housing 11 and the upper housing 21 of the game apparatus 10 are in the open state. At this time, the user can view the display screen of the upper LCD 22 . Further, in this state, the outer capturing section 23 can, for example, capture space ahead in the line of sight of the user. During the execution of the game according to the present embodiment, the game apparatus 10 displays in the background of the screen an image captured by the outer capturing section 23 .
  • the information processing section 31 texture-maps, on a frame-by-frame basis, an image captured by the outer capturing section 23 onto the background portion of the screen of the game.
  • When the user changes the direction in which the outer capturing section 23 is pointed, an image acquired through the outer capturing section 23 in the direction of the line of sight of the outer capturing section 23 after the change is displayed in the background of the game. That is, in the background of the screen shown in FIG. 41 , an image acquired from the direction in which the user has directed the outer capturing section 23 of the game apparatus 10 is embedded.
  • on the screen shown in FIG. 41 , an enemy object EO 1 is displayed, the enemy object EO 1 being created in accordance with the procedure described with reference to the example of the screen shown in FIG. 10 and the cast determination process in FIG. 18 .
  • Display is performed such that a face image selected in the cast determination process in FIG. 18 is texture-mapped on the facial surface portion of the enemy object EO 1 .
  • the facial surface portion of the enemy object EO 1 does not necessarily need to be formed by texture mapping.
  • the enemy object EO 1 may be displayed by simply combining the peripheral portion H 13 of the head shape of the enemy object EO shown in FIG. 9 with the face image.
  • in either case, in the following description, an expression such as “a face image is attached to the facial surface portion of an enemy object” is used.
  • on the screen shown in FIG. 41 , around the enemy object EO 1 , enemy objects EO 2 through EO 7 , which are smaller than the enemy object EO 1 , are displayed. As described above, on the screen shown in FIG. 41 , seven enemy objects EO in total are displayed, namely the enemy objects EO 1 through EO 7 . In the process of the game apparatus 10 according to the present embodiment, however, the number of enemy objects EO is not limited to seven. It should be noted that as has already been described in the first embodiment, when the enemy objects EO 1 through EO 7 do not need to be distinguished from one another, they are referred to simply as “enemy objects EO”.
  • to any one or more of the enemy objects EO 2 through EO 7 (e.g., to the enemy object EO 6 ), the same face image as that of the enemy object EO 1 is attached. To the other enemy objects, face images different from that of the enemy object EO 1 are attached.
  • an aiming cursor AL for attacking the enemy object EO 1 and the like is displayed on the screen shown in FIG. 41 .
  • the positional relationships and the relative movement relationships between the aiming cursor AL and the background, and between the aiming cursor AL and the enemy objects EO 1 and the like, are similar to those described in the first embodiment ( FIGS. 23 through 26 ).
  • the enemy object EO 1 freely moves around on the screen shown in FIG. 41 . More specifically, the enemy object EO 1 freely moves around in a virtual space having an image captured by the outer capturing section 23 as its background. Accordingly, it seems to the user viewing the upper LCD 22 as if the enemy object EO 1 freely moves in the space where the user themselves is placed. Further, the enemy objects EO 2 through EO 7 are placed around the enemy object EO 1 .
  • when the user has changed the orientation of the game apparatus 10 relative to the enemy objects EO 1 through EO 7 that freely move around in the virtual space, the user can point the aiming cursor AL displayed on the screen at the enemy objects EO 1 through EO 7 .
  • when the user has pressed the operation button 14 B (A button) in the state where the aiming cursor AL is pointed at any of the enemy objects EO 1 through EO 7 , the user can fire a bullet at the enemy objects EO 1 through EO 7 .
  • an attack on, among the enemy objects EO 1 through EO 7 , those other than one having the same face image as that of the enemy object EO 1 is not a valid attack.
  • when the user has made a valid attack, the user scores points, or the attacked enemy object loses points. When the enemy objects EO 2 through EO 7 , each of which is smaller in dimensions than the enemy object EO 1 , have been attacked by the user, the user scores more points than when the enemy object EO 1 has been attacked. Alternatively, the enemy objects EO 2 through EO 7 lose more points than when the enemy object EO 1 has been attacked.
  • An attack on, among the enemy objects EO 2 through EO 7 , those having face images different from that of the enemy object EO 1 is an invalid attack. That is, the user is obliged to attack an enemy object having the same face image as that of the enemy object EO 1 .
  • an enemy object having a face image different from that of the enemy object EO 1 is referred to as a “misidentification object”.
  • the enemy objects EO 2 through EO 7 have head shapes of the same type.
  • any of the enemy objects EO 2 through EO 5 , EO 7 , and the like, which are misidentification objects, may have head shapes of different types.
  • the enemy objects EO are referred to simply as “enemy objects EO”.
  • FIG. 42 is a flow chart showing an example of the operation of the information processing section 31 .
  • the information processing section 31 receives the selection of a face image, and generates enemy objects (step 30 ).
  • the process of step 30 is, for example, similar to the cast determination process in FIG. 18 .
  • the misidentification objects may be generated by, for example, attaching face images other than the face image of the enemy objects EO specified in step 30 , to the facial surface portion of the head shape of the enemy objects EO.
  • the specification of the face images of the misidentification objects is not limited.
  • the face images of the misidentification objects may be selected from among face images already acquired by the user, as shown in FIGS. 7 and 8 .
  • the face images of the misidentification objects may be stored in advance in the data storage internal memory 35 before the shipment of the game apparatus 10 .
  • the face images of the misidentification objects may be stored in the data storage internal memory 35 simultaneously at the installation or the upgrading of the image processing program.
  • face images obtained by deforming the face image of the enemy objects EO, for example, face images obtained by switching parts of the face, such as the eyes, nose, and mouth, with those of another face image, may be used for the misidentification objects.
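  • as a rough illustration of how misidentification objects might be generated from the options described above, the following Python sketch combines the two approaches (reusing an already-acquired face image or deforming the target face image); the function and variable names are assumptions made only for illustration and do not come from the actual program.

```python
import random

def deform_face(face):
    # Placeholder for deforming the target face image, e.g. by switching
    # parts such as the eyes, nose, or mouth with those of another face image.
    return dict(face, deformed=True)

def attach_face(head_model, face):
    # Texture-map (or simply combine) the face image onto the facial
    # surface portion of the given head shape.
    return {"model": head_model, "face": face}

def create_misidentification_objects(target_face, saved_faces, head_model, count):
    """Create objects sharing the enemy object's head shape but carrying
    a face image other than the target face (sketch only)."""
    candidates = [f for f in saved_faces if f is not target_face]
    objects = []
    for _ in range(count):
        if candidates and random.random() < 0.5:
            face = random.choice(candidates)   # a face image already acquired by the user
        else:
            face = deform_face(target_face)    # or a deformed version of the target face
        objects.append(attach_face(head_model, face))
    return objects
```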
  • the information processing section 31 starts the game of the enemy objects EO and the misidentification objects (step 32 ). Then, the information processing section 31 determines whether or not the user has made an attack (step 33 ).
  • the attack of the user is detected by a trigger input, for example, the pressing of the operation button 14 B in the state where the aiming cursor AL shown in FIG. 41 is pointed at the enemy objects EO.
  • when an attack has been detected, the information processing section 31 determines whether or not the attack has been made on an appropriate enemy object EO (step 35 ). When the attack has been made on an appropriate enemy object EO, the information processing section 31 destroys the enemy object EO, and adds points to the score of the user (step 36 ).
  • in the determination in step 35 , when an attack on a misidentification object, which is not an appropriate enemy object EO, has been detected, the information processing section 31 performs nothing on the assumption that the attack is invalid. Further, in the determination in step 33 , when the user has not made an attack, the information processing section 31 performs another process (step 34 ).
  • Said another process is, for example, a process specific to each game. Examples of said another process include: a process of propagating the enemy object EO 6 and the misidentification objects EO 2 through EO 5 in FIG. 41 ; and a process of switching the position of the enemy object EO 6 and the positions of the misidentification objects EO 2 through EO 7 and the like in FIG. 41 .
  • the information processing section 31 determines whether or not the game is to be ended (step 37 ).
  • the game is ended, for example, when the user has destroyed all the propagating enemy objects EO, or when the score of the user has exceeded a reference value.
  • the game is ended, for example, when the enemy objects EO have propagated so as to exceed a predetermined limit, or when the points lost by the user have exceeded a predetermined limit.
  • when the game is not to be ended, the information processing section 31 returns to step 33 .
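  • a minimal Python sketch of the loop of steps 32 through 37 is given below; poll_attack and do_other_process are hypothetical hooks standing in for the trigger-input detection and the game-specific processing, and the point values are arbitrary assumptions.

```python
def run_game(enemy_objects, misidentification_objects,
             poll_attack, do_other_process, score_to_win=100):
    """Loop corresponding to steps 32-37 of FIG. 42 (sketch only)."""
    score = 0
    while enemy_objects and score < score_to_win:     # step 37: end conditions
        target = poll_attack()                        # step 33: has the user attacked?
        if target is None:
            do_other_process()                        # step 34: e.g., propagate or shuffle objects
        elif target in enemy_objects:                 # step 35: appropriate enemy object?
            enemy_objects.remove(target)              # step 36: destroy it and add points
            score += 10
        # an attack on a misidentification object is invalid: nothing happens
    return score
```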
  • the game is executed using a combination of appropriate enemy objects EO and misidentification objects so as to confuse the user. Accordingly, the user needs to correctly recognize the face images of the enemy objects EO. As a result, the user requires the ability to distinguish the enemy objects EO, as well as concentration.
  • the game apparatus 10 according to the present embodiment makes it possible to give the user a sense of tension when the game is executed, or to stimulate the user's brain while the user recognizes the face images.
  • in the above description of the present embodiment, the game is executed by creating enemy objects EO based on face images already stored in the saved data storage area Do of the game.
  • the processes of steps 100 through 105 of FIG. 15 may be performed. That is, the game according to the present embodiment may be executed in the state where a face image has been acquired in the face image acquisition process, but has yet to be stored in the saved data storage area Do of the game. Then, as in steps 107 through 110 of FIG. 15 , when the game has been successful, the acquired face image may be stored in the saved data storage area Do of the game.
  • the user tackles the game increasingly enthusiastically in order to save the acquired face image.
  • the game according to the present embodiment may be, for example, executed as the first game (step 106 of FIG. 15 and step 129 of FIG. 16 ) for the face image acquisition process 1 in step 10 and the face image acquisition process 2 in step 12 among the processes shown in FIG. 14 .
  • the game according to the present embodiment may be executed on the assumption that face images are collected and accumulated in the saved data storage area Do of the game.
  • the information processing section 31 of the game apparatus 10 executes the game according to the present embodiment as an example of the processing of the cast determination process in the first embodiment (step 16 of FIG. 14 ) and the second game (the game executed in step 18 of FIG. 14 ). Further, the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment also as the first game according to the first embodiment (the game executed in step 106 of FIG. 15 and step 129 of FIG. 16 ).
  • the enemy object EO is formed by combining the peripheral portion of the enemy object EO (see H 13 in FIG. 9 ) with a face image of the user.
  • a face image of a person close to the user may be used instead of the face image of the user.
  • the face image of the user (or the face image of the close person) may represent the state of being constrained by the enemy object EO.
  • representation may be made such that the face image of the user (or the face image of the close person) is released from the enemy object EO.
  • the face image of the user (or the face image of the close person) constrained by the enemy object EO gradually deforms. Then, when the deformation has exceeded a certain limit, the game may be ended.
  • FIG. 43 is an example of a screen displayed on the upper LCD 22 according to the present embodiment.
  • on the screen shown in FIG. 43 , an enemy object EO 1 (an example of a first character object), an enemy object EO 11 (an example of a second character object), and misidentification objects EO 12 through EO 16 (examples of a third character object) are displayed. To the enemy object EO 11 , the same face image as that of the enemy object EO 1 is attached.
  • the configuration of the misidentification objects EO 12 through EO 16 is similar to the second embodiment, and face images different from that of the enemy object EO 1 are attached to the misidentification objects EO 12 through EO 16 .
  • the configuration of FIG. 43 is illustrative, and the number of enemy objects that are smaller in dimensions than the enemy object EO 1 , such as the enemy object EO 11 , is not limited to one.
  • it is easy to point the aiming cursor AL at the enemy object EO 1 , which is larger in dimensions, and therefore, even when the user has attacked the enemy object EO 1 and a bullet has hit the enemy object EO 1 , the points scored by the user or the damage inflicted on the enemy object EO 1 are small. Further, it is difficult to point the aiming cursor AL at the enemy object EO 11 , which is smaller in dimensions, and therefore, when the user has attacked the enemy object EO 11 and a bullet has hit the enemy object EO 11 , the points scored by the user or the damage inflicted on the enemy object EO 11 are greater than those in the case of the enemy object EO 1 .
  • when the misidentification objects EO 12 through EO 16 have been attacked by the user, a part of the face image attached to the enemy object EO 1 is replaced with that of another face image. For example, in the case of FIG. 43 , in the face image attached to the enemy object EO 1 , an eyebrow and an eye are replaced with an eyebrow and an eye of the face image of the misidentification object EO 13 .
  • the misidentification objects EO 12 through EO 16 lead to the deformation of the enemy object EO 1 by confusing the user.
  • the misidentification objects EO 12 through EO 16 have head shapes of the same type. Any of the misidentification objects EO 12 through EO 16 , however, may have head shapes of different types.
  • FIG. 44 is a flow chart showing an example of the operation of the information processing section 31 .
  • the processes of steps 40 through 42 are similar to the processes of steps 30 through 32 of FIG. 42 , and therefore are not described.
  • when the enemy objects EO have been attacked by the user, the information processing section 31 reduces the deformation of the face image, and brings the face image of the enemy object EO 1 closer to the face image that is originally attached (step 44 ).
  • when the enemy object EO 11 shown in FIG. 43 , which is smaller in dimensions, has been attacked by the user, the user may score more points than when the enemy object EO 1 , which is larger in dimensions, has been attacked. Alternatively, the degree of the reduction of the deformation of the face image may be greater than when the enemy object EO 1 has been attacked by the user.
  • when the misidentification objects have been attacked by the user, the information processing section 31 advances the switching of parts of the face image attached to the enemy object EO 1 . That is, the information processing section 31 additionally deforms the face image (step 46 ). Further, when having detected neither an attack on the enemy objects EO nor an attack on the misidentification objects, the information processing section 31 performs another process (step 47 ). Said another process is similar to that in the case of step 34 of FIG. 42 . For example, the information processing section 31 propagates the enemy objects EO.
  • the information processing section 31 determines whether or not the game is to be ended (step 48 ). It is determined that the game is to be ended, for example, when the deformation of the face image of the enemy object EO has exceeded a reference limit. Alternatively, it is determined that the game is to be ended, for example, when the user has destroyed the enemy objects EO and scored points of a predetermined limit. When the game is not to be ended, the information processing section 31 returns to step 43 .
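  • the attack handling of steps 43 through 48 could be sketched as follows in Python; the object identifiers, point values, and deformation counters are illustrative assumptions only.

```python
def handle_attack(target, state):
    """One attack-resolution step of the FIG. 44 game (sketch only).

    state holds the deformation level of EO1's face image, the score,
    and the end-of-game limits."""
    if target == "EO1":                                   # larger object: small effect (step 44)
        state["deformation"] = max(0, state["deformation"] - 1)
        state["score"] += 1
    elif target == "EO11":                                # smaller object: larger effect
        state["deformation"] = max(0, state["deformation"] - 3)
        state["score"] += 5
    elif target is not None:                              # misidentification object hit (step 46)
        state["deformation"] += 1                         # advance the part switching
    # step 48: end when the deformation limit or the target score is reached
    return (state["deformation"] >= state["deformation_limit"]
            or state["score"] >= state["target_score"])
```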
  • when the enemy objects EO have been attacked by the user, the deformed face image is restored. Further, when the misidentification objects have been attacked by the user, the deformation of the face image is further advanced. Accordingly, the user needs to tackle the game with concentration, and this increases a sense of tension during the execution of the game, and therefore makes it possible to train concentration. Further, based on the game apparatus 10 according to the present embodiment, a face image of the user or a face image of a person close to the user is deformed. This makes it possible to increase the possibility that the user becomes enthusiastic about a game in a virtual reality world on which the real world is reflected.
  • in the cast determination process in step 16 and the process of the execution of the game in step 18 of FIG. 14 , the description is given assuming the case where the game is executed by creating enemy objects EO based on face images already stored in the saved data storage area Do of the game.
  • the processes of steps 100 through 105 of FIG. 15 may be performed. That is, the game according to the third embodiment may be executed in the state where a face image has been acquired in the face image acquisition process, but has yet to be stored in the saved data storage area Do of the game. Then, as in steps 107 through 110 of FIG. 15 , when the game has been successful, the acquired face image may be stored in the saved data storage area Do of the game.
  • the user tackles the game increasingly enthusiastically.
  • the game according to the present embodiment may be, for example, executed as the first game (step 106 of FIG. 15 and step 129 of FIG. 16 ) for the face image acquisition process 1 in step 10 and the face image acquisition process 2 in step 12 among the processes shown in FIG. 14 .
  • the game according to the present embodiment may be executed on the assumption that face images are collected and accumulated in the saved data storage area Do of the game.
  • the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment as an example of the processing of the cast determination process in the first embodiment (step 16 of FIG. 14 ) and the second game (the game executed in step 18 of FIG. 14 ). Further, the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment also as the first game according to the first embodiment (the game executed in step 106 of FIG. 15 and step 129 of FIG. 16 ).
  • FIG. 45 is an example of a screen displayed on the upper LCD 22 according to the present embodiment.
  • on the left of the screen shown in FIG. 45 , a list of face images that can be attached to enemy objects EO (a character column) is displayed.
  • the screen of the game apparatus 10 does not need to include the list of face images.
  • the list of face images may be displayed.
  • the display position of the list of face images is not limited to the left of the screen as shown in FIG. 45 .
  • in the character column, face images, such as the face images PS 1 through PS 5 , are displayed.
  • on the screen shown in FIG. 45 , an enemy object EO 20 and enemy objects EO 21 through EO 25 are displayed.
  • the enemy object EO 20 , the enemy objects EO 21 through EO 25 , and the like are, for example, enemy objects EO created on the screen shown in FIGS. 7 through 9 in the first embodiment, or in the selection operation as shown in the cast determination process in FIG. 18 .
  • the enemy object EO 20 is drawn larger at the center of the screen, and the enemy objects EO 21 through EO 25 and the like are drawn around the enemy object EO 20 .
  • to the respective enemy objects, the face images in the character column, such as the face images PS 1 and PS 2 , have been originally attached; for example, to the enemy object EO 25 , the face image PS 5 has been originally attached.
  • parts of the faces are switched between the enemy object EO 20 and the enemy objects EO 21 through EO 25 .
  • noses are switched between the enemy object EO 20 and the enemy object EO 22 .
  • left eyebrows and left eyes are switched between the enemy object EO 20 and the enemy object EO 25 .
  • the switching of parts of the faces may be, for example, performed on a polygon-by-polygon basis when the face images are texture-mapped, the polygons forming three-dimensional models onto which the face images are texture-mapped.
  • Such switching of parts of the faces may be performed by, for example, randomly changing the number of parts to be switched and target parts to be switched. Further, for example, the number of parts to be switched may be determined in accordance with a success or a failure in, and the score of, the game that has already been executed or another game. For example, when the performance, or the degree of achievement, of the user has been excellent in the game that has already been executed, the number of parts to be switched is decreased. When the performance, or the degree of achievement, of the user has been poor, the number of parts to be switched is increased.
  • the game may be divided into levels, and the number of parts to be switched may be changed in accordance with the level of the game. For example, the number of parts to be switched is decreased at an introductory level, whereas the number of parts to be switched is increased at an advanced level.
  • a face image of the user may be acquired by the inner capturing section 24 , face recognition may be performed, and the number of parts to be switched may be determined in accordance with the expression obtained from the recognition. For example, a determination may be made on: the case where the face image is smiling; the case where the face image is surprised; the case where the face image is sad; and the case where the face image is almost expressionless. Then, the number of parts to be switched may be determined in accordance with the determination result.
  • the expression of the face may be determined in accordance with: the dimensions of the eyes; the area of the mouth; the shape of the mouth; the positions of the contours of the cheeks relative to reference points, such as the centers of the eyes, the center of the mouth, and the nose; and the like.
  • an expressionless face image of the user may be registered in advance, and the expression of the user's face may be estimated from the difference values between: values obtained when a face image of the user has been newly acquired, such as the dimensions of the eyes, the area of the mouth, the shape of the mouth, the positions of the contours of the cheeks from the reference points, and the like; and values obtained from the face image registered in advance.
  • a method of estimating the expression of the face is not limited to the above procedure, and various procedures can be used.
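  • as one possible illustration of the expression-based determination described above, the following Python sketch compares feature values against a pre-registered expressionless face image; the feature keys, thresholds, and the mapping from expression to the number of parts to be switched are assumptions made for illustration, not values from the actual program.

```python
def estimate_expression(current, neutral):
    """Compare feature values of a newly acquired face image against those of
    a pre-registered expressionless face image (sketch only)."""
    diff = {key: current[key] - neutral[key] for key in neutral}
    if diff.get("mouth_area", 0.0) > 0.2:
        return "smiling"
    if diff.get("eye_size", 0.0) > 0.2:
        return "surprised"
    if diff.get("mouth_area", 0.0) < -0.2:
        return "sad"
    return "expressionless"

def parts_to_switch(expression):
    # Arbitrary mapping from the determined expression to a switch count.
    return {"smiling": 1, "surprised": 2, "sad": 3, "expressionless": 4}[expression]

# Example: compare a new face measurement with the registered neutral one.
neutral = {"eye_size": 1.0, "mouth_area": 1.0}
current = {"eye_size": 1.0, "mouth_area": 1.4}
count = parts_to_switch(estimate_expression(current, neutral))
```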
  • the user executes a game where the user fights with the enemy objects EO, parts of whose faces are switched as shown in FIG. 45 , by attacking the enemy objects EO.
  • the manner of making an attack is similar to those of the procedures described in the first through third embodiments. Then, when the user has won battles with the enemy objects EO, the switched parts may return to the original face images.
  • the process of step 90 is similar to the processes of step 30 of FIG. 42 and step 40 of FIG. 44 .
  • the information processing section 31 of the game apparatus 10 switches parts of the faces (step 91 ).
  • the information processing section 31 executes the game (step 92 ).
  • the information processing section 31 determines whether or not the game has been successful (step 93 ).
  • when the game has been successful, the information processing section 31 restores the faces, whose parts have been switched in the process of step 91 , to the faces in their originally captured states (step 94 ).
  • the game is started in the state where a face image of the user has been acquired by the inner capturing section 24 , and parts of the faces have been switched between the acquired face image and another face image. Then, when the user has succeeded in the game, for example, when the user has won battles with the enemy objects EO, the face images whose parts are switched are restored to the original face images.
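  • a compact Python sketch of the flow of steps 90 through 94 is shown below; swap_parts and play_battle are hypothetical stand-ins for the part-switching process and the battle with the enemy objects EO.

```python
import copy
import random

def swap_parts(faces, count):
    """Switch `count` face parts between randomly chosen pairs of face images
    (sketch: each face is represented as a dict of named parts)."""
    parts = ["left_eye", "right_eye", "nose", "mouth", "eyebrows"]
    for _ in range(count):
        a, b = random.sample(faces, 2)
        part = random.choice(parts)
        a[part], b[part] = b[part], a[part]
    return faces

def fourth_embodiment_game(faces, count, play_battle):
    originals = copy.deepcopy(faces)          # keep the originally captured faces
    swapped = swap_parts(faces, count)        # step 91: switch parts at the start
    success = play_battle(swapped)            # steps 92-93: fight the enemy objects
    return originals if success else swapped  # step 94: restore on success
```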
  • when the face image, a part of whose face is switched, is a face image of the user themselves, or is a face image of a person intimate with the user, the user is given a high motivation to succeed in the game.
  • parts of the faces are switched at the start of the game according to the present embodiment, in accordance with the performance in another game, and therefore, it is possible to give the user a handicap or an advantage based on the result of said another game. Further, parts of the faces are switched in accordance with the level of the game, and therefore, it is possible to represent the difficulty level of the game by the degree of the deformation of the faces.
  • in the cast determination process in step 16 of FIG. 14 and the process of the execution of the game in step 18 , the description is given assuming the case where the game is executed by creating enemy objects EO based on face images already stored in the saved data storage area Do of the game.
  • the processes of steps 100 through 105 of FIG. 15 may be performed. That is, the game according to the present embodiment may be executed in the state where a face image has been acquired in the face image acquisition process, but has yet to be stored in the saved data storage area Do of the game. Then, as in steps 107 through 110 of FIG. 15 , when the game has been successful, the acquired face image may be stored in the saved data storage area Do of the game.
  • the user tackles the game increasingly enthusiastically in order to save the acquired face image.
  • in the above embodiments, the face image, which is an acquisition target, is acquired in the face image acquisition process that uses a camera image captured by the inner capturing section 24 or the outer capturing section 23 . Thereafter, the game is executed, and when the user has succeeded in the game, permission is given to store the face image in the saved data storage area Do.
  • in the fifth embodiment, by contrast, a face image, which is an acquisition target, is acquired from a camera image captured by the inner capturing section 24 or the outer capturing section 23 during the execution of a predetermined game, and when the user has succeeded in the game, permission is given to store the face image acquired during the game in the saved data storage area Do. That is, in the fifth embodiment, a face image is acquired from a camera image captured during the execution of a predetermined game, and when conditions for succeeding in acquiring the face image have been satisfied in the game, permission is given to store the face image in the saved data storage area Do. Accordingly, this game serves as a game for obtaining permission to store the face image in the saved data storage area Do.
  • the game according to the present embodiment is also provided to the user by the information processing section 31 of the game apparatus 10 executing the image processing program expanded in the main memory 32 .
  • a face image is acquired during the execution of the second game described above (step 18 of FIG. 14 ).
  • FIG. 47 is an example of a screen displayed on the upper LCD 22 of the game apparatus 10 according to the present embodiment.
  • the procedure of the creation of this screen is similar to that of the first embodiment. That is, the case is assumed where, for example, the user holds the lower housing 11 with both hands as shown in FIG. 4 , such that the lower housing 11 and the upper housing 21 of the game apparatus 10 are in the open state. At this time, the user can view the display screen of the upper LCD 22 . Further, in this state, the outer capturing section 23 can, for example, capture space ahead in the line of sight of the user.
  • the game apparatus 10 displays in the background of the screen a camera image CI captured by the outer capturing section 23 .
  • the information processing section 31 texture-maps, on a frame-by-frame basis, an image captured by the outer capturing section 23 onto the background portion of the screen of the game. That is, on the upper LCD 22 , a real-time real world image (moving image) captured by the real camera built into the game apparatus 10 is displayed in the background portion.
  • when the user has changed the direction in which the outer capturing section 23 is directed, an image acquired through the outer capturing section 23 in the direction of the line of sight of the outer capturing section 23 after the change is displayed in the background of the game. That is, in the background of the screen shown in FIG. 47 , an image acquired from the direction in which the user has directed the outer capturing section 23 of the game apparatus 10 is embedded.
  • a person facing the outer capturing section 23 in a full-face manner is included as a subject of the camera image CI displayed as the background of the screen.
  • the enemy objects EO and the aiming cursor AL are displayed, the enemy objects EO and the aiming cursor AL created in accordance with the procedure described in the above embodiments. Display is performed such that face images selected in the cast determination process and the like are texture-mapped on the facial surface portions of the enemy objects EO. Then, when the user of the game apparatus 10 has pressed the operation button 14 B (A button) corresponding to a trigger button in the state where the aiming cursor AL is pointed at the enemy objects EO, the user can fire a bullet at the enemy objects EO.
  • the game apparatus 10 sequentially performs a predetermined face recognition process on the camera image CI captured by the real camera (e.g., the outer capturing section 23 ), and determines the presence or absence of a person's face in the camera image CI. Then, when the game apparatus 10 has determined in the face recognition process that a person's face is present in the camera image CI, and conditions for the appearance of an acquisition target object AO have been satisfied, an acquisition target object AO appears from the portion recognized as a face in the camera image CI.
  • the acquisition target object AO is displayed by texture-mapping a face image extracted from the camera image CI onto a predetermined portion of predetermined polygons (e.g., the facial surface portion of a three-dimensional model representing a human head shape).
  • the acquisition target object AO is displayed by attaching, as a texture, an image of the portion recognized as a face in the camera image CI to the surface of a three-dimensional model of a head shape formed by combining a plurality of polygons.
  • the acquisition target object AO is not limited to one obtained by texture-mapping an image of a recognized face onto a three-dimensional model.
  • the acquisition target object AO may be displayed as a plate physical body, to the main surface of which the image of the portion recognized as a face that has been clipped from the camera image CI is attached, or may be displayed as an image simply held in a two-dimensional pixel array.
  • the acquisition target object AO is placed in the virtual space described above, and an image of the virtual space (virtual world image), in which the acquisition target object AO and/or the enemy objects EO are viewed from the virtual camera, is combined with a real world image obtained from the camera image CI, whereby display is performed on the upper LCD 22 as if the acquisition target object AO and/or the enemy objects EO are placed in real space.
  • a bullet object BO is fired in the direction of the aiming cursor AL, and the acquisition target object AO also serves as a target of attack for the user. Then, when the user has won a battle with the acquisition target object AO, the user can store in the saved data storage area Do the face image attached to the acquisition target object AO.
  • a face image used for the acquisition target object AO may be a face image obtained from a face recognized in the camera image CI (a still image), or may be a face image obtained from a face recognized by repeatedly performing face recognition on the repeatedly captured camera image CI (a moving image).
  • when the expression and the like of the person's face repeatedly captured in the camera image CI have changed, the changes are reflected on a texture of the acquisition target object AO. That is, it is possible to reflect in real time the expression of the person captured by the real camera of the game apparatus 10 , on the expression of the face image attached to the acquisition target object AO.
  • the acquisition target object AO that appears from the portion recognized as a face in the camera image CI may be placed so as to always overlap the recognized portion when displayed in combination with the camera image CI.
  • changes in the direction and the position of the game apparatus 10 (i.e., the direction and the position of the outer capturing section 23 ) in real space also change the imaging range captured by the game apparatus 10 , and therefore also change the camera image CI displayed on the upper LCD 22 .
  • the game apparatus 10 changes the position and the direction of the virtual camera in the virtual space in accordance with the motion of the game apparatus 10 in real space.
  • the acquisition target object AO displayed as if placed in real space is displayed as if placed at the same position in real space even when the direction and the position of the game apparatus 10 have changed in real space.
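  • a small sketch of this camera-tracking idea follows; the angular-velocity input and the simple Euler-angle camera state are assumptions, since the actual program may track the orientation of the game apparatus 10 differently.

```python
def update_virtual_camera(camera, angular_velocity, dt):
    """Rotate the virtual camera by the same amount the game apparatus
    (and thus the outer capturing section) rotated in real space, so that
    objects placed in the virtual space appear fixed in real space."""
    camera["yaw"]   += angular_velocity["yaw"] * dt
    camera["pitch"] += angular_velocity["pitch"] * dt
    camera["roll"]  += angular_velocity["roll"] * dt
    return camera

# Example: 30 degrees/second of yaw over one 1/60-second frame.
camera = {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}
update_virtual_camera(camera, {"yaw": 30.0, "pitch": 0.0, "roll": 0.0}, 1 / 60)
```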
  • on the upper LCD 22 , a real-time real world image captured by the real camera built into the game apparatus 10 is displayed, and therefore, a subject may move in real space.
  • the game apparatus 10 sequentially performs a face recognition process on the repeatedly captured camera image CI, and thereby sequentially places the acquisition target object AO in the virtual space such that the acquisition target object AO is displayed so as to overlap the position of the recognized face when combined with the camera image CI.
  • the acquisition target object AO displayed on the upper LCD 22 may be displayed by, for example, enlarging, reducing, or deforming the face image actually captured and displayed in the camera image CI, or may be displayed by changing the display direction of the model to which the face image is attached.
  • such image processing differentiates the actually captured face image from the acquisition target object AO, and therefore enables the user of the game apparatus 10 to easily determine that the acquisition target object AO has appeared from the camera image CI.
  • FIG. 49 is a subroutine flow chart showing an example of a detailed operation of a during-game face image acquisition process performed by executing the image processing program.
  • FIG. 50 is a subroutine flow chart showing an example of a detailed operation of a yet-to-appear process performed in step 202 of FIG. 49 .
  • FIG. 51 is a subroutine flow chart showing an example of a detailed operation of an already-appeared process performed in step 208 of FIG. 49 .
  • programs for performing these processes are included in a memory built into the game apparatus 10 (e.g., the data storage internal memory 35 ), or included in the external memory 45 or the data storage external memory 46 , and the programs are: loaded from the built-in memory, or loaded from the external memory 45 through the external memory I/F 33 or from the data storage external memory 46 through the data storage external memory I/F 34 , into the main memory 32 when the game apparatus 10 is turned on; and executed by the CPU 311 .
  • the processing operations performed by executing the image processing program according to the fifth embodiment are performed as follows.
  • a during-game face image acquisition process described later is performed during the game processing described with reference to FIG. 29 , once in each cycle of the game processing (e.g., performed once during steps 52 through 59 ).
  • the processing operations added to the first embodiment are described, and other processing operations are not described in detail.
  • various data stored in the main memory 32 in accordance with the execution of the image processing program according to the fifth embodiment is similar to the various data stored in accordance with the execution of the image processing program according to the first embodiment, except that appearance flag data, face recognition data, and acquisition target object data are further stored.
  • the appearance flag data indicates an appearance flag indicating whether the current state of the appearance of the acquisition target object AO is “yet to appear”, “during appearance”, or “already appeared”, and the appearance flag is set to “yet to appear” in the initialization in step 51 described above ( FIG. 29 ).
  • the face recognition data indicates the most recent face image obtained from faces sequentially recognized in the repeatedly captured camera image CI, and the position of the face image in the camera image CI.
  • the acquisition target object data includes: data of a three-dimensional model corresponding to the acquisition target object AO; texture data for performing mapping on the three-dimensional model; data indicating the placement direction and the placement position of the three-dimensional model; and the like.
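  • the additional data described above could be modeled roughly as follows; the field names and types are assumptions chosen only to mirror the description, not definitions taken from the actual program.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

APPEARANCE_STATES = ("yet to appear", "during appearance", "already appeared")

@dataclass
class FaceRecognitionData:
    face_image: Optional[bytes] = None           # most recent recognized face image
    position: Optional[Tuple[int, int]] = None   # its position in the camera image CI

@dataclass
class AcquisitionTargetObjectData:
    model: Optional[object] = None               # three-dimensional model of the object AO
    texture: Optional[bytes] = None              # texture mapped onto the model
    placement_position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    placement_direction: Tuple[float, float, float] = (0.0, 0.0, 1.0)

appearance_flag = "yet to appear"                # set in the initialization of step 51
```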
  • the information processing section 31 determines whether or not the acquisition target object AO has yet to appear (step 201 ). For example, with reference to the appearance flag data, the information processing section 31 makes a determination in step 201 described above, based on whether or not the appearance flag is set to “yet to appear”. When the acquisition target object AO has yet to appear, the information processing section 31 proceeds to the subsequent step 202 . On the other hand, when the acquisition target object AO is not in the state of having yet to appear, the information processing section 31 proceeds to the subsequent step 203 .
  • in step 202 , the information processing section 31 performs a yet-to-appear process, and proceeds to the subsequent step 203 .
  • with reference to FIG. 50 , a description is given below of the yet-to-appear process performed by the information processing section 31 in step 202 described above.
  • the information processing section 31 performs a predetermined face recognition process on the camera image indicated by the real camera image data Db, stores the face recognition result in the main memory 32 (step 211 ), and proceeds to the subsequent step.
  • the face recognition process may be performed sequentially by the information processing section 31 , using the camera image, independently of the processing of the flow chart shown in FIG. 50 .
  • in this case, the information processing section 31 acquires the face recognition result in step 211 described above, and stores the face recognition result in the main memory 32 .
  • the information processing section 31 determines whether or not conditions for the appearance of the acquisition target object AO in the virtual space have been satisfied (step 212 ).
  • the conditions for the appearance of the acquisition target object AO, on an essential condition that a person's face has been recognized in the camera image in step 211 described above, may be: that the acquisition target object AO appears only once from the start to the end of the game; that the acquisition target object AO appears at predetermined time intervals; that in accordance with the disappearance of the acquisition target object AO from the virtual world, a new acquisition target object AO appears; or that the acquisition target object AO appears at a random time.
  • when the conditions for the appearance of the acquisition target object AO have been satisfied, the information processing section 31 proceeds to the subsequent step 213 . On the other hand, when the conditions have not been satisfied, the information processing section 31 ends the process of this subroutine.
  • in step 213 , the information processing section 31 sets an image of the face recognized in the face recognition process in step 211 described above, as a texture of the acquisition target object AO, and proceeds to the subsequent step. For example, in the camera image indicated by the camera image data Db, the information processing section 31 sets an image included in the region of the face indicated by the face recognition result of the face recognition process in step 211 described above, as a texture of the acquisition target object AO, to thereby update the acquisition target object data using the set texture.
  • the information processing section 31 sets the acquisition target object AO, using the face image obtained from the face recognized in the face recognition process in step 211 (step 214 ), and proceeds to the subsequent step.
  • the information processing section 31 sets the size and the shape of a polygon (e.g., a planar polygon) corresponding to the state of the start of the appearance of the acquisition target object AO, and sets the acquisition target object AO corresponding to the state of the start of the appearance by attaching the texture of the face image set in step 213 to the main surface of the polygon, to thereby update the acquisition target object data.
  • a polygon e.g., a planar polygon
  • the information processing section 31 newly places the acquisition target object AO in the virtual space (step 215 ), and proceeds to the subsequent step. For example, when the camera image is displayed on the upper LCD 22 , the information processing section 31 places the acquisition target object AO at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 211 , to thereby update the acquisition target object data.
  • an image is generated by rendering with a perspective projection from the virtual camera the virtual space where the acquisition target object AO is newly placed in addition to the enemy objects EO, and a display image including at least the generated image is displayed.
  • the information processing section 31 places the acquisition target object AO in the virtual space such that the acquisition target object AO overlaps the region corresponding to the face image in the boundary surface 3 on which the texture of the camera image is mapped, and performs a perspective projection on the placed acquisition target object AO from the virtual camera.
  • the method of placing the acquisition target object AO in the virtual space is similar to the example of the placement of the enemy object EO described with reference to FIGS. 33 through 39 , and therefore is not described in detail.
  • the information processing section 31 sets the appearance flag to “during appearance” to thereby update the appearance flag data (step 216 ), and ends the process of this subroutine.
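  • putting steps 211 through 216 together, a hedged Python sketch of the yet-to-appear process might look like the following; recognize_face, appearance_conditions_met, make_planar_polygon, and project_to_virtual_space are placeholders for processing the description leaves to the face recognition and rendering code.

```python
def recognize_face(camera_image):
    # Placeholder for the predetermined face recognition process (step 211).
    return None

def appearance_conditions_met(state):
    # e.g., appear only once per game, at fixed intervals, or at a random time (step 212).
    return True

def make_planar_polygon(size):
    return {"shape": "plane", "size": size}

def project_to_virtual_space(position_in_camera_image):
    # Placeholder: choose a virtual-space position whose perspective projection
    # overlaps the recognized face in the displayed camera image (step 215).
    return position_in_camera_image

def yet_to_appear_process(camera_image, state):
    face = recognize_face(camera_image)                     # step 211
    state["face_recognition"] = face
    if face is None or not appearance_conditions_met(state):
        return                                              # conditions not satisfied
    state["ao_texture"] = face["image"]                     # step 213: face image as texture
    state["ao_model"] = make_planar_polygon(face["size"])   # step 214: start-of-appearance shape
    state["ao_position"] = project_to_virtual_space(face["position"])
    state["appearance_flag"] = "during appearance"          # step 216
```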
  • in step 203 , the information processing section 31 determines whether or not the acquisition target object AO is appearing. For example, with reference to the appearance flag data, the information processing section 31 makes a determination in step 203 described above, based on whether or not the appearance flag is set to “during appearance”. When the acquisition target object AO is appearing, the information processing section 31 proceeds to the subsequent step 204 . On the other hand, when the acquisition target object AO is not appearing, the information processing section 31 proceeds to the subsequent step 207 .
  • in step 204 , the information processing section 31 performs a during-appearance process, and proceeds to the subsequent step.
  • the information processing section 31 represents the state of the acquisition target object AO appearing, by gradually changing the face image included in the camera image to a three-dimensional object.
  • the information processing section 31 sets the face image as a texture of the acquisition target object AO, based on the result of a face recognition performed on the camera image.
  • the information processing section 31 sets the acquisition target object AO by performing a morphing process for changing a planar polygon to predetermined three-dimensional polygons (e.g., a three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape). Then, as in step 215 , the information processing section 31 places the acquisition target object AO subjected to the morphing process at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 204 , to thereby update the acquisition target object data.
  • the acquisition target object AO appears from the image of the face recognized in the real world image, the acquisition target object AO is represented so as to gradually change from planar to three-dimensional in the face image, by performing such a morphing process.
  • the three-dimensional polygons to which the planar polygon is changed by the morphing process may have various shapes.
  • for example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to three-dimensional polygons having the shape of the head of a predetermined character. In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto the facial surface of the head-shaped polygons.
  • as another example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to plate polygons having a predetermined thickness. In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto the main surface of the plate polygons.
  • as still another example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to three-dimensional polygons having the shape of a predetermined weapon (e.g., missile-shaped polygons). In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto a part of the weapon-shaped polygons (e.g., mapped onto the missile-shaped polygons at the head of the missile).
  • the information processing section 31 determines whether or not the during-appearance process on the acquisition target object AO has ended (step 205 ). For example, when the morphing process on the acquisition target object AO has reached its final stage, the information processing section 31 determines that the during-appearance process has ended. Then, when the during-appearance process on the acquisition target object AO has ended, the information processing section 31 proceeds to the subsequent step 206 . On the other hand, when the during-appearance process on the acquisition target object AO has not ended, the information processing section 31 proceeds to the subsequent step 207 .
  • the information processing section 31 determines that the morphing process on the acquisition target object AO is at the final stage.
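  • the gradual change from the planar polygon to the three-dimensional shape can be pictured as a simple vertex interpolation; the vertex layout and the 0-to-1 progress value in the sketch below are assumptions made for illustration.

```python
def morph_vertices(plane_vertices, target_vertices, progress):
    """Linearly interpolate each vertex from the flat start shape toward the
    target three-dimensional model; progress runs from 0.0 to 1.0, and the
    during-appearance process ends once it reaches the final stage (1.0)."""
    return [
        tuple(p + (t - p) * progress for p, t in zip(pv, tv))
        for pv, tv in zip(plane_vertices, target_vertices)
    ]

# Example: halfway through the appearance of a two-vertex edge.
morph_vertices([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
               [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)], 0.5)
```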
  • in step 206 , the information processing section 31 sets the appearance flag to “already appeared” to thereby update the appearance flag data, and proceeds to the subsequent step 207 .
  • in step 207 , the information processing section 31 determines whether or not the acquisition target object AO has already appeared. For example, with reference to the appearance flag data, the information processing section 31 makes a determination in step 207 described above, based on whether or not the appearance flag is set to “already appeared”. When the acquisition target object AO has already appeared, the information processing section 31 proceeds to the subsequent step 208 . On the other hand, when the acquisition target object AO has not already appeared, the information processing section 31 ends the process of this subroutine.
  • in step 208 , the information processing section 31 performs an already-appeared process, and ends the process of this subroutine.
  • with reference to FIG. 51 , a description is given below of the already-appeared process performed by the information processing section 31 in step 208 described above.
  • the information processing section 31 performs a predetermined face recognition process on the camera image indicated by the real camera image data Db, stores the face recognition result as face recognition data in the main memory 32 (step 221 ), and proceeds to the subsequent step.
  • the face recognition process may also be performed sequentially by the information processing section 31 , using the camera image, independently of the processing of the flow chart shown in FIG. 51 .
  • in this case, the information processing section 31 acquires the face recognition result in step 221 described above, and stores the face recognition result as face recognition data in the main memory 32 .
  • the information processing section 31 sets an image of the face recognized in the face recognition process in step 221 described above (an image included in the face area in the camera image), as a texture of the acquisition target object AO (step 222 ), and proceeds to the subsequent step. For example, in the camera image indicated by the real camera image data Db, the information processing section 31 sets an image included in the region of the face indicated by the face recognition result of the face recognition process in step 221 described above, as a texture of the acquisition target object AO, to thereby update the acquisition target object data using the set texture.
  • the information processing section 31 sets the acquisition target object AO corresponding to the region of the image of the face recognized in the face recognition process in step 221 described above (step 223 ), and proceeds to the subsequent step.
  • the information processing section 31 sets the acquisition target object AO by attaching the texture of the face image set in step 222 to the facial surface portion of a three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape, to thereby update the acquisition target object data.
  • the polygons to which the face image obtained from the face recognized in the face recognition process in step 221 is attached as a texture may be, for example, enlarged, reduced, or deformed, or the texture of the face image may be deformed.
  • the information processing section 31 places the acquisition target object AO set in step 223 described above in the virtual space (step 224 ), and proceeds to the subsequent step. For example, as in step 215 , when the camera image is displayed on the upper LCD 22 , the information processing section 31 places the acquisition target object AO at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 221 , to thereby update the acquisition target object data.
  • the acquisition target object AO may be placed such that the facial surface portion to which the texture of the face image is attached opposes the virtual camera, or the orientation of the acquisition target object AO may be changed to a given direction in accordance with the progression of the game.
  • the information processing section 31 determines whether or not the acquisition target object AO and the bullet object BO have made contact with each other in the virtual space (step 225 ). For example, using the position of the acquisition target object AO indicated by the acquisition target object data and the position of the bullet object BO indicated by the bullet object data Dg, the information processing section 31 determines whether or not the acquisition target object AO and the bullet object BO have made contact with each other in the virtual space. When the acquisition target object AO and the bullet object BO have made contact with each other, the information processing section 31 proceeds to the subsequent step 226 . On the other hand, when the acquisition target object AO and the bullet object BO have not made contact with each other, the information processing section 31 proceeds to the subsequent step 229 .
  • in step 226 , the information processing section 31 performs a point addition process, and proceeds to the subsequent step.
  • the information processing section 31 adds predetermined points to the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the addition.
  • the information processing section 31 performs a process of causing the bullet object BO having made contact based on the determination in step 225 described above, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO having made contact with the acquisition target object AO, such that the bullet object BO is not present in the virtual space).
  • the information processing section 31 determines whether or not the acquisition of the face image attached to the acquisition target object AO having made contact with the bullet object BO has been successful (step 227 ). As an example of means for determining whether or not the acquisition of the face image has been successful, the information processing section 31 performs the process of step 227 . Then, when the acquisition of the face image has been successful, the information processing section 31 proceeds to the subsequent step 228 . On the other hand, when the acquisition of the face image has not been successful, the information processing section 31 proceeds to the subsequent step 229 .
  • a success in the acquisition of the face image is, for example, the case where the user has won a battle with the acquisition target object AO.
  • a predetermined life value for existing in the virtual space is set for the acquisition target object AO, and when the acquisition target object AO has made contact with the bullet object BO, a predetermined number is subtracted from the life value. Then, when the life value of the acquisition target object AO has become 0 or below, the acquisition target object AO is caused to disappear from the virtual space, and it is determined that the acquisition of the face image attached to the acquisition target object AO has been successful.
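  • the contact test and the life-value handling of steps 225 through 227 might be sketched as follows; the hit radius, point value, and dictionary-based object representation are assumptions made only for illustration.

```python
def on_bullet_fired(ao, bullet, score, points=10, hit_radius=1.0):
    """Return the updated score and whether the face image was acquired (sketch)."""
    # step 225: contact test between the acquisition target object AO and the bullet object BO
    distance = sum((a - b) ** 2 for a, b in zip(ao["position"], bullet["position"])) ** 0.5
    if distance > hit_radius:
        return score, False
    score += points                        # step 226: point addition
    ao["life"] -= 1                        # subtract from the AO's life value
    acquired = ao["life"] <= 0             # step 227: acquisition successful when life reaches 0
    return score, acquired

# Example usage with illustrative objects.
ao = {"position": (0.0, 0.0, 5.0), "life": 1}
bullet = {"position": (0.2, 0.0, 5.0)}
score, acquired = on_bullet_fired(ao, bullet, score=0)
```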
  • in step 228 , when the acquisition of the face image has been successful, the information processing section 31 saves the data that indicates the face image obtained from the face recognized in step 221 and is stored in the main memory 32 , in addition to the data of the face images that have been saved in the saved data storage area Do up to the current time, and proceeds to the subsequent step 229 .
  • the CPU 311 of the information processing section 31 performs the process of step 228 .
  • the saved data storage area Do is a storage area in which the information processing section 31 can write and read and which is constructed in, for example, the data storage internal memory 35 or the data storage external memory 46 .
  • the information processing section 31 can display the data of the new face image on the screen of the upper LCD 22 , for example, in addition to the list of face images described with reference to FIGS. 7 and 8 .
  • to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates and saves the face image management information Dn 1 described with reference to FIG. 12 . That is, the information processing section 31 newly generates face image identification information, and sets the face image identification information as a record of the face image management information Dn 1 . Further, the information processing section 31 sets the address and the like of the face image newly saved in the saved data storage area Do, as the address of face image data. Furthermore, the information processing section 31 sets the source of acquiring the face image, the estimation of gender, the estimation of age, pieces of related face image identification information 1 through N, and the like.
  • the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn 2 described with reference to FIG. 13 . That is, the information processing section 31 may newly estimate the gender, the age, and the like of the face image added to the saved data storage area Do, and may reflect the estimations on the aggregate result of the face image attribute aggregate table Dn 2 .
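  • the bookkeeping around a newly saved face image could be sketched as below; the record fields follow the management information described above, but the function names and the attribute-estimation placeholders are assumptions and do not reflect the actual implementation.

```python
def estimate_gender(face_image):
    return "unknown"        # placeholder for the actual gender estimation

def estimate_age(face_image):
    return None             # placeholder for the actual age estimation

def save_acquired_face(saved_area, management_table, attribute_totals, face_image, source):
    """Append the face image to the saved data storage area Do and register
    management information for it (sketch only)."""
    saved_area.append(face_image)                         # step 228: add to the saved data
    record = {
        "face_image_id": len(management_table) + 1,       # newly generated identification info
        "address": len(saved_area) - 1,                   # address of the saved face image data
        "source": source,                                 # e.g., acquired during the game
        "estimated_gender": estimate_gender(face_image),
        "estimated_age": estimate_age(face_image),
        "related_face_image_ids": [],
    }
    management_table.append(record)
    # reflect the new estimations on the attribute aggregate result
    key = (record["estimated_gender"], record["estimated_age"])
    attribute_totals[key] = attribute_totals.get(key, 0) + 1
    return record
```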
  • the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do, or transfer the data through the wireless communication module 36 . Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14 .
  • the information processing section 31 may cause the acquisition target object AO that is the target used to succeed in the acquisition of the face image, to disappear from the virtual space.
  • the information processing section 31 initializes the acquisition target object data concerning the acquisition target object AO that is the target used to succeed in the acquisition of the face image, such that the acquisition target object AO is not present in the virtual space.
  • In step 229, the information processing section 31 determines whether or not the acquisition of the face image attached to the acquisition target object AO present in the virtual space has failed. Then, when the acquisition of the face image attached to the acquisition target object AO has failed, the information processing section 31 proceeds to the subsequent step 230. It should be noted that in the case where a plurality of acquisition target objects AO are present, when the acquisition of any one of the face images attached to the acquisition target objects AO has failed, the information processing section 31 proceeds to the subsequent step 230. On the other hand, when the acquisition of none of the face images attached to the acquisition target objects AO has failed, the information processing section 31 ends the process of this subroutine.
  • a failure in the acquisition of the face image is, for example, the case where the user has lost a battle with the acquisition target object AO.
  • When the acquisition target object AO has continued to be present in the virtual space for a predetermined time or longer, it is determined that the acquisition of the face image attached to the acquisition target object AO has failed.
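  • Purely as an illustration, the time-limit failure rule described above can be sketched as follows; the 60-second limit and the identifiers are assumptions.

```cpp
// Sketch of the failure rule: if the acquisition target object has remained in
// the virtual space for a predetermined time or longer, the acquisition of its
// face image is treated as failed.
#include <chrono>

struct AcquisitionTarget {
    std::chrono::steady_clock::time_point appearedAt = std::chrono::steady_clock::now();
    bool present = true;
};

bool acquisitionFailed(const AcquisitionTarget& ao,
                       std::chrono::seconds limit = std::chrono::seconds(60)) {
    return ao.present &&
           (std::chrono::steady_clock::now() - ao.appearedAt) >= limit;
}
```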
  • In step 230, when the acquisition of the face image has failed, the information processing section 31 discards the data that indicates the face image obtained from the face recognized in step 221 described above and is stored in the main memory 32, and ends the process of the subroutine.
  • the information processing section 31 may cause the acquisition target object AO that is the target used to fail in the acquisition of the face image, to disappear from the virtual space.
  • the information processing section 31 initializes the acquisition target object data concerning the acquisition target object AO that is the target used to fail in the acquisition of the face image, such that the acquisition target object AO is not present in the virtual space.
  • a face image obtained from a face recognized in a camera image captured during the game where the user attacks the enemy objects EO serves as a target to be newly saved in the saved data storage area Do. Then, to save a face image acquired during the game in the saved data storage area Do, the user must achieve a game result sufficient to succeed in the acquisition of that face image in the game being executed.
  • display is performed such that a virtual world image, showing an acquisition target object represented as if a face image in the real world image slides out of it, is superimposed on a real world image obtained from a real camera. This makes it possible to display a new image as if the acquisition target object were present in real space.
  • the acquisition target object AO to which the face image acquired during the game is attached is caused to appear, whereby the user who executes the game with the game apparatus 10 can collect a new face image and add the new face image to the saved data storage area Do, while reflecting the real world during the execution of the game and human relationships in the real world.
  • permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do.
  • permission may be given to store the face image in the saved data storage area Do, by executing another game where the user fights with the acquisition target object AO.
  • the acquisition target object AO appears, to which a face image included in a camera image captured during the game is attached. Then, when the user has scored more points than the acquisition target object AO that has appeared during the game, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do.
  • the acquisition target object AO appears, to which a face image included in a camera image captured during the game is attached. Then, when the user has overcome the obstacles set by the acquisition target object AO that has appeared during the game, and the user has reached a goal, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do.
  • a face image acquired in the image processing based on the flow chart shown in FIG. 14 (a face image acquired in the face image acquisition process before the execution of the game for storing a face image in the saved data storage area Do, in the first through fourth embodiments; and a face image acquired during the execution of the game for storing a face image in the saved data storage area Do, in the fifth embodiment) serves as a target to be stored in the saved data storage area Do.
  • a face image already acquired in an application different from the application of the image processing may serve as a target to be stored in the saved data storage area Do.
  • the game apparatus 10 includes a capturing section (camera), and therefore, based on a camera capturing application different from the application of the image processing based on the flow chart shown in FIG. 14, can capture an image with the capturing section, display the captured image on a screen, and save data of the captured image in a storage medium, such as the data storage internal memory 35 and the data storage external memory 46. Further, the game apparatus 10 can receive data including a captured image from another device, and can also save the received data in a storage medium, such as the data storage internal memory 35 and the data storage external memory 46, by executing a communication application. As described above, a face image obtained from a face recognized in an image obtained in advance by executing the camera capturing application or the communication application may serve as a target to be stored in the saved data storage area Do.
  • a face image already acquired by an application different from the application of the image processing serves as a target to be stored in the saved data storage area Do
  • at least one face image is extracted by performing a face recognition process on photographed images saved during the execution of the different application, and the extracted face image serves as a target to be stored.
  • the character is displayed on the upper LCD 22 and/or the lower LCD 12 (e.g., steps 30 , 40 , 90 , 103 , 126 , 140 , 160 , and 162 )
  • at least one character including a face image acquired in advance by executing the different application is also displayed as a selection target.
  • the face image is stored in the saved data storage area Do.
  • a face image already acquired by an application different from the application of the image processing also serves as a target to be stored in the saved data storage area Do. This increases the variations of face images that can be acquired by the user, and therefore makes it easy to collect face images. Additionally, a face image unexpected by the user is suddenly added as a target to participate in the game, and therefore, it is also possible to prevent weariness in collecting face images.
  • the angular velocities generated in the game apparatus 10 are detected, and the motion of the game apparatus 10 in real space is calculated using the angular velocities.
  • the motion of the game apparatus 10 may be calculated using another method.
  • the motion of the game apparatus 10 may be calculated using the accelerations detected by the acceleration sensor 39 built into the game apparatus 10 .
  • the computer performs processing on the assumption that the game apparatus 10 having the acceleration sensor 39 is in a static state (i.e., performs processing on the assumption that the acceleration detected by the acceleration sensor 39 is the gravitational acceleration only)
  • When the game apparatus 10 is actually in a static state, it is possible to determine, based on the detected acceleration, whether or not the game apparatus 10 is tilted relative to the direction of gravity, and also possible to determine to what degree the game apparatus 10 is tilted.
  • the acceleration sensor 39 detects the acceleration corresponding to the motion of the acceleration sensor 39 in addition to a component of the gravitational acceleration.
  • When the game apparatus 10 having the acceleration sensor 39 is moved by being dynamically accelerated with the user's hand, it is possible to calculate various motions and/or positions of the game apparatus 10 by processing the acceleration signals generated by the acceleration sensor 39. It should be noted that even when it is assumed that the acceleration sensor 39 is in a dynamic state, it is possible to determine the tilt of the game apparatus 10 relative to the direction of gravity by removing the acceleration corresponding to the motion of the acceleration sensor 39 by a predetermined process.
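  • As a minimal sketch under the static-state assumption described above (not the patent's actual computation), the tilt of the apparatus can be derived from a 3-axis acceleration sample by measuring the angle between the detected vector and an assumed "down" axis; the axis convention and the example values are assumptions.

```cpp
// Tilt estimate from gravity only: valid when the detected acceleration is
// (approximately) the gravitational acceleration.
#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

double tiltFromGravityDeg(const Vec3& accel) {
    const double kPi = 3.14159265358979323846;
    double norm = std::sqrt(accel.x * accel.x + accel.y * accel.y + accel.z * accel.z);
    if (norm == 0.0) return 0.0;
    double cosTheta = std::clamp(-accel.y / norm, -1.0, 1.0);  // dot product with (0, -1, 0)
    return std::acos(cosTheta) * 180.0 / kPi;
}

int main() {
    Vec3 atRest {0.0, -9.8,  0.0};   // gravity only: 0 degrees of tilt
    Vec3 tilted {4.9, -8.49, 0.0};   // gravity measured on tilted axes: about 30 degrees
    std::cout << tiltFromGravityDeg(atRest) << " deg\n"
              << tiltFromGravityDeg(tilted) << " deg\n";
}
```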
  • the motion of the game apparatus 10 may be calculated using the amount of movement of a camera image captured in real time by the real camera built into the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24 ).
  • As the game apparatus 10 moves, the camera image captured by the real camera also changes. Accordingly, it is possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, using changes in the camera image captured by the real camera built into the game apparatus 10.
  • a predetermined physical body is recognized in a camera image captured by the real camera built into the game apparatus 10 , and the imaging angles and the imaging positions of the physical body are chronologically compared to one another.
  • the entire camera images captured by the real camera built into the game apparatus 10 are chronologically compared to one another. This makes it possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, from the amounts of changes in the imaging direction and the imaging range in the entire image.
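  • One assumption-laden illustration (not the patent's method) of comparing entire camera images chronologically is a brute-force block match: the previous frame is compared against shifted versions of the current frame, and the shift with the smallest sum of absolute differences approximates the translation caused by moving the camera.

```cpp
// Global-shift estimate between two consecutive grayscale frames.
#include <cstdint>
#include <cstdlib>
#include <limits>
#include <utility>
#include <vector>

using Gray = std::vector<std::uint8_t>;  // row-major grayscale frame

std::pair<int, int> estimateShift(const Gray& prev, const Gray& curr,
                                  int w, int h, int maxShift = 8) {
    long best = std::numeric_limits<long>::max();
    std::pair<int, int> bestShift{0, 0};
    for (int dy = -maxShift; dy <= maxShift; ++dy) {
        for (int dx = -maxShift; dx <= maxShift; ++dx) {
            long sad = 0;
            for (int y = maxShift; y < h - maxShift; ++y)
                for (int x = maxShift; x < w - maxShift; ++x)
                    sad += std::abs(int(prev[y * w + x]) -
                                    int(curr[(y + dy) * w + (x + dx)]));
            if (sad < best) { best = sad; bestShift = {dx, dy}; }
        }
    }
    return bestShift;  // approximate movement of the imaging direction/position
}
```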
  • the motion of the game apparatus 10 may be calculated by combining at least two of: the angular velocities generated in the game apparatus 10; the accelerations generated in the game apparatus 10; and a camera image captured by the game apparatus 10. This makes it possible, in a state where the motion of the game apparatus 10 is difficult to estimate from any one of these parameters alone, to calculate the motion by combining that parameter with another, thereby compensating for such a state.
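  • A common way to combine two such parameters is a complementary filter; the sketch below blends the integrated angular velocity with the tilt derived from gravity, and is offered only as an illustration with an assumed blending factor.

```cpp
// Complementary filter: gyro drift is corrected by the gravity-based tilt.
struct TiltEstimator {
    double tiltDeg = 0.0;   // current tilt estimate about one axis
    double alpha   = 0.98;  // trust placed in the integrated angular velocity

    // gyroDegPerSec: angular velocity about the axis,
    // accelTiltDeg: tilt derived from the acceleration (gravity) alone,
    // dt: frame time in seconds.
    void update(double gyroDegPerSec, double accelTiltDeg, double dt) {
        double gyroTilt = tiltDeg + gyroDegPerSec * dt;            // integrate the gyro
        tiltDeg = alpha * gyroTilt + (1.0 - alpha) * accelTiltDeg; // blend with gravity
    }
};
```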
  • the motion of the game apparatus 10 may be calculated using so-called AR (augmented reality) technology.
  • a planar image (a planar view image, as opposed to the stereoscopically visible image described above) of the real world based on a camera image CI acquired from either one of the outer capturing section 23 and the inner capturing section 24 is displayed on the upper LCD 22 .
  • an image stereoscopically visible with the naked eye (a stereoscopic image) may be displayed on the upper LCD 22 .
  • the game apparatus 10 can display on the upper LCD 22 a stereoscopically visible image (stereoscopic image) using camera images acquired from the left outer capturing section 23 a and the right outer capturing section 23 b . In this case, drawing is performed such that the enemy objects EO are present in the stereoscopic image displayed on the upper LCD 22 , and the acquisition target object AO appears from the stereoscopic image.
  • the image processing described above is performed using a left-eye image obtained from the left outer capturing section 23 a and a right-eye image obtained from the right outer capturing section 23 b .
  • either one of the left-eye image and the right-eye image is used as the camera image from which a face image is extracted by performing a face recognition process, and the enemy objects EO or the acquisition target object AO, obtained by mapping a texture of the face image extracted from that image, are set in the virtual space.
  • a perspective transformation is performed from two virtual cameras (a stereo camera), on the enemy objects EO, the acquisition target object AO, and the bullet object BO, and the like that are placed in the virtual space, whereby a left-eye virtual world image and a right-eye virtual world image are obtained. Then, a left-eye display image is generated by combining a left-eye real world image with the left-eye virtual world image, and a right-eye display image is generated by combining a right-eye real world image with the right-eye virtual world image. Then, the left-eye display image and the right-eye display image are output to the upper LCD 22 .
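  • The per-eye composition described above can be sketched as a simple overlay of each eye's virtual world image on the corresponding real world image; RGBA8 buffers and straight alpha are assumptions, and the rendering from the two virtual cameras is omitted.

```cpp
// Combine one eye's real world image with that eye's virtual world image.
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGBA { std::uint8_t r, g, b, a; };
using Image = std::vector<RGBA>;  // row-major, same size for all buffers

Image composite(const Image& realWorld, const Image& virtualWorld) {
    Image out(realWorld.size());
    for (std::size_t i = 0; i < realWorld.size(); ++i) {
        double a = virtualWorld[i].a / 255.0;  // virtual object coverage
        out[i].r = std::uint8_t(virtualWorld[i].r * a + realWorld[i].r * (1.0 - a));
        out[i].g = std::uint8_t(virtualWorld[i].g * a + realWorld[i].g * (1.0 - a));
        out[i].b = std::uint8_t(virtualWorld[i].b * a + realWorld[i].b * (1.0 - a));
        out[i].a = 255;
    }
    return out;
}

// Usage (illustrative): the two display images are then output to the upper LCD.
// Image leftDisplay  = composite(leftRealImage,  leftVirtualImage);
// Image rightDisplay = composite(rightRealImage, rightVirtualImage);
```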
  • a real-time moving image captured by the real camera built into the game apparatus 10 is displayed on the upper LCD 22 , and display is performed such that the enemy objects EO and the acquisition target object AO appear in the moving image (camera image) captured by the real camera.
  • the images to be displayed on the upper LCD 22 have various possible variations.
  • a moving image recorded in advance, or a moving image or the like obtained from television broadcast or another device is displayed on the upper LCD 22 .
  • the moving image is displayed on the upper LCD 22 , and the enemy objects EO and the acquisition target object AO appear in the moving image.
  • a still image obtained from the real camera built into the game apparatus 10 or another real camera is displayed on the upper LCD 22 .
  • the still image obtained from the real camera is displayed on the upper LCD 22 , and the enemy objects EO and the acquisition target object AO appear in the still image.
  • the still image obtained from the real camera may be a still image of the real world captured in real time by the real camera built into the game apparatus 10 , or may be a still image of the real world photographed in advance by the real camera or another real camera, or may be a still image obtained from television broadcast or another device.
  • the upper LCD 22 is a parallax barrier type liquid crystal display device, and therefore is capable of switching between stereoscopic display and planar display by controlling the on/off states of the parallax barrier.
  • the upper LCD 22 may be a lenticular type liquid crystal display device, and therefore may be capable of displaying a stereoscopic image and a planar image.
  • an image is displayed stereoscopically by dividing two images captured by the outer capturing section 23 , each into vertical strips, and alternately arranging the divided vertical strips.
  • an image can be displayed in a planar manner by causing the user's right and left eyes to view one image captured by the inner capturing section 24 . That is, even the lenticular type liquid crystal display device is capable of causing the user's left and right eyes to view the same image by dividing one image into vertical strips, and alternately arranging the divided vertical strips. This makes it possible to display an image, captured by the inner capturing section 24 , as a planar image.
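  • The vertical-strip arrangement described above can be sketched as a column-by-column interleave of two images; passing the same image twice yields the planar display case. A strip width of one pixel column is an assumption made for brevity.

```cpp
// Interleave two images column by column for a lenticular-type display.
#include <cstddef>
#include <cstdint>
#include <vector>

using Pixels = std::vector<std::uint32_t>;  // row-major, width * height pixels

Pixels interleaveColumns(const Pixels& left, const Pixels& right, int width, int height) {
    Pixels out(static_cast<std::size_t>(width) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            const Pixels& src = (x % 2 == 0) ? left : right;  // alternate vertical strips
            out[y * width + x] = src[y * width + x];
        }
    return out;
}

// Stereoscopic display:  interleaveColumns(leftEyeImage, rightEyeImage, w, h)
// Planar display:        interleaveColumns(innerCameraImage, innerCameraImage, w, h)
```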
  • the descriptions are given using the hand-held game apparatus 10 .
  • the present invention may be achieved by causing a stationary game apparatus or an information processing apparatus, such as a general personal computer, to execute the image processing program according to the present invention.
  • any hand-held electronic device may be used, such as a personal digital assistant (PDA), a mobile phone, a personal computer, or a camera.
  • a mobile phone may include two display sections and a real camera on the main surface of a housing.
  • the image processing is performed by the game apparatus 10 .
  • at least some of the process steps in the image processing may be performed by another device.
  • when the game apparatus 10 is configured to communicate with another device (e.g., a server or another game apparatus), the process steps in the image processing may be performed by the cooperation of the game apparatus 10 and said another device.
  • the game apparatus 10 performs a face image acquisition process and game processing for permitting face images to be saved in an accumulating manner, and the face images that serve as targets to be permitted to be saved when the game has been successful may be saved in another device.
  • a plurality of game apparatuses 10 save face images in another device in an accumulating manner, and this further encourages collection of face images.
  • this may also possibly create a different enjoyment by browsing face images saved by other game apparatuses 10 .
  • another device may perform the processes of steps 52 through 57 of FIG. 29
  • the game apparatus 10 may perform the processes of steps 58 and 59 of FIG. 29 , by the cooperation of the game apparatus 10 and said another device.
  • the image processing described above can be performed by a processor, or by the cooperation of a plurality of processors, included in an information processing system that includes at least one information processing apparatus.
  • the processing of the flow chart described above is performed in accordance with the execution of a predetermined program by the information processing section 31 of the game apparatus 10 .
  • some or all of the processing may be performed by a dedicated circuit provided in the game apparatus 10 .
  • the shape of the game apparatus 10 and the shapes, the number, the placement, or the like of the various buttons of the operation button 14 , the analog stick 15 , and the touch panel 13 that are provided in the game apparatus 10 are merely illustrative, and the present invention can be achieved with other shapes, numbers, placements, and the like.
  • the processing orders, the setting values, the criterion values, and the like that are used in the image processing described above are also merely illustrative, and it is needless to say that the present invention can be achieved with other orders and values.
  • the image processing program (game program) described above may be supplied to the game apparatus 10 not only from an external storage medium, such as the external memory 45 or the data storage external memory 46 , but also via a wireless or wired communication link. Further, the program may be stored in advance in a non-volatile storage device of the game apparatus 10 . It should be noted that examples of the information storage medium having stored thereon the program may include a CD-ROM, a DVD, and any other optical disk storage medium similar to these, a flexible disk, a hard disk, a magnetic optical disk, and a magnetic tape, as well as a non-volatile memory. Furthermore, the information storage medium for storing the program may be a volatile memory that temporarily stores the program. Such storage media can be defined as storage media that can be read by a computer or the like. For example, a computer or the like is caused to read and execute the program stored in each of these storage media, and thereby can provide the various functions described above.
  • the present invention can be exemplified by the following forms (referred to as “appended notes”).
  • the components included in each appended note can be combined with the components included in the other appended notes.
  • a computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute as:
  • the game processing step including:
  • a computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute as:
  • the game processing step including:
  • a computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute as:
  • a step of creating a character object including a face image obtained by deforming the acquired face image
  • An image processing apparatus connectable to a display device comprising:
  • image acquisition means for acquiring a face image
  • game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing means including:
  • An image processing apparatus connectable to a display device comprising:
  • image acquisition means for acquiring at least one face image; means for creating a first character object, the first character object including one of the acquired face images;
  • game processing means for executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing means including:
  • An image processing apparatus connectable to a display device comprising:
  • image acquisition means for acquiring a face image
  • the character object including a face image obtained by deforming the acquired face image
  • game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
  • An image processing apparatus comprising:
  • image acquisition means for acquiring a face image
  • game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing means including:
  • An image processing apparatus comprising:
  • image acquisition means for acquiring at least one face image
  • game processing means for executing a game by displaying on the display device the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing means including:
  • An image processing apparatus comprising:
  • image acquisition means for acquiring a face image
  • the character object including a face image obtained by deforming the acquired face image
  • game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
  • An image processing system comprising:
  • a display device that displays information including an image acquired by the capturing device
  • the image processing apparatus including:
  • the game processing means including:
  • An image processing system comprising:
  • a display device that displays information including an image acquired by the capturing device
  • the image processing apparatus including:
  • the game processing means including:
  • An image processing system comprising:
  • a display device that displays information including an image acquired by the capturing device
  • the image processing apparatus including:
  • the game processing step including:
  • the game processing step including:
  • a step of creating a character object including a face image obtained by deforming the acquired face image
  • a storage medium having stored thereon a game program, an image processing apparatus, an image processing system, and an image processing method, according to the present invention can generate a new image by combining a real world image with a virtual world image, and therefore are suitable for use as a game program, an image processing apparatus, an image processing system, an image processing method, and the like that perform a process of displaying various images on a display device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A face image is acquired during a predetermined game or before a start of the predetermined game, and a first character object is created, the first character object including the face image. Then, in the predetermined game, a game related to the first character object is advanced in accordance with an operation of a player. At least when a success in the game related to the first character object has been determined, the face image is saved in a storage area in an accumulating manner.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • The disclosures of Japanese Patent Application No. 2010-232869, filed on Oct. 15, 2010, and Japanese Patent Application No. 2010-293443, filed on Dec. 28, 2010, are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a storage medium having stored thereon a game program, an image processing apparatus, an image processing system, and an image processing method.
  • 2. Description of the Background Art
  • Conventionally, as disclosed in, for example, Japanese Laid-Open Patent Publication No. 2006-72669, a proposal is made for an apparatus that displays an image obtained by combining an image of the real world with an image of a virtual world. Further, as disclosed in, for example, Japanese Laid-Open Patent Publication No. 2010-142592, a proposal is also made for an information processing technique, such as a game using a user image, which is information obtained in the real world. In this technique, in the progression of the game, at a time when a user image included in image data (e.g., an image of a human face area) satisfies conditions defined in advance, the image data is captured.
  • An image of a face area (hereinafter a “face image”) is an image of the most characteristic part of a living thing, and therefore is very useful as information for reflecting the real world on a virtual world. The conventional techniques, however, do not make sufficient use of the feature of a face image in which it is possible to reflect a situation in the real world on a virtual world.
  • SUMMARY OF THE INVENTION
  • It is an object of the present invention to assist a user in the acquisition, the collection, and the like of face images, and also to make it possible to represent a virtual world on which the real world is reflected by the face images.
  • To achieve the above object, the present invention may employ, for example, the following configurations. It is understood that when the description of the scope of the appended claims is interpreted, the scope should be interpreted only by the description of the scope of the appended claims. If the description of the scope of the appended claims contradicts the description of these columns, the description of the scope of the appended claims has priority.
  • In a configuration example of a computer-readable storage medium having stored thereon a game program according to the present invention, the game program is executed by a computer of a game apparatus that displays an image on a display device. The game program causes the computer to execute an image acquisition step, a step of creating a first character object, a first game processing step, a determination step, and a step of saving in a second storage area in an accumulating manner. The image acquisition step acquires a face image and temporarily stores the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game. The step of creating a first character object creates a first character object, the first character object being a character object including the face image stored in the first storage area. The first game processing step, in the predetermined game, advances a game related to the first character object in accordance with an operation of a player. The determination step determines a success in the game related to the first character object. The step of saving in a second storage area in an accumulating manner saves, at least when a success in the game has been determined in the determination step, the face image stored in the first storage area, in the second storage area in an accumulating manner.
  • Based on the above, until a success in the game is determined, the player cannot save in the second storage area the face image acquired during the game or before the start of the game and temporarily stored in the first storage area, and therefore enjoys a sense of tension. On the other hand, the face image may be saved in the second storage area when a success in the game has been determined, whereby, for example, it is possible to utilize the acquired face image even after the game of the first game processing step has ended. This causes the player to tackle the game of the first game processing step enthusiastically and with concentration.
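  • For readers who prefer code, the overall flow summarized above can be sketched as follows; every identifier is illustrative and the game logic itself is abstracted behind a callback.

```cpp
// Acquire a face image into a temporary area, create the first character object,
// advance the game, and save the image in the accumulating area only on success.
#include <optional>
#include <string>
#include <vector>

struct FaceImage { std::string pixels; };        // stand-in for image data
struct CharacterObject { FaceImage face; };

std::vector<FaceImage> savedDataArea;            // second storage area (accumulating)

void playGameWithFace(const FaceImage& acquired, bool (*advanceGame)(const CharacterObject&)) {
    std::optional<FaceImage> firstStorageArea = acquired;   // temporary (first) storage area
    CharacterObject firstCharacter{*firstStorageArea};      // character including the face image
    bool success = advanceGame(firstCharacter);             // game related to the first character
    if (success)
        savedDataArea.push_back(*firstStorageArea);         // save in an accumulating manner
    // on failure the temporarily stored face image is simply discarded
}
```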
  • In addition, in the image acquisition step, before the start of the predetermined game, the face image may be acquired and temporarily stored in the first storage area.
  • Based on the above, it is possible to execute the game of the first game processing step such that the face image acquired before the start of the predetermined game serves as a target to be saved in the second storage area.
  • In addition, the game program may further cause the computer to execute a step of creating a second character object. The step of creating a second character object creates a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area. In this case, in the first game processing step, in the predetermined game, a game related to the second character object may be additionally advanced in accordance with an operation of the player.
  • Based on the above, it is possible to utilize in a game the face image saved in the second storage area. Accordingly, it is possible to advance a game related to a character object including the face image acquired in the current predetermined game and related to a character object including a previously stored face image. This makes it possible to make various representations in the game.
  • In addition, the game program may further cause the computer to execute a step of creating a second character object and a second game processing step. The step of creating a second character object creates a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area. The second game processing step advances a game related to the second character object in accordance with an operation of the player.
  • Based on the above, the player can create the second character object including a face image selected from among the face images saved in the second storage area, and execute the game of the second game processing step. That is, the player can enjoy the game of the second game processing step by utilizing the face images stored by succeeding in the game of the first game processing step. In this case, the character object that appears in the game includes a face image selected automatically or by an operation of the player, and therefore, the player can simply introduce a mental picture of the real world into a virtual world.
  • In addition, the game apparatus may be capable of acquiring an image from a capturing device. In this case, in the image acquisition step, the face image may be acquired from the capturing device before the start of the predetermined game.
  • Based on the above, it is possible to cause a character object including a face image captured by the capturing device, to appear in a game, and save the face image captured by the capturing device, in an accumulating manner.
  • In addition, the game apparatus may be capable of acquiring an image from a first capturing device that captures a front direction of a display surface of the display device, and an image from a second capturing device that captures a direction of a back surface of the display surface of the display device, the first capturing device and the second capturing device serving as the capturing device. In this case, the image acquisition step may include: a step of acquiring a face image captured by the first capturing device in preference to acquiring a face image captured by the second capturing device; and a step of, after the face image from the first capturing device has been saved in the second storage area, permitting the face image captured by the second capturing device to be acquired.
  • Based on the above, the acquisition of a face image using the first capturing device is preferentially performed, the first capturing device capturing the front direction of the display surface of the display device. This increases the possibility that a face image of the player of the game apparatus or the like who views the display surface of the display device is preferentially acquired. This also increases the possibility of restricting the acquisition of an image with the second capturing device, which captures the direction of the back surface of the display surface of the display device, in the state where the player of the game apparatus or the like is not specified.
  • In addition, the game program may further cause the computer to execute a step of specifying attributes of the face images and a step of prompting the player to acquire a face image. The step of specifying attributes of the face images specifies attributes of the face images saved in the second storage area. The step of prompting the player to acquire a face image prompts the player to acquire a face image corresponding to an attribute different from the attributes specified from the face images saved in the second storage area.
  • Based on the above, it is possible to reduce the imbalance among the attributes of the face images saved in an accumulating manner, and thereby assist the player who wishes to save face images having various attributes in an accumulating manner.
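  • One simple policy (an assumption, not necessarily the patent's) for the prompting step described above is to tally the attributes of the already saved face images and suggest the least represented attribute, as sketched below with illustrative attribute labels.

```cpp
// Suggest an attribute different from (or under-represented among) the saved ones.
#include <iostream>
#include <map>
#include <string>
#include <vector>

std::string attributeToPrompt(const std::vector<std::string>& savedAttributes) {
    const std::vector<std::string> allAttributes = {"male", "female", "child", "adult", "senior"};
    std::map<std::string, int> counts;
    for (const auto& a : savedAttributes) ++counts[a];
    std::string target = allAttributes.front();
    for (const auto& a : allAttributes)
        if (counts[a] < counts[target]) target = a;   // least represented attribute
    return target;
}

int main() {
    std::vector<std::string> saved = {"male", "male", "adult", "female"};
    std::cout << "Try acquiring a face image of: " << attributeToPrompt(saved) << '\n';
}
```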
  • In addition, the first game processing step may include a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player. In this case, in the first game processing step, an attack on the first character object may be a valid attack for succeeding in the game related to the first character object, and an attack on the second character object may be an invalid attack for succeeding in the game related to the first character object.
  • Based on the above, the player needs to control an attack on the second character object in the game of the first game processing step, and therefore selects and attacks the first character object. Thus, the player needs to correctly recognize the first character object, and requires concentration.
  • In addition, the game program may further cause the computer to execute a step of creating a third character object. The step of creating a third character object creates a third character object, the third character object being a character object including a face image different from the face image included in the second character object. In this case, the second game processing step may include a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player. In the second game processing step, an attack on the second character object may be a valid attack for succeeding in the game related to the second character object, and an attack on the third character object may be an invalid attack for succeeding in the game related to the second character object.
  • Based on the above, the player needs to control an attack on the third character object in the game of the second game processing step, and therefore, selects and attacks the second character object. Thus, the player needs to correctly recognize the second character object, and requires concentration.
  • In addition, the game program may further cause the computer to execute a step of creating a third character object and a step of creating a fourth character object. The step of creating a third character object creates a third character object, the third character object being a character object including the face image stored in the first storage area and being smaller in dimensions than the first character object. The step of creating a fourth character object creates a fourth character object, the fourth character object being a character object including a face image different from the face image stored in the first storage area and being smaller in dimensions than the first character object. In this case, the first game processing step may include: a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the first character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the first character object approaches the original face image stored in the first storage area.
  • Based on the above, the player needs to correctly recognize the first character object, the third character object, and the fourth character object, and requires concentration. Particularly when the acquired face image is a face image of a person in an intimate relationship with the player, the player can tackle the game of the first game processing step increasingly enthusiastically.
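  • As an illustration of the attack rules described above, the deformation can be modeled as a single level between 0 and 1 that is advanced when the fourth character object is attacked and reversed toward the original face image when the third character object is attacked; the single-level model and the step size are assumptions.

```cpp
// Advance or reverse the deformation of the first character object's face image.
#include <algorithm>

struct FirstCharacterFace {
    double deformation = 0.5;   // 0.0 = original face image, 1.0 = fully deformed
};

void onThirdObjectAttacked(FirstCharacterFace& face, double step = 0.1) {
    face.deformation = std::max(0.0, face.deformation - step);  // approach the original image
}

void onFourthObjectAttacked(FirstCharacterFace& face, double step = 0.1) {
    face.deformation = std::min(1.0, face.deformation + step);  // advance the deformation
}
```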
  • In addition, the game program may further cause the computer to execute a step of creating a third character object and a step of creating a fourth character object. The step of creating a third character object creates a third character object, the third character object being a character object including the same face image as the face image included in the second character object and being smaller in dimensions than the second character object. The step of creating a fourth character object creates a fourth character object, the fourth character object being a character object including a face image different from the face image included in the second character object and being smaller in dimensions than the second character object. In this case, the second game processing step may include: a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player; a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the second character object; and a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the second character object approaches the original face image saved in the second storage area.
  • Based on the above, the player needs to correctly recognize the second character object, the third character object, and the fourth character object, and requires concentration.
  • In addition, in the step of creating the first character object, a character object including a face image obtained by deforming the face image stored in the first storage area may be created as the first character object. In this case, the first game processing step may include a step of, when the game related to the first character object has been successful, restoring the deformed face image to the original face image stored in the first storage area.
  • Based on the above, for example, when the acquired face image is a face image of a person in an intimate relationship with the player, the player can tackle the game of the first game processing step increasingly enthusiastically in order to restore the deformed face image to the original face image.
  • In addition, in the step of creating the second character object, a character object including a face image obtained by deforming the face image saved in the second storage area may be created as the second character object. In this case, the second game processing step may include a step of, when the game related to the second character object has been successful, restoring the deformed face image to the original face image saved in the second storage area.
  • Based on the above, for example, when the acquired face image is a face image of a person in an intimate relationship with the player, the player can tackle the game of the second game processing step increasingly enthusiastically in order to restore the deformed face image to the original face image.
  • In addition, in the image acquisition step, the face image may be acquired and temporarily stored in the first storage area during the predetermined game. In this case, in the first game processing step, in accordance with the creation of the first character object based on the acquisition of the face image during the predetermined game, the first character object may be caused to appear in the predetermined game, and the game related to the first character object may be advanced.
  • Based on the above, it is possible to execute the game of the first game processing step such that the face image acquired during the predetermined game serves as a target to be saved in the second storage area.
  • In addition, the game program may further cause the computer to execute a captured image acquisition step, a display image generation step, and a display control step. The captured image acquisition step acquires a captured image captured by a real camera. The display image generation step generates a display image in which a virtual character object that appears in the predetermined game is placed so as to have, as a background, the captured image acquired in the captured image acquisition step. The display control step displays on the display device the display image generated in the display image generation step. In this case, in the image acquisition step, during the predetermined game, at least one face image may be extracted from the captured image displayed on the display device, and may be temporarily stored in the first storage area.
  • Based on the above, a face image included in a captured image of the real world displayed as a background appears as a character object. This makes it possible to save the face image in an accumulating manner by a success in a game related to the character object.
  • In addition, in the display image generation step, the display image may be generated by placing the first character object such that, when displayed on the display device, the first character object overlaps a position of the face image in the captured image, the face image extracted in the image acquisition step.
  • Based on the above, it is possible to represent the first character object as if appearing from the captured image, and display an image as if the first character object is present in a real space captured by the real camera.
  • In addition, in the captured image acquisition step, captured images of a real world captured in real time by the real camera may be repeatedly acquired. In the display image generation step, the captured images repeatedly acquired in the captured image acquisition step may be sequentially set as the background. In the image acquisition step, face images corresponding to the already extracted face image may be repeatedly acquired from the captured images sequentially set as the background. In the step of creating the first character object, the first character object may be repeatedly created so as to include the face images repeatedly acquired in the image acquisition step. In the display image generation step, the display image may be generated by placing the repeatedly created first character object such that, when displayed on the display device, the repeatedly created first character object overlaps positions of the face images in the respective captured images, the face images repeatedly acquired in the image acquisition step.
  • Based on the above, even when the capturing position and the capturing direction of the real camera have changed, it is possible to place the first character object in accordance with the changes, and even when the position and the expression of a person who is a subject from which a face image has been acquired have changed, it is also possible to reflect the changes on the first character object. This makes it possible to display the first character object as if present in a real space represented by a captured image captured in real time.
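  • A per-frame sketch of the repeated acquisition and placement described above is given below; face detection is abstracted behind a callback and all names are assumptions, not the patent's implementation.

```cpp
// Each frame: re-locate the already extracted face in the new captured image,
// re-create the first character object with that face image, and place it so it
// overlaps the face position in the current background frame.
#include <functional>
#include <optional>

struct Rect { int x, y, w, h; };                 // face position in the captured image
struct Frame { int id; };                        // stand-in for a captured camera frame

struct FirstCharacter {
    Rect screenPosition;                         // where the object is drawn
    int  faceFromFrame;                          // which frame its face texture came from
};

std::optional<FirstCharacter>
updateFirstCharacter(const Frame& frame,
                     const std::function<std::optional<Rect>(const Frame&)>& findTrackedFace) {
    std::optional<Rect> face = findTrackedFace(frame);   // re-acquire the corresponding face
    if (!face) return std::nullopt;                      // face not found in this frame
    return FirstCharacter{*face, frame.id};              // overlap the face position in this frame
}
```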
  • In addition, the game apparatus may be capable of using image data stored in storage means for storing data non-temporarily. In this case, in the image acquisition step, before the start of the predetermined game, at least one face image may be extracted from the image data stored in the storage means, and may be temporarily stored in the first storage area.
  • Based on the above, a face image is acquired from image data stored in advance in the game apparatus. This makes it possible that a face image acquired in advance by another application or the like (e.g., a face image included in an image photographed by a camera capturing application, or included in an image received from another device by a communication application) serves as an acquisition target.
  • In addition, other configuration examples of the present invention may be carried out in the form of an image processing apparatus and an image processing system that include means for executing the above steps, and may be carried out in the form of an image processing method including operations performed in the above steps.
  • Based on a configuration example of the present invention, it is possible to assist the user in the acquisition, the collection, and the like of face images, and also to represent a virtual world on which the real world is reflected by the face images.
  • These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a front view showing an example of a game apparatus 10 in an open state;
  • FIG. 2 is a side view showing an example of the game apparatus 10 in the open state;
  • FIG. 3A is a left side view showing an example of the game apparatus 10 in a closed state;
  • FIG. 3B is a front view showing an example of the game apparatus 10 in the closed state;
  • FIG. 3C is a right side view showing an example of the game apparatus 10 in the closed state;
  • FIG. 3D is a rear view showing an example of the game apparatus 10 in the closed state;
  • FIG. 4 is a diagram showing an example of a user holding the game apparatus 10 with both hands;
  • FIG. 5 is a diagram showing an example of a user holding the game apparatus 10 with one hand;
  • FIG. 6 is a block diagram showing an example of the internal configuration of the game apparatus 10;
  • FIG. 7 is an example of face images displayed as a list;
  • FIG. 8 is another example of face images displayed as a list;
  • FIG. 9 is an example of a screen for attaching a face image to an enemy object;
  • FIG. 10 is a diagram illustrating a face image selection screen;
  • FIG. 11 is a diagram showing an example of various data stored in a main memory in accordance with the execution of an image processing program according to the present invention by the game apparatus of FIG. 1;
  • FIG. 12 is an example of the data structure of face image management information;
  • FIG. 13 is an example of an aggregate table where already acquired face images are classified by attribute;
  • FIG. 14 is a flow chart showing an example of the operation of the game apparatus 10 according to a first embodiment;
  • FIG. 15 is a flow chart showing an example of a detailed process of a face image acquisition process 1;
  • FIG. 16 is a flow chart showing an example of a detailed process of a face image acquisition process 2;
  • FIG. 17 is a flow chart showing an example of a detailed process of a list display process;
  • FIG. 18 is a flow chart showing an example of a detailed process of a cast determination process;
  • FIG. 19A is a flow chart showing an example of a detailed process of a face image management assistance process 1;
  • FIG. 19B is a flow chart showing an example of a detailed process of a face image management assistance process 2;
  • FIG. 19C is a flow chart showing an example of a detailed process of a face image management assistance process 3;
  • FIG. 20A is a diagram showing an overview of a virtual space, which is an example of the image processing program;
  • FIG. 20B is a diagram showing the relationship between a screen model and an α-texture;
  • FIG. 21 is a diagram showing an example of the virtual space;
  • FIG. 22 is a diagram showing a virtual three-dimensional space (game world) defined in a game program, which is an example of the image processing program;
  • FIG. 23 is an example of process steps of examples of the forms of display performed on an upper LCD of a game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 24 is an example of process steps of examples of the forms of display performed on the upper LCD of the game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 25 is an example of process steps of examples of the forms of display performed on the upper LCD of the game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 26 is an example of process steps of examples of the forms of display performed on the upper LCD of the game apparatus, which is an example of the apparatus that executes the image processing program;
  • FIG. 27A is a diagram showing an example of silhouette models of a shadow object as viewed from above;
  • FIG. 27B is a diagram showing an example of silhouette models of the shadow object;
  • FIG. 28 is a diagram showing an example of the non-transparencies of objects;
  • FIG. 29 is a flow chart showing an example of the operation of image processing performed by the game apparatus executing the image processing program;
  • FIG. 30 is a subroutine flow chart showing an example of a detailed operation of an enemy-object-related process;
  • FIG. 31 is a subroutine flow chart showing an example of a detailed operation of a bullet-object-related process;
  • FIG. 32A is a subroutine flow chart showing an example of a detailed operation of a display image updating process (a first drawing method) of the image processing program;
  • FIG. 32B is a subroutine flow chart showing an example of a detailed operation of a display image updating process (a second drawing method) of the image processing program according to the present invention;
  • FIG. 33 is a diagram illustrating an example of a rendering process in the first drawing method;
  • FIG. 34 is a diagram illustrating the positional relationships between objects of FIG. 33;
  • FIG. 35 is a diagram illustrating an example of a process of rendering a camera image;
  • FIG. 36 is a diagram illustrating an example of a coordinate system used when the camera image is rendered;
  • FIG. 37 is a diagram illustrating an example of a process of rendering a virtual space;
  • FIG. 38 is a diagram illustrating the positional relationship between objects of FIG. 37;
  • FIG. 39 is a diagram illustrating an example of a coordinate system of a boundary surface, used when the virtual space is rendered;
  • FIG. 40 is a diagram showing an example of a display image generated by the image processing program;
  • FIG. 41 is a diagram showing an example of a screen of a game apparatus according to a second embodiment;
  • FIG. 42 is a flow chart showing an example of the operation of the game apparatus according to the second embodiment;
  • FIG. 43 is a diagram showing an example of a screen of a game apparatus according to a third embodiment;
  • FIG. 44 is a flow chart showing an example of the operation of the game apparatus according to the third embodiment;
  • FIG. 45 is a diagram showing an example of a screen according to a fourth embodiment;
  • FIG. 46 is a flow chart showing an example of the operation of a game apparatus according to the fourth embodiment;
  • FIG. 47 is a diagram showing an example of a screen displayed on an upper LCD of a game apparatus according to a fifth embodiment;
  • FIG. 48 is a diagram showing an example of the screen displayed on the upper LCD of the game apparatus according to the fifth embodiment;
  • FIG. 49 is a subroutine flow chart showing an example of a detailed operation of a during-game face image acquisition process performed by executing the image processing program according to the fifth embodiment;
  • FIG. 50 is a subroutine flow chart showing an example of a detailed operation of a yet-to-appear process performed in step 202 of FIG. 49; and
  • FIG. 51 is a subroutine flow chart showing an example of a detailed operation of an already-appeared process performed in step 208 of FIG. 49.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS First Embodiment
  • A description is given of a specific example of an image processing apparatus that executes an image processing program according to an embodiment of the present invention. The following embodiment, however, is merely illustrative, and the present invention is not limited to the configuration of the following embodiment.
  • It should be noted that in the following embodiment, data processed by a computer is illustrated using graphs and natural language. More specifically, however, the data is specified by computer-recognizable pseudo-language, commands, parameters, machine language, arrays, and the like. The present invention does not limit the method of representing the data.
  • <Configuration Example of Hardware>
  • First, with reference to the drawings, a description is given of a hand-held game apparatus 10 as an example of the image processing apparatus that executes the image processing program according to the present embodiment. The image processing apparatus according to the present invention, however, is not limited to a game apparatus. The image processing apparatus according to the present invention may be a given computer system, such as a general-purpose computer.
  • It should be noted that the image processing program according to the present embodiment is a game program. The image processing program according to the present invention, however, is not limited to a game program. The image processing program according to the present invention can be applied by being executed by a given computer system. Further, the processes of the present embodiment may be subjected to distributed processing by a plurality of networked devices, or may be performed by a network system where, after main processes are performed by a server, the process results are distributed to terminals, or may be performed by a so-called cloud network.
  • FIGS. 1, 2, 3A, 3B, 3C, and 3D are each a plan view showing an example of the appearance of the game apparatus 10. The game apparatus 10 shown in FIGS. 1 through 3D includes a capturing section (camera), and therefore is capable of capturing an image with the capturing section, displaying the captured image on a screen, and storing data of the captured image. Further, the game apparatus 10 is capable of executing a game program stored in an exchangeable memory card, or a game program received from a server or another game apparatus via a network. The game apparatus 10 is also capable of displaying on the screen an image generated by computer graphics processing, such as an image captured by a virtual camera set in a virtual space. It should be noted that in the present specification, the act of obtaining image data with the camera is described as “capturing”, and the act of storing the image data of the captured image is described as “photographing”.
  • The game apparatus 10 shown in FIGS. 1 through 3D includes a lower housing 11 and an upper housing 21. The lower housing 11 and the upper housing 21 are joined together by a hinge structure so as to be openable and closable in a folding manner (foldable). That is, the upper housing 21 is attached to the lower housing 11 so as to be rotatable (pivotable) relative to the lower housing 11. Thus, the game apparatus 10 has the following two forms: a closed state where the upper housing 21 is in firm contact with the lower housing 11 (FIGS. 3A and 3C); and a state where the upper housing 21 has rotated relative to the lower housing 11 such that the state of firm contact is released (an open state). The rotation of the upper housing 21 is allowed to the position where, as shown in FIG. 2, the upper housing 21 and the lower housing 11 are approximately parallel to each other in the open state (see FIG. 2).
  • FIG. 1 is a front view showing an example of the game apparatus 10 being open (in the open state). A planar shape of each of the lower housing 11 and the upper housing 21 is a wider-than-high rectangular plate-like shape having a longitudinal direction (horizontal direction (left-right direction): an x-direction in FIG. 1) and a transverse direction ((up-down direction): a y-direction in FIG. 1). The lower housing 11 and the upper housing 21 are joined together at the longitudinal upper outer edge of the lower housing 11 and the longitudinal lower outer edge of the upper housing 21 by a hinge structure so as to be rotatable relative to each other. Normally, a user uses the game apparatus 10 in the open state. The user stores away the game apparatus 10 in the closed state. Further, the upper housing 21 can maintain the state of being stationary at a desired angle formed between the lower housing 11 and the upper housing 21 due, for example, to a frictional force generated at the connecting part between the lower housing 11 and the upper housing 21. That is, the game apparatus 10 can maintain the upper housing 21 stationary at a desired angle with respect to the lower housing 11. Generally, in view of the visibility of a screen provided in the upper housing 21, the upper housing 21 is open at a right angle or an obtuse angle with the lower housing 11. Hereinafter, in the closed state of the game apparatus 10, the respective opposing surfaces of the upper housing 21 and the lower housing 11 are referred to as “inner surfaces” or “main surfaces”. Further, the surfaces opposite to the respective inner surfaces (main surfaces) of the upper housing 21 and the lower housing 11 are referred to as “outer surfaces”.
  • Projections 11A are provided at the upper long side portion of the lower housing 11, each projection 11A projecting perpendicularly (in a z-direction in FIG. 1) to an inner surface (main surface) 11B of the lower housing 11. A projection (bearing) 21A is provided at the lower long side portion of the upper housing 21, the projection 21A projecting perpendicularly from the lower side surface of the upper housing 21. Within the projections 11A, 21A, and 11A, for example, a rotating shaft (not shown) is accommodated so as to extend in the x-direction from one of the projections 11A through the projection 21A to the other projection 11A. The upper housing 21 is freely rotatable about the rotating shaft, relative to the lower housing 11. Thus, the lower housing 11 and the upper housing 21 are connected together in a foldable manner.
  • The inner surface 11B of the lower housing 11 shown in FIG. 1 includes a lower liquid crystal display (LCD) 12, a touch panel 13, operation buttons 14A through 14L, an analog stick 15, a first LED 16A, and a microphone hole 18.
  • The lower LCD 12 is accommodated in the lower housing 11. A planar shape of the lower LCD 12 is a wider-than-high rectangle, and is placed such that the long side direction of the lower LCD 12 coincides with the longitudinal direction of the lower housing 11 (the x-direction in FIG. 1). The lower LCD 12 is provided in the center of the inner surface (main surface) of the lower housing 11. The screen of the lower LCD 12 is exposed through an opening of the inner surface of the lower housing 11. The game apparatus 10 is in the closed state when not used, so that the screen of the lower LCD 12 is prevented from being soiled or damaged. As an example, the number of pixels of the lower LCD 12 is 320 dots×240 dots (horizontal×vertical). Unlike an upper LCD 22 described later, the lower LCD 12 is a display device that displays an image in a planar manner (not in a stereoscopically visible manner). It should be noted that although an LCD is used as a display device in the first embodiment, any other display device may be used, such as a display device using electroluminescence (EL). Further, a display device having a desired resolution may be used as the lower LCD 12.
  • The touch panel 13 is one of input devices of the game apparatus 10. The touch panel 13 is mounted so as to cover the screen of the lower LCD 12. In the first embodiment, the touch panel 13 may be, but is not limited to, a resistive touch panel. The touch panel may also be a touch panel of any type, such as an electrostatic capacitance type. In the first embodiment, the touch panel 13 has the same resolution (detection accuracy) as that of the lower LCD 12. The resolutions of the touch panel 13 and the lower LCD 12, however, need not necessarily be the same.
  • The operation buttons 14A through 14L are each an input device for providing a predetermined input. Among the operation buttons 14A through 14L, the cross button 14A (direction input button 14A), the button 14B, the button 14C, the button 14D, the button 14E, the power button 14F, the select button 14J, the home button 14K, and the start button 14L are provided on the inner surface (main surface) of the lower housing 11.
  • The cross button 14A is cross-shaped, and includes buttons for indicating at least up, down, left, and right directions, respectively. The cross button 14A is provided in a lower portion of the area to the left of the lower LCD 12. The cross button 14A is placed so as to be operated by the thumb of a left hand holding the lower housing 11.
  • The button 14B, the button 14C, the button 14D, and the button 14E are placed in a cross formation in an upper portion of the area to the right of the lower LCD 12. The button 14B, the button 14C, the button 14D, and the button 14E are placed where the thumb of a right hand holding the lower housing 11 is naturally placed. The power button 14F is placed in a lower portion of the area to the right of the lower LCD 12.
  • The select button 14J, the home button 14K, and the start button 14L are provided in a lower area of the lower LCD 12.
  • The buttons 14A through 14E, the select button 14J, the home button 14K, and the start button 14L are appropriately assigned functions, respectively, in accordance with the program executed by the game apparatus 10. The cross button 14A is used for, for example, a selection operation and a moving operation of a character during a game. The operation buttons 14B through 14E are used for, for example, a determination operation or a cancellation operation. The power button 14F is used to power on/off the game apparatus 10.
  • The analog stick 15 is a device for indicating a direction. The analog stick 15 is provided to an upper portion of the area to the left of the lower LCD 12 of the inner surface (main surface) of the lower housing 11. That is, the analog stick 15 is provided above the cross button 14A. The analog stick 15 is placed so as to be operated by the thumb of a left hand holding the lower housing 11. The provision of the analog stick 15 in the upper area places the analog stick 15 at the position where the thumb of the left hand of the user holding the lower housing 11 is naturally placed. The cross button 14A is placed at the position where the thumb of the left hand holding the lower housing 11 is moved slightly downward. This enables the user to operate the analog stick 15 and the cross button 14A by moving up and down the thumb of the left hand holding the lower housing 11. The key top of the analog stick 15 is configured to slide parallel to the inner surface of the lower housing 11. The analog stick 15 functions in accordance with the program executed by the game apparatus 10. When, for example, the game apparatus 10 executes a game where a predetermined object appears in a three-dimensional virtual space, the analog stick 15 functions as an input device for moving the predetermined object in the three-dimensional virtual space. In this case, the predetermined object is moved in the direction in which the key top of the analog stick 15 has slid. It should be noted that the analog stick 15 may be a component capable of providing an analog input by being tilted by a predetermined amount in any one of up, down, right, left, and diagonal directions.
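  • As a concrete illustration of the paragraph above, the sketch below (Python, given only for explanation) moves an object in a three-dimensional virtual space in the direction in which the key top of the analog stick 15 has slid. The function name, the normalized input range of the stick values, and the movement speed are assumptions, not part of the present embodiment.
```python
# Hedged sketch: stick_x/stick_y are assumed to be the slide of the key top,
# normalized to [-1.0, 1.0] on each axis; speed is an illustrative constant.
def move_object(position, stick_x, stick_y, speed=0.1):
    """Move an object on the horizontal plane of the virtual space."""
    x, y, z = position
    # A rightward slide moves the object along +x; a forward slide along -z.
    return (x + stick_x * speed, y, z - stick_y * speed)
```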
  • It should be noted that the four buttons, namely the button 14B, the button 14C, the button 14D, and the button 14E, and the analog stick 15 are placed symmetrically to each other with respect to the lower LCD 12. This also enables, for example, a left-handed person to provide a direction indication input using these four buttons, namely the button 14B, the button 14C, the button 14D, and the button 14E, depending on the game program.
  • The first LED 16A (FIG. 1) notifies the user of the on/off state of the power supply of the game apparatus 10. The first LED 16A is provided on the right of an end portion shared by the inner surface (main surface) of the lower housing 11 and the lower side surface of the lower housing 11. This enables the user to view whether or not the first LED 16A is lit on, regardless of the open/closed state of the game apparatus 10.
  • The microphone hole 18 is a hole for a microphone built into the game apparatus 10 as a sound input device. The built-in microphone detects a sound from outside the game apparatus 10 through the microphone hole 18. The microphone and the microphone hole 18 are provided below the power button 14F on the inner surface (main surface) of the lower housing 11.
  • The upper side surface of the lower housing 11 includes an opening 17 (a dashed line shown in FIGS. 1 and 3D) for a stylus 28. The opening 17 can accommodate the stylus 28 that is used to perform an operation on the touch panel 13. It should be noted that, normally, an input is provided to the touch panel 13 using the stylus 28. The touch panel 13, however, can be operated not only by the stylus 28 but also by a finger of the user.
  • The upper side surface of the lower housing 11 includes an insertion slot 11D (a dashed line shown in FIGS. 1 and 3D), into which an external memory 45 having a game program stored thereon is to be inserted. Within the insertion slot 11D, a connector (not shown) is provided for electrically connecting the game apparatus 10 and the external memory 45 in a detachable manner. The connection of the external memory 45 to the game apparatus 10 causes a processor included in internal circuitry to execute a predetermined game program. It should be noted that the connector and the insertion slot 11D may be provided on another side surface (e.g., the right side surface) of the lower housing 11.
  • The inner surface 21B of the upper housing 21 shown in FIG. 1 includes loudspeaker holes 21E, an upper LCD 22, an inner capturing section 24, a 3D adjustment switch 25, and a 3D indicator 26. The inner capturing section 24 is an example of a first capturing device.
  • The upper LCD 22 is a display device capable of displaying a stereoscopically visible image. The upper LCD 22 is capable of displaying a left-eye image and a right-eye image, using substantially the same display area. Specifically, the upper LCD 22 is a display device using a method in which the left-eye image and the right-eye image are displayed alternately in the horizontal direction in predetermined units (e.g., in every other line). It should be noted that the upper LCD 22 may be a display device using a method in which the left-eye image and the right-eye image are displayed alternately for a predetermined time. Further, the upper LCD 22 is a display device capable of displaying an image stereoscopically visible with the naked eye. In this case, a lenticular type display device or a parallax barrier type display device is used so that the left-eye image and the right-eye image that are displayed alternately in the horizontal direction can be viewed separately with the left eye and the right eye, respectively. In the first embodiment, the upper LCD 22 is a parallax-barrier-type display device. The upper LCD 22 displays an image stereoscopically visible with the naked eye (a stereoscopic image), using the right-eye image and the left-eye image. That is, the upper LCD 22 allows the user to view the left-eye image with their left eye, and the right-eye image with their right eye, using the parallax barrier. This makes it possible to display a stereoscopic image giving the user a stereoscopic effect (stereoscopically visible image). Furthermore, the upper LCD 22 is capable of disabling the parallax barrier. When disabling the parallax barrier, the upper LCD 22 is capable of displaying an image in a planar manner (the upper LCD 22 is capable of displaying a planar view image, as opposed to the stereoscopically visible image described above. This is a display mode in which the same displayed image can be viewed with both the left and right eyes). Thus, the upper LCD 22 is a display device capable of switching between: the stereoscopic display mode for displaying a stereoscopically visible image; and the planar display mode for displaying an image in a planar manner (displaying a planar view image). The switching of the display modes is performed by the 3D adjustment switch 25 described later.
  • The upper LCD 22 is accommodated in the upper housing 21. A planar shape of the upper LCD 22 is a wider-than-high rectangle, and is placed at the center of the upper housing 21 such that the long side direction of the upper LCD 22 coincides with the long side direction of the upper housing 21. As an example, the area of the screen of the upper LCD 22 is set greater than that of the lower LCD 12. Specifically, the screen of the upper LCD 22 is set horizontally longer than the screen of the lower LCD 12. That is, the proportion of the width in the aspect ratio of the screen of the upper LCD 22 is set greater than that of the lower LCD 12. The screen of the upper LCD 22 is provided on the inner surface (main surface) 21B of the upper housing 21, and is exposed through an opening of the inner surface of the upper housing 21. Further, the inner surface of the upper housing 21 is covered by a transparent screen cover 27. The screen cover 27 protects the screen of the upper LCD 22, and integrates the upper LCD 22 and the inner surface of the upper housing 21, and thereby provides unity. As an example, the number of pixels of the upper LCD 22 is 800 dots×240 dots (horizontal×vertical). It should be noted that an LCD is used as the upper LCD 22 in the first embodiment. The upper LCD 22, however, is not limited to this, and a display device using EL or the like may be used. Furthermore, a display device having any resolution may be used as the upper LCD 22.
  • The loudspeaker holes 21E are holes through which sounds from loudspeakers 44 that serve as a sound output device of the game apparatus 10 are output. The loudspeaker holes 21E are placed symmetrically with respect to the upper LCD 22. Sounds from the loudspeakers 44 described later are output through the loudspeaker holes 21E.
  • The inner capturing section 24 functions as a capturing section having an imaging direction that is the same as the inward normal direction of the inner surface 21B of the upper housing 21. The inner capturing section 24 includes an imaging device having a predetermined resolution, and a lens. The lens may have a zoom mechanism.
  • The inner capturing section 24 is placed: on the inner surface 21B of the upper housing 21; above the upper edge of the screen of the upper LCD 22; and in the center of the upper housing 21 in the left-right direction (on the line dividing the upper housing 21 (the screen of the upper LCD 22) into two equal left and right portions). Such a placement of the inner capturing section 24 makes it possible that when the user views the upper LCD 22 from the front thereof, the inner capturing section 24 captures the user's face from the front thereof. A left outer capturing section 23 a and a right outer capturing section 23 b will be described later.
  • The 3D adjustment switch 25 is a slide switch, and is used to switch the display modes of the upper LCD 22 as described above. The 3D adjustment switch 25 is also used to adjust the stereoscopic effect of a stereoscopically visible image (stereoscopic image) displayed on the upper LCD 22. The 3D adjustment switch 25 is provided at an end portion shared by the inner surface and the right side surface of the upper housing 21, so as to be visible to the user, regardless of the open/closed state of the game apparatus 10. The 3D adjustment switch 25 includes a slider that is slidable to any position in a predetermined direction (e.g., the up-down direction), and the display mode of the upper LCD 22 is set in accordance with the position of the slider.
  • When, for example, the slider of the 3D adjustment switch 25 is placed at the lowermost position, the upper LCD 22 is set to the planar display mode, and a planar image is displayed on the screen of the upper LCD 22. It should be noted that the same image may be used as the left-eye image and the right-eye image, while the upper LCD 22 remains set to the stereoscopic display mode, and thereby performs planar display. On the other hand, when the slider is placed above the lowermost position, the upper LCD 22 is set to the stereoscopic display mode. In this case, a stereoscopically visible image is displayed on the screen of the upper LCD 22. When the slider is placed above the lowermost position, the visibility of the stereoscopic image is adjusted in accordance with the position of the slider. Specifically, the amount of deviation in the horizontal direction between the position of the right-eye image and the position of the left-eye image is adjusted in accordance with the position of the slider.
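  • The relationship between the slider position and the stereoscopic effect described above can be illustrated by the following sketch (Python, for explanation only). The mapping, the maximum deviation value, and the function name are assumptions; the present embodiment does not prescribe a particular formula.
```python
MAX_DEVIATION_PX = 12  # assumed maximum horizontal deviation, in pixels

def deviation_from_slider(slider_pos: float) -> int:
    """Map a slider position in [0.0, 1.0] to a horizontal deviation.

    0.0 corresponds to the lowermost position (planar display mode);
    larger values increase the deviation between the positions of the
    left-eye and right-eye images, and thus the stereoscopic effect.
    """
    if slider_pos <= 0.0:
        return 0
    return round(min(slider_pos, 1.0) * MAX_DEVIATION_PX)
```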
  • The 3D indicator 26 indicates whether or not the upper LCD 22 is in the stereoscopic display mode. For example, the 3D indicator 26 is an LED, and is lit on when the stereoscopic display mode of the upper LCD 22 is enabled. The 3D indicator 26 is placed on the inner surface 21B of the upper housing 21 near the screen of the upper LCD 22. Accordingly, when the user views the screen of the upper LCD 22 from the front thereof, the user can easily view the 3D indicator 26. This enables the user to easily recognize the display mode of the upper LCD 22 even when viewing the screen of the upper LCD 22.
  • FIG. 2 is a right side view showing an example of the game apparatus 10 in the open state. The right side surface of the lower housing 11 includes a second LED 16B, a wireless switch 19, and the R button 14H. The second LED 16B notifies the user of the establishment state of the wireless communication of the game apparatus 10. The game apparatus 10 is capable of wirelessly communicating with other devices, and the second LED 16B is lit on when wireless communication is established between the game apparatus 10 and other devices. The game apparatus 10 has the function of establishing connection with a wireless LAN by, for example, a method based on the IEEE 802.11b/g standard. The wireless switch 19 enables/disables the function of the wireless communication. The R button 14H will be described later.
  • FIG. 3A is a left side view showing an example of the game apparatus 10 being closed (in the closed state). The left side surface of the lower housing 11 shown in FIG. 3A includes an openable and closable cover section 11C, the L button 14G, and the sound volume button 14I. The sound volume button 14I is used to adjust the sound volume of the loudspeakers of the game apparatus 10.
  • Within the cover section 11C, a connector (not shown) is provided for electrically connecting the game apparatus 10 and a data storage external memory 46 (see FIG. 1). The data storage external memory 46 is detachably attached to the connector. The data storage external memory 46 is used to, for example, store (save) data of an image captured by the game apparatus 10. It should be noted that the connector and the cover section 11C may be provided on the right side surface of the lower housing 11. The L button 14G will be described later.
  • FIG. 3B is a front view showing an example of the game apparatus 10 in the closed state. The outer surface of the upper housing 21 shown in FIG. 3B includes a left outer capturing section 23 a, a right outer capturing section 23 b, and a third LED 29.
  • The left outer capturing section 23 a and the right outer capturing section 23 b each includes an imaging device (e.g., a CCD image sensor or a CMOS image sensor) having a predetermined common resolution, and a lens. The lens may have a zoom mechanism. The imaging directions of the left outer capturing section 23 a and the right outer capturing section 23 b (the optical axis of the camera) are each the same as the outward normal direction of the outer surface 21D. That is, the imaging direction of the left outer capturing section 23 a and the imaging direction of the right outer capturing section 23 b are parallel to each other. Hereinafter, the left outer capturing section 23 a and the right outer capturing section 23 b are collectively referred to as an “outer capturing section 23”. The outer capturing section 23 is an example of a second capturing device.
  • The left outer capturing section 23 a and the right outer capturing section 23 b included in the outer capturing section 23 are placed along the horizontal direction of the screen of the upper LCD 22. That is, the left outer capturing section 23 a and the right outer capturing section 23 b are placed such that a straight line connecting between the left outer capturing section 23 a and the right outer capturing section 23 b is placed along the horizontal direction of the screen of the upper LCD 22. When the user has pivoted the upper housing 21 at a predetermined angle (e.g., 90°) relative to the lower housing 11, and views the screen of the upper LCD 22 from the front thereof, the left outer capturing section 23 a is placed on the left side of the user viewing the screen, and the right outer capturing section 23 b is placed on the right side of the user (see FIG. 1). The distance between the left outer capturing section 23 a and the right outer capturing section 23 b is set to correspond to the distance between both eyes of a person, and may be set, for example, in the range from 30 mm to 70 mm. It should be noted, however, that the distance between the left outer capturing section 23 a and the right outer capturing section 23 b is not limited to this range. It should be noted that in the first embodiment, the left outer capturing section 23 a and the right outer capturing section 23 b are fixed to the housing 21, and therefore, the imaging directions cannot be changed.
  • The left outer capturing section 23 a and the right outer capturing section 23 b are placed symmetrically with respect to the line dividing the upper LCD 22 (the upper housing 21) into two equal left and right portions. Further, the left outer capturing section 23 a and the right outer capturing section 23 b are placed in the upper portion of the upper housing 21 and in the back of the portion above the upper edge of the screen of the upper LCD 22, in the state where the upper housing 21 is in the open state (see FIG. 1). That is, the left outer capturing section 23 a and the right outer capturing section 23 b are placed on the outer surface of the upper housing 21, and, if the upper LCD 22 is projected onto the outer surface of the upper housing 21, are placed above the upper edge of the screen of the projected upper LCD 22.
  • Thus, the left outer capturing section 23 a and the right outer capturing section 23 b of the outer capturing section 23 are placed symmetrically with respect to the center line of the upper LCD 22 extending in the transverse direction. This makes it possible that when the user views the upper LCD 22 from the front thereof, the imaging directions of the outer capturing section 23 coincide with the directions of the respective lines of sight of the user's right and left eyes. Further, the outer capturing section 23 is placed in the back of the portion above the upper edge of the screen of the upper LCD 22, and therefore, the outer capturing section 23 and the upper LCD 22 do not interfere with each other inside the upper housing 21. Further, when the inner capturing section 24 provided on the inner surface of the upper housing 21 as shown by a dashed line in FIG. 3B is projected onto the outer surface of the upper housing 21, the left outer capturing section 23 a and the right outer capturing section 23 b are placed symmetrically with respect to the projected inner capturing section 24. This makes it possible to reduce the upper housing 21 in thickness as compared to the case where the outer capturing section 23 is placed in the back of the screen of the upper LCD 22, or the case where the outer capturing section 23 is placed in the back of the inner capturing section 24.
  • The left outer capturing section 23 a and the right outer capturing section 23 b can be used as a stereo camera, depending on the program executed by the game apparatus 10. Alternatively, either one of the two outer capturing sections (the left outer capturing section 23 a and the right outer capturing section 23 b) may be used solely, so that the outer capturing section 23 can also be used as a non-stereo camera, depending on the program. When a program is executed for causing the left outer capturing section 23 a and the right outer capturing section 23 b to function as a stereo camera, the left outer capturing section 23 a captures a left-eye image, which is to be viewed with the user's left eye, and the right outer capturing section 23 b captures a right-eye image, which is to be viewed with the user's right eye. Yet alternatively, depending on the program, images captured by the two outer capturing sections (the left outer capturing section 23 a and the right outer capturing section 23 b) may be combined together, or may be used to compensate for each other, so that imaging can be performed with an extended imaging range. Yet alternatively, a left-eye image and a right-eye image that have a parallax may be generated from a single image captured using one of the outer capturing sections 23 a and 23 b, and a pseudo-stereo image as if captured by two cameras can be generated. To generate the pseudo-stereo image, it is possible to appropriately set the distance between virtual cameras.
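  • The last alternative above, a pseudo-stereo pair generated from a single captured image, can be sketched as follows. The embodiment itself refers to setting the distance between virtual cameras; the sketch below instead uses a crude horizontal-shift stand-in on a 2D pixel array, purely as an assumed illustration of producing two views with a parallax.
```python
def pseudo_stereo_from_single(image, shift_px=4):
    """Return (left_eye, right_eye) made by shifting one image horizontally.

    `image` is a list of rows (each row a list of pixel values); the edge
    pixel is repeated to fill the gap left by the shift. `shift_px` is an
    illustrative stand-in for the virtual-camera separation.
    """
    def shifted(img, dx):
        out = []
        for row in img:
            if dx >= 0:
                out.append([row[0]] * dx + row[:len(row) - dx])
            else:
                out.append(row[-dx:] + [row[-1]] * (-dx))
        return out

    return shifted(image, shift_px), shifted(image, -shift_px)
```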
  • The third LED 29 is lit on while the outer capturing section 23 is operating, thereby informing the user that the outer capturing section 23 is in operation. The third LED 29 is provided near the outer capturing section 23 on the outer surface of the upper housing 21.
  • FIG. 3C is a right side view showing an example of the game apparatus 10 in the closed state. FIG. 3D is a rear view showing an example of the game apparatus 10 in the closed state.
  • The L button 14G and the R button 14H are provided on the upper side surface of the lower housing 11 shown in FIG. 3D. The L button 14G is provided at the left end portion of the upper side surface of the lower housing 11, and the R button 14H is provided at the right end portion of the upper side surface of the lower housing 11. The L button 14G and the R button 14H are appropriately assigned functions, respectively, in accordance with the program executed by the game apparatus 10. For example, the L button 14G and the R button 14H function as shutter buttons (capturing instruction buttons) of the capturing sections described above.
  • It should be noted that although not shown in the figures, a rechargeable battery that serves as the power supply of the game apparatus 10 is accommodated in the lower housing 11, and the battery can be charged through a terminal provided on the side surface (e.g., the upper side surface) of the lower housing 11.
  • FIGS. 4 and 5 each show an example of the state of the use of the game apparatus 10. FIG. 4 is a diagram showing an example of a user holding the game apparatus 10 with both hands.
  • In the example shown in FIG. 4, the user holds the side surfaces and the outer surface (the surface opposite to the inner surface) of the lower housing 11 with both palms, middle fingers, ring fingers, and little fingers, such that the lower LCD 12 and the upper LCD 22 face the user. Such holding enables the user to perform operations on the operation buttons 14A through 14E and the analog stick 15 with their thumbs, and to perform operations on the L button 14G and the R button 14H with their index fingers, while holding the lower housing 11.
  • FIG. 5 is a diagram showing an example of a user holding the game apparatus 10 with one hand. In the example shown in FIG. 5, when providing an input on the touch panel 13, the user releases one of the hands having held the lower housing 11 therefrom, and holds the lower housing 11 only with the other hand. This makes it possible to provide an input to the touch panel 13 with the one hand.
  • FIG. 6 is a block diagram showing an example of the internal configuration of the game apparatus 10. The game apparatus 10 includes, as well as the components described above, electronic components, such as an information processing section 31, a main memory 32, an external memory interface (external memory I/F) 33, a data storage external memory I/F 34, a data storage internal memory 35, a wireless communication module 36, a local communication module 37, a real-time clock (RTC) 38, an acceleration sensor 39, an angular velocity sensor 40, a power circuit 41, and an interface circuit (I/F circuit) 42. These electronic components are mounted on electronic circuit boards, and are accommodated in the lower housing 11 (or may be accommodated in the upper housing 21).
  • The information processing section 31 is information processing means including a central processing unit (CPU) 311 that executes a predetermined program, a graphics processing unit (GPU) 312 that performs image processing, and the like. In the first embodiment, a predetermined program is stored in a memory (e.g., the external memory 45 connected to the external memory I/F 33, or the data storage internal memory 35) included in the game apparatus 10. The CPU 311 of the information processing section 31 executes the predetermined program, and thereby performs the image processing described later or game processing. It should be noted that the program executed by the CPU 311 of the information processing section 31 may be acquired from another device by communication with said another device. The information processing section 31 further includes a video RAM (VRAM) 313. The GPU 312 of the information processing section 31 generates an image in accordance with an instruction from the CPU 311 of the information processing section 31, and draws the image in the VRAM 313. The GPU 312 of the information processing section 31 outputs the image drawn in the VRAM 313 to the upper LCD 22 and/or the lower LCD 12, and the image is displayed on the upper LCD 22 and/or the lower LCD 12.
  • To the information processing section 31, the main memory 32, the external memory I/F 33, the data storage external memory I/F 34, and the data storage internal memory 35 are connected. The external memory I/F 33 is an interface for establishing a detachable connection with the external memory 45. The data storage external memory I/F 34 is an interface for establishing a detachable connection with the data storage external memory 46.
  • The main memory 32 is volatile storage means used as a work area or a buffer area of the information processing section 31 (the CPU 311). That is, the main memory 32 temporarily stores various types of data used for image processing or game processing, and also temporarily stores a program acquired from outside (the external memory 45, another device, or the like) the game apparatus 10. In the first embodiment, the main memory 32 is, for example, a pseudo SRAM (PSRAM).
  • The external memory 45 is nonvolatile storage means for storing the program executed by the information processing section 31. The external memory 45 is composed of, for example, a read-only semiconductor memory. When the external memory 45 is connected to the external memory I/F 33, the information processing section 31 can load a program stored in the external memory 45. In accordance with the execution of the program loaded by the information processing section 31, a predetermined process is performed. The data storage external memory 46 is composed of a readable/writable non-volatile memory (e.g., a NAND flash memory), and is used to store predetermined data. For example, the data storage external memory 46 stores images captured by the outer capturing section 23 and/or images captured by another device. When the data storage external memory 46 is connected to the data storage external memory I/F 34, the information processing section 31 loads an image stored in the data storage external memory 46, and the image can be displayed on the upper LCD 22 and/or the lower LCD 12.
  • The data storage internal memory 35 is composed of a readable/writable non-volatile memory (e.g., a NAND flash memory), and is used to store predetermined data. For example, the data storage internal memory 35 stores data and/or programs downloaded by wireless communication through the wireless communication module 36.
  • The wireless communication module 36 has the function of establishing connection with a wireless LAN by, for example, a method based on the IEEE 802.11b/g standard. Further, the local communication module 37 has the function of wirelessly communicating with another game apparatus of the same type by a predetermined communication method (e.g., infrared communication). The wireless communication module 36 and the local communication module 37 are connected to the information processing section 31. The information processing section 31 is capable of transmitting and receiving data to and from another device via the Internet, using the wireless communication module 36, and is capable of transmitting and receiving data to and from another game apparatus of the same type, using the local communication module 37.
  • The acceleration sensor 39 is connected to the information processing section 31. The acceleration sensor 39 detects the magnitudes of accelerations (linear accelerations) in the directions of straight lines along three axial (x, y, and z axes in the present embodiment) directions, respectively. The acceleration sensor 39 is provided, for example, within the lower housing 11. As shown in FIG. 1, the long side direction of the lower housing 11 is defined as an x-axis direction; the short side direction of the lower housing 11 is defined as a y-axis direction; and the direction perpendicular to the inner surface (main surface) of the lower housing 11 is defined as a z-axis direction. The acceleration sensor 39 thus detects the magnitudes of the linear accelerations produced in the respective axial directions. It should be noted that the acceleration sensor 39 is, for example, an electrostatic capacitance type acceleration sensor, but may be an acceleration sensor of another type. Further, the acceleration sensor 39 may be an acceleration sensor for detecting an acceleration in one axial direction, or accelerations in two axial directions. The information processing section 31 receives data indicating the accelerations detected by the acceleration sensor 39 (acceleration data), and calculates the orientation and the motion of the game apparatus 10.
  • The angular velocity sensor 40 is connected to the information processing section 31. The angular velocity sensor 40 detects angular velocities generated about three axes (x, y, and z axes in the present embodiment) of the game apparatus 10, respectively, and outputs data indicating the detected angular velocities (angular velocity data) to the information processing section 31. The angular velocity sensor 40 is provided, for example, within the lower housing 11. The information processing section 31 receives the angular velocity data output from the angular velocity sensor 40, and calculates the orientation and the motion of the game apparatus 10.
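  • As an illustration of how orientation and motion might be derived from these sensor outputs, the following sketch (Python; the function names and axis conventions are assumptions, not the method of the present embodiment) estimates tilt from the three axial accelerations when the apparatus is nearly at rest, and advances an orientation estimate by integrating the angular velocities over the sampling interval.
```python
import math

def tilt_from_acceleration(ax, ay, az):
    """Estimate pitch and roll (radians), assuming gravity dominates."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def integrate_angular_velocity(orientation, angular_velocity, dt):
    """Advance per-axis orientation angles by angular velocity * dt."""
    return tuple(o + w * dt for o, w in zip(orientation, angular_velocity))
```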
  • The RTC 38 and the power circuit 41 are connected to the information processing section 31. The RTC 38 counts time, and outputs the counted time to the information processing section 31. The information processing section 31 calculates the current time (date) based on the time counted by the RTC 38. The power circuit 41 controls the power from the power supply (the rechargeable battery accommodated in the lower housing 11, which is described above) of the game apparatus 10, and supplies power to each component of the game apparatus 10.
  • The I/F circuit 42 is connected to the information processing section 31. A microphone 43, a loudspeaker 44, and the touch panel 13 are connected to the I/F circuit 42. Specifically, the loudspeaker 44 is connected to the I/F circuit 42 through an amplifier not shown in the figures. The microphone 43 detects a sound from the user, and outputs a sound signal to the I/F circuit 42. The amplifier amplifies the sound signal from the I/F circuit 42, and outputs the sound from the loudspeaker 44. The I/F circuit 42 includes: a sound control circuit that controls the microphone 43 and the loudspeaker 44 (amplifier); and a touch panel control circuit that controls the touch panel 13. For example, the sound control circuit performs A/D conversion and D/A conversion on the sound signal, and converts the sound signal to sound data in a predetermined format. The touch panel control circuit generates touch position data in a predetermined format, based on a signal from the touch panel 13, and outputs the touch position data to the information processing section 31. The touch position data indicates the coordinates of the position (touch position), on the input surface of the touch panel 13, at which an input has been provided. It should be noted that the touch panel control circuit reads a signal from the touch panel 13, and generates the touch position data, once in a predetermined time. The information processing section 31 acquires the touch position data, and thereby recognizes the touch position, at which the input has been provided on the touch panel 13.
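  • A minimal sketch of the periodic generation of touch position data described above is given below; the polling interval, the caller-supplied callables, and the coordinate conversion are assumptions made only for illustration.
```python
import time

def poll_touch_positions(read_raw_signal, to_coordinates, interval=1.0 / 60.0):
    """Yield touch position data (or None when nothing is touched)."""
    while True:
        raw = read_raw_signal()            # signal read from the touch panel
        yield to_coordinates(raw) if raw is not None else None
        time.sleep(interval)               # read once in a predetermined time
```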
  • An operation button 14 includes the operation buttons 14A through 14L described above, and is connected to the information processing section 31. Operation data is output from the operation button 14 to the information processing section 31, the operation data indicating the states of inputs provided to the respective operation buttons 14A through 14L (indicating whether or not the operation buttons 14A through 14L have been pressed). The information processing section 31 acquires the operation data from the operation button 14, and thereby performs processes in accordance with the inputs provided to the operation button 14.
  • The lower LCD 12 and the upper LCD 22 are connected to the information processing section 31. The lower LCD 12 and the upper LCD 22 each display an image in accordance with an instruction from the information processing section 31 (the GPU 312). In the first embodiment, the information processing section 31 causes the lower LCD 12 to display an image for a hand-drawn image input operation, and causes the upper LCD 22 to display an image acquired from either one of the outer capturing section 23 and the inner capturing section 24. That is, for example, the information processing section 31 causes the upper LCD 22 to display a stereoscopic image (stereoscopically visible image) using a right-eye image and a left-eye image that are captured by the outer capturing section 23, or causes the upper LCD 22 to display a planar image using one of a right-eye image and a left-eye image that are captured by the outer capturing section 23.
  • Specifically, the information processing section 31 is connected to an LCD controller (not shown) of the upper LCD 22, and causes the LCD controller to set the parallax barrier to on/off. When the parallax barrier is on in the upper LCD 22, a right-eye image and a left-eye image that are stored in the VRAM 313 of the information processing section 31 (that are captured by the outer capturing section 23) are output to the upper LCD 22. More specifically, the LCD controller repeatedly alternates the reading of pixel data of the right-eye image for one line in the vertical direction, and the reading of pixel data of the left-eye image for one line in the vertical direction, and thereby reads the right-eye image and the left-eye image from the VRAM 313. Thus, the right-eye image and the left-eye image are each divided into strip images, each of which has one line of pixels placed in the vertical direction, and an image including the divided left-eye strip images and the divided right-eye strip images alternately placed is displayed on the screen of the upper LCD 22. The user views the images through the parallax barrier of the upper LCD 22, whereby the right-eye image is viewed with the user's right eye, and the left-eye image is viewed with the user's left eye. This causes the stereoscopically visible image to be displayed on the screen of the upper LCD 22.
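  • The strip-wise interleaving described in the preceding paragraph can be illustrated as follows (Python, with lists of rows standing in for pixel data; which eye image occupies the even columns is an assumption).
```python
def interleave_strips(left_img, right_img):
    """Alternate one-pixel-wide vertical strips of the two eye images."""
    height = len(left_img)
    width = len(left_img[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            source = left_img if x % 2 == 0 else right_img  # alternate columns
            row.append(source[y][x])
        out.append(row)
    return out
```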
  • The outer capturing section 23 and the inner capturing section 24 are connected to the information processing section 31. The outer capturing section 23 and the inner capturing section 24 each capture an image in accordance with an instruction from the information processing section 31, and output data of the captured image to the information processing section 31. In the first embodiment, the information processing section 31 gives either one of the outer capturing section 23 and the inner capturing section 24 an instruction to capture an image, and the capturing section that has received the instruction captures an image, and transmits data of the captured image to the information processing section 31. Specifically, the user selects the capturing section to be used, through an operation using the touch panel 13 and the operation button 14. The information processing section 31 (the CPU 311) detects that a capturing section has been selected, and gives the selected one of the outer capturing section 23 and the inner capturing section 24 an instruction to capture an image.
  • When started by an instruction from the information processing section 31 (CPU 311), the outer capturing section 23 and the inner capturing section 24 perform capturing at, for example, a speed of 60 images per second. The images captured by the outer capturing section 23 and the inner capturing section 24 are sequentially transmitted to the information processing section 31, and displayed on the upper LCD 22 or the lower LCD 12 by the information processing section 31 (GPU 312). When output to the information processing section 31, the captured images are stored in the VRAM 313, are output to the upper LCD 22 or the lower LCD 12, and are deleted at predetermined times. Thus, images are captured at, for example, a speed of 60 images per second, and the captured images are displayed, whereby the game apparatus 10 can display views in the imaging ranges of the outer capturing section 23 and the inner capturing section 24, on the upper LCD 22 or the lower LCD 12 in real time.
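  • The real-time display path described above amounts to a simple capture-display loop; the sketch below is illustrative only, and capture_frame, show_on_lcd, and running are hypothetical placeholders, not part of the present embodiment.
```python
import time

FRAME_INTERVAL = 1.0 / 60.0  # approximately 60 images per second

def live_view_loop(capture_frame, show_on_lcd, running):
    """Capture, display, and discard frames while `running()` is True."""
    while running():
        frame = capture_frame()   # image from the selected capturing section
        show_on_lcd(frame)        # drawn on the upper LCD 22 or lower LCD 12
        time.sleep(FRAME_INTERVAL)
```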
  • The 3D adjustment switch 25 is connected to the information processing section 31. The 3D adjustment switch 25 transmits to the information processing section 31 an electrical signal in accordance with the position of the slider.
  • The 3D indicator 26 is connected to the information processing section 31. The information processing section 31 controls whether or not the 3D indicator 26 is to be lit on. When, for example, the upper LCD 22 is in the stereoscopic display mode, the information processing section 31 lights on the 3D indicator 26.
  • <Descriptions of Functions>
  • Next, a description is given of an overview of an example of game processing performed by the game apparatus 10. The game apparatus 10 provides the function of collecting face images by acquiring and saving, for example, face images of people through the inner capturing section 24, the outer capturing section 23, or the like in accordance with an operation of a user (hereinafter also referred to as a “player”). To collect face images, the user executes a game (first game) using an acquired face image, and when the result of the game has been successful, the user can save the acquired image. It should be noted that the user can acquire a face image, which is a target to be saved, from: an image captured by the inner capturing section 24, the outer capturing section 23, or the like before executing the first game; an image acquired by an application different from the first game before executing the first game; an image captured by the inner capturing section 24, the outer capturing section 23, or the like during the execution of the first game; or the like. As described later, the game apparatus 10 saves the face image acquired before the first game or during the first game, in a saved data storage area Do (see FIG. 11), which is accessible during the execution of the game processing. Then, the user repeats a similar operation on the game apparatus 10, and thereby can collect a plurality of face images, add the face images to the saved data storage area Do, and accumulate the face images. The saved data storage area Do is an area accessible to the game apparatus 10 that is executing the game. Accordingly, the acquired face image is saved in the saved data storage area Do, and thereby is available in the subsequent processes. Further, the game apparatus 10 reads data accumulated in the saved data storage area Do, and thereby displays a list of face images collected as a result of the first game. Then, the game apparatus 10 executes a game (second game) using a face image selected by the user, or a face image automatically selected by the game apparatus 10, from among the displayed list of face images.
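  • The bookkeeping implied by the overview above can be sketched as follows (Python; the in-memory list merely stands in for the saved data storage area Do, and the function names are assumptions for explanation only).
```python
saved_face_images = []  # stands in for the saved data storage area Do

def on_first_game_finished(acquired_face_image, succeeded):
    """Add the acquired face image only when the first game has succeeded."""
    if succeeded:
        saved_face_images.append(acquired_face_image)

def list_collected_faces():
    """Return the accumulated face images for the list display."""
    return list(saved_face_images)
```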
  • The games executed by the game apparatus 10 (the first game and the second game) are, for example, each a game where the user makes an attack on enemy objects EO by aiming at them, and destroys the enemy objects EO. In the first embodiment, for example, a face image acquired by the user and yet to be saved in the saved data storage area Do, or a face image acquired by the user and already saved in the saved data storage area Do, is mapped as a texture onto a character object, such as an enemy object EO.
  • First, at the stage of the execution of the first game according to the first embodiment, the user can execute the first game by acquiring a desired face image through a capturing section, such as a camera. Then, when having succeeded in the first game, the user can save the acquired face image in the saved data storage area Do in an accumulating manner, cause a list of the face images to be displayed, and use the face images in the second game. Here, “in an accumulating manner” means that when the user has acquired a new face image and further succeeded in the first game, the new face image is added.
  • At the stage of the execution of the second game, the user can select a desired face image from among the collected face images, and create an enemy object EO. Then, the user can execute a game using the enemy object EO created using the desired face image, for example, a game where the user destroys the created enemy object EO. Without an operation of the user, however, the game apparatus 10 may automatically select a face image, for example, randomly from the saved data storage area Do, and may create an enemy object EO or the like. It should be noted that also at the stage of the execution of the first game, a character object may be created using a face image already collected in the saved data storage area Do, and may be caused to appear in the game together with a character object created using a face image yet to be saved in the saved data storage area Do. It should be noted that hereinafter, when a plurality of enemy objects EO are distinguished from one another, the enemy objects EO are referred to as, for example, “enemy objects EO1, EO2 . . . .” On the other hand, when the enemy objects EO1, EO2, and the like are collectively referred to, or a plurality of enemy objects do not need to be distinguished from one another, the enemy objects EO are referred to as “enemy objects EO”.
  • Next, with reference to FIGS. 7 through 10, a description is given of examples of display of the game apparatus 10 according to the first embodiment. FIG. 7 is an example of face images displayed as a list on the upper LCD 22 of the game apparatus 10. As described above, when the user has succeeded in the first game, the user can save acquired face images, and cause a list of the face images to be displayed as shown in FIG. 7. The face images displayed as a list are each obtained by texture-mapping image data acquired by, for example, the inner capturing section 24, the left outer capturing section 23 a, or the right outer capturing section 23 b, onto a three-dimensional model of a human facial surface. The image data is attached as a texture to, for example, the surface of a three-dimensional model formed by combining a plurality of polygons. In the game executed by the game apparatus 10, however, the face images are not limited to those obtained by performing texture-mapping on three-dimensional models. For example, the game apparatus 10 may display, as a face image, image data held in a simple two-dimensional pixel array. Alternatively, as shown in FIG. 7, among the face images displayed as a list, at least one face image may be held in a simple two-dimensional pixel array, and the other face images may be obtained by performing texture-mapping on three-dimensional models.
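  • The texture-mapping arrangement described above can be illustrated with the following data-structure sketch; the class and field names are assumptions, and no particular graphics library is implied.
```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FaceModel:
    vertices: List[Tuple[float, float, float]]   # 3D positions of the facial surface
    uvs: List[Tuple[float, float]]               # texture coordinates per vertex
    polygons: List[Tuple[int, int, int]]         # vertex indices forming each polygon
    texture: Optional[object] = None             # the acquired face image

def apply_face_texture(model: FaceModel, face_image) -> FaceModel:
    """Attach the captured face image; it is sampled via the UVs when drawn."""
    model.texture = face_image
    return model
```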
  • In FIG. 7, a face image G1 is surrounded by a heavy line L1. The heavy line L1 indicates that the face image G1 is in the state of being selected. “The state of being selected” means the state of being selected as a processing target by, for example, the user operating the operation buttons 14 or the like. The state of being selected is also referred to as “the state of being focused on”. For example, each time the user presses the operation buttons 14, the face image in the state of being selected is switched from left to right, or top to bottom. For example, when the user has pressed the right direction of the cross button 14A in the state of FIG. 7, the face image in the state of being selected transfers to the right, such as from the face image G1 to a face image G2 and from the face image G2 to the face image G3. “The face image in the state of being selected transfers” means that the face image surrounded by the heavy line L1 is switched on the screen of the upper LCD 22.
  • It should be noted that in FIG. 7, a horizontal row of face images is referred to as a “tier”. In the case where a face image G4 at the right end of one of the tiers is in the state of being selected, if the user further presses the right direction of the cross button 14A, the face image in the state of being selected is switched to a face image G5 at the left end of the next lower tier. Conversely, for example, in the case where the face image G5 at the left end of the lower tier is in the state of being selected, if the user presses the left direction of the cross button 14A, the face image in the state of being selected transfers upward and to the left, such as from the face image G5 to the face image G4 and from the face image G4 to the face image G3.
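  • Because the list is laid out as tiers read left to right, the transfer behavior described above (wrapping from the right end of one tier to the left end of the next lower tier, and vice versa) is simply a move of plus or minus one in a row-major index, as in the sketch below (illustrative only; the function name is an assumption).
```python
def move_selection(index, direction, total):
    """Advance the row-major index of the selected face image."""
    if direction == "right":
        return min(index + 1, total - 1)   # wraps to the next lower tier
    if direction == "left":
        return max(index - 1, 0)           # wraps to the next upper tier
    return index
```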
  • The switching of the state of being selected, however, is not limited to the pressing of the left and right directions of the cross button 14A, and the state of being selected may be switched by pressing the up and down directions. Further, the switching of the state of being selected is not limited to the pressing of the cross button 14A, and the image in the state of being selected may be switched by pressing other operation buttons 14, such as the operation button 14B (A button). Alternatively, the face image in the state of being selected may be switched by performing an operation on the touch panel 13 of the lower LCD 12. For example, the game apparatus 10 displays in advance on the lower LCD 12 a list of the face images similar to the list of the face images displayed on the upper LCD 22. Then, the game apparatus 10 may detect an operation on the touch panel 13, and thereby detect which face image has entered the state of being selected. Then, the game apparatus 10 may display the face image having entered the state of being selected, e.g., the face image G1, by surrounding the face image G1 by the heavy line L1.
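  • Detecting which face image has entered the state of being selected from a touch on the lower LCD 12 can be sketched as a grid hit test; the origin, cell size, column count, and image count below are assumptions made only for illustration.
```python
def face_index_from_touch(tx, ty, origin_x=8, origin_y=8,
                          cell_w=48, cell_h=48, columns=5, count=20):
    """Return the index of the touched face image, or None if outside the grid."""
    col = (tx - origin_x) // cell_w
    row = (ty - origin_y) // cell_h
    if col < 0 or col >= columns or row < 0:
        return None
    index = row * columns + col
    return index if index < count else None
```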
  • As described above, the user can browse a list of currently already acquired face images, using the screen shown in FIG. 7 displayed on the upper LCD 22. Further, the user can cause a desired face image, from among the list of currently already acquired images, to enter the state of being selected. Then, for example, the user can fix the state of being selected by selecting a predetermined determination button, such as a determination button displayed on the lower LCD 12, using the touch panel 13, the operation buttons 14, or the like. Furthermore, for example, the user may press the button 14C (also referred to as a “B button”), whereby the screen of the list of the face images on the upper LCD 22 is closed, and a screen that waits for a menu selection (not shown) is displayed.
  • FIG. 8 is another example of the face images displayed as a list. In the example of FIG. 8, the face image G2 is in the state of being selected, and is surrounded by a heavy line L2. In the example of FIG. 8, when the face image G2 is in the state of being selected, face images related to the face image G2 are reacting. For example, a heart mark is displayed near a face image G0, and the face image G0 is giving a look with one eye closed to the face image G2 in the state of being selected. Further, for example, the face image G3 and a face image G7 are also turning their faces toward and giving looks to the face image G2 in the state of being selected.
  • The reactions of the face images related to the face image in the state of being selected, however, are not limited to actions such as: turning its face; giving a look with one eye closed while a heart mark is displayed near the face image; and turning its face and giving a look. For example, a related face image may show reactions such as smiling and producing a voice. Conversely, a face image unrelated to the face image in the state of being selected may change its expression from a smiling expression to a straight expression. Alternatively, a face image unrelated to the face image in the state of being selected may turn in the direction opposite to the direction of the face image in the state of being selected.
  • Here, a related face image may be defined at the stage of, for example, the acquisition of a face image. For example, groups for classifying face images may be set in advance, and when a face image is acquired, the user may input the group to which the face image to be acquired belongs. Alternatively, for example, a group of face images may be defined in accordance with the progression of the game, and the face images may be classified. For example, when the face image G1 has been newly acquired using the face image G0 during the progression of the game, it may be determined that the face image G0 and the face image G1 are face images that are related to each other and belong to the same group.
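  • One way to record the group relationships suggested above is sketched below (Python); the dictionary-based grouping and the identifiers are assumptions, not the embodiment's data format.
```python
face_groups = {}  # face image identifier -> group identifier

def mark_related(existing_id, new_id):
    """Record that a newly acquired face image belongs to the same group."""
    group = face_groups.setdefault(existing_id, existing_id)
    face_groups[new_id] = group

def are_related(id_a, id_b):
    """True when both face images belong to the same group."""
    return face_groups.get(id_a, id_a) == face_groups.get(id_b, id_b)
```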
  • As shown in FIG. 8, when a face image, e.g., the face image G2, has been caused to enter the state of being selected through the operation buttons 14 or the like, the face images G0, G3, G7, and the like react. This makes it possible to represent a sense of affinity and the like between the acquired face images. That is, it is possible to not only simply classify collected face images into groups, but also represent the affinities between a plurality of face images by the expressions and the reactions of the face images. This makes it possible to represent in a virtual world of the game apparatus 10 the associations, the intimacies, and the like between people in the real world.
  • FIGS. 9 and 10 are examples of screens for attaching a face image to an enemy object EO, which is one of game characters. FIG. 9 is an example of an enemy object EO's head selection screen. For example, when the user has performed an operation on the touch panel 13 to select a menu “select boss” from menus displayed on the lower LCD 12 of the game apparatus 10, a list of the head shapes of enemy objects EO prepared in the game apparatus 10 as shown in FIG. 9 is displayed. Such an operation, however, is not limited to an operation on the touch panel 13, and the list of the head shapes as shown in FIG. 9 may be displayed by, for example, an operation on the operation buttons 14 or the like.
  • In FIG. 9, three types of head shapes, namely head shapes H1, H2, and H3, are displayed so as to be selectable. For example, the head shape H1 includes: a facial surface portion H12 formed of a three-dimensional model; and a peripheral portion H13 surrounding the facial surface portion H12. An enemy object EO to appear in the game is formed as shown in FIGS. 7 and 8 by texture-mapping a face image onto the facial surface portion H12, which is a three-dimensional model.
  • The peripheral portion H13 may have a shape suggesting a feature of the enemy object EO to appear in the game. For example, if the peripheral portion H13 has a shape representing a helmet, it is possible to convey an aggressive mental picture of the enemy object EO. In FIG. 9, three types of head shapes, namely the head shapes H1, H2, and H3, are illustrated on the list of the head shapes, which has two rows and four columns, and undefined marks H0 are displayed in addition to these head shapes. Thus, the head shapes H1 through H3 are merely illustrative, and the types of head shapes are not limited to three. For example, a new head shape may be added from a storage medium containing an upgraded game program, from a website on the Internet where in-game parts are provided, or the like.
  • In FIG. 9, a label LB with “boss” indicates that the head shape H1 has been caused by the user to enter the state of being selected. The label LB can be moved by, for example, the cross button 14A. The user performs an operation on the cross button 14A or the like to move the label LB, and thereby can cause, for example, the head shape H2 or H3 to enter the state of being selected. Then, after either head shape has been caused to enter the state of being selected, the user may press the operation button 14B (A button) or the like to determine the state of being selected. This determination fixes the label LB in place and confirms the selection of the head shape in the state of being selected, e.g., the head shape H1 in the example of FIG. 9. It should be noted that the user may press the operation button 14C (B button) when the screen shown in FIG. 9 is displayed on the upper LCD 22, whereby the enemy object EO's head selection screen is closed, and display returns to the previous operation screen.
  • FIG. 10 is a diagram illustrating a face image selection screen. For example, when the user has pressed the operation button 14B (A button) or the like to determine the head shape in the state of being selected in the state where the enemy object EO's head selection screen shown in FIG. 9 is displayed on the upper LCD 22, the screen shown in FIG. 10 is displayed. The screen shown in FIG. 10 is similar to those of FIGS. 7 and 8, but is different from those of FIGS. 7 and 8 in that the peripheral portion H13 of the head shape selected in FIG. 9 is added to the face image in the state of being selected.
  • On the screen shown in FIG. 10, the user can operate the cross button 14A or the like to switch the face image in the state of being selected. That is, the user can switch the face image in the state of being selected, such as from the face image G0 to G1 and from the face image G1 to G2. Then, display is performed such that the face image caused to enter the state of being selected is texture-mapped onto the facial surface portion of the head shape. For example, in the example of FIG. 10, the face image G2 is texture-mapped onto the facial surface portion H12 of the head shape H1 selected on the screen shown in FIG. 9, and is displayed in combination with the peripheral portion H13.
  • Such a combination of the peripheral portion H13 and the face image G2 is displayed, whereby an enemy object EO is temporarily created. The user imagines a mental picture of the enemy to confront in the game, by the temporarily displayed enemy object EO. The user can operate the cross button 14A or the like to switch the face image in the state of being selected, and thereby can switch the face image of the enemy object EO. That is, the user can switch the faces of the enemy objects EO, one after another, and thereby can create an enemy object EO that fits a mental picture of the enemy to fight with in the game.
  • That is, in the game apparatus 10, for example, the face images collected by succeeding in the first game and accumulated in the saved data storage area Do as described above are used in the subsequent second game. That is, the game apparatus 10 performs game processing using enemy objects EO created using the collected face images. For example, in accordance with an operation of the user, the game apparatus 10 performs a process termed a “cast determination process” before the execution of the game, and generates enemy objects EO that fit mental pictures formed by the user, or the game apparatus 10 automatically generates enemy objects EO before the execution of the game. “Automatically” means that, for example, the game apparatus 10 generates the enemy objects EO to appear in the game by selecting the required number of face images randomly from among the collected face images. Further, for example, in accordance with the history of the game processing performed by the user in the past, the game apparatus 10 may create enemy objects EO by selecting face images expected to be desired next by the user, based on the properties, the taste, and the like of the user. In accordance with the execution history of the game up to the current time, the game apparatus 10 may select a face image to be used next, based on the attributes of the subjects of the face images, such as age, gender, friendship (family, friends, and relationships at work, school, and in the community), or, if a subject is a living thing such as a pet, the ownership of the subject. Further, for example, the game apparatus 10 may select a face image to be used next, based on the user's performance in games executed in the past.
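  • The automatic selection described above can be illustrated with a small sketch. The following C++ fragment shows only random cast selection under assumed names (FaceImage, pickCast, and the group field are not taken from the actual program); the history- and attribute-based selection mentioned above would replace the shuffle with a scoring step.

    // Minimal sketch of an automatic "cast determination": picking face images
    // for enemy objects EO at random from the collected set.
    #include <algorithm>
    #include <cstddef>
    #include <random>
    #include <string>
    #include <vector>

    struct FaceImage {
        int id;             // face image identification information
        std::string group;  // e.g., "friends", "family" (illustrative only)
    };

    // Randomly choose `count` face images to be texture-mapped onto enemy objects EO.
    std::vector<FaceImage> pickCast(std::vector<FaceImage> collected, std::size_t count) {
        std::mt19937 rng{std::random_device{}()};
        std::shuffle(collected.begin(), collected.end(), rng);
        if (count > collected.size()) count = collected.size();
        return std::vector<FaceImage>(collected.begin(), collected.begin() + count);
    }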
  • The game apparatus 10 performs game processing (the second game) using the enemy objects EO created by the specification made by such operations of the user, or created by the processing of the game apparatus 10. It should be noted that a character object that appears in the game, which is described using the term “enemy object EO” as an example, is not limited to an object having an adversarial relationship with the user, and may be a friend character object. Further, the present invention is not limited to a game where there are relationships such as enemies and friends, and may be a game where a player object representing the user themselves appears. Alternatively, the present invention may be, for example, a game where an object termed an “agent” appears, the object assisting the user in executing the game.
  • The game apparatus 10 executes a game where various character objects, such as the enemy objects EO described above, appear. To the character objects that appear in the game, the face images collected by the user succeeding in the first game are attached by texture mapping or the like. Accordingly, in the game executed by the game apparatus 10, the character objects including the face images collected by the user themselves appear. Thus, using images of portions representing the features of people and living things, such as face images, the user can execute a game where the real-world relationships with the people or living things represented by the face images are reflected on the various character objects. For example, it is possible to execute a game including emotions, such as affection, friendliness, favorable impression, and hatred.
  • It should be noted that also in the face image selection screen shown in FIG. 10, similarly to that of FIG. 8, when a face image has entered the state of being selected, other face images related to the face image in the state of being selected may show reactions. For example, in the example of FIG. 10, the face image G2 is in the state of being selected in combination with the peripheral portion H13 of the head shape H1, and a related face image, e.g., the face image G4, is smiling with its face turned toward the face image G2 and giving a look to the face image G2. Further, the face image G5 is giving an envious look to the face image G2 with its face turned upward. Furthermore, face images G8 and G9 are also giving looks to the face image G2. In FIG. 10, in contrast, the face images other than the face images G4, G5, G8, and G9 are not showing any reactions to the fact that the face image G2 has entered the state of being selected. Such differences in reaction make it possible to represent the relationships of affinity between a plurality of face images. As described above, when a face image has a specific relationship with the user, for example, when the face image and the user belong to a group of a plurality of friends, the game apparatus 10 can perform drawing by introducing intimacy relationships between people in the real world into the virtual world represented by the game apparatus 10.
  • <Example of Various Data>
  • FIG. 11 is a diagram showing an example of various data stored in the main memory 32 by executing the image processing program.
  • It should be noted that programs for performing the processing of the game apparatus 10 are included in a memory built into the game apparatus 10 (e.g., the data storage internal memory 35), or in the external memory 45 or the data storage external memory 46. When the game apparatus 10 is turned on, the programs are loaded into the main memory 32 from the built-in memory, from the external memory 45 through the external memory I/F 33, or from the data storage external memory 46 through the data storage external memory I/F 34, and are executed by the CPU 311.
  • Referring to FIG. 11, the main memory 32 stores the programs loaded from the built-in memory, the external memory 45, or the data storage external memory 46, and temporary data generated in the image processing. Referring to FIG. 11, in a data storage area of the main memory 32, the following are stored: operation data Da; real camera image data Db; real world image data Dc; boundary surface data Dd; back wall image data De; enemy object data Df; bullet object data Dg; score data Dh; motion data Di; virtual camera data Dj; rendered image data Dk; display image data Dl; aiming cursor image data Dm; management data Dn; saved data storage area Do; and the like. Further, in a program storage area of the main memory 32, a group of various programs Pa are stored that configure the image processing program.
  • <<Operation Data Da>>
  • The operation data Da indicates operation information of an operation of the user on the game apparatus 10. The operation data Da includes controller data Da1 and angular velocity data Da2. The controller data Da1 indicates that the user has operated a controller, such as the operation buttons 14 or the analog stick 15, of the game apparatus 10. The angular velocity data Da2 indicates the angular velocities detected by the angular velocity sensor 40. For example, the angular velocity data Da2 includes x-axis angular velocity data indicating an angular velocity about the x-axis, y-axis angular velocity data indicating an angular velocity about the y-axis, and z-axis angular velocity data indicating an angular velocity about the z-axis, the angular velocities being detected by the angular velocity sensor 40. For example, the operation data from the operation buttons 14 or the analog stick 15 and the angular velocity data from the angular velocity sensor 40 are acquired per unit of time in which the game apparatus 10 performs processing (e.g., 1/60 seconds), and are stored in the controller data Da1 and the angular velocity data Da2, respectively, in accordance with the acquisition, to thereby be updated.
  • It should be noted that game processing (e.g., the processes performed in FIG. 20A and thereafter) will be described later using an example where the controller data Da1 and the angular velocity data Da2 are each updated every one-frame period, which corresponds to the processing cycle. Alternatively, the controller data Da1 and the angular velocity data Da2 may be updated in another processing cycle. For example, the controller data Da1 may be updated in each cycle of detecting the operation of the user on a controller, such as the operation buttons 14 or the analog stick 15, and the updated controller data Da1 may be used in each processing cycle. In this case, the cycles of updating the controller data Da1 and the angular velocity data Da2 differ from the processing cycle.
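  • As a rough illustration of the data described above, the following C++ sketch models the operation data Da as a pair of Da1 and Da2 refreshed once per processing cycle. All type and function names here are assumptions made for this sketch only, not the actual program's definitions.

    // Hedged sketch of the operation data Da: controller data Da1 and angular
    // velocity data Da2, refreshed once per processing cycle (e.g., 1/60 s).
    #include <cstdint>

    struct ControllerData {          // Da1: which buttons / stick inputs are active
        std::uint32_t buttonBits;
        float stickX, stickY;
    };

    struct AngularVelocityData {     // Da2: angular velocities about the x, y, z axes
        float x, y, z;
    };

    struct OperationData {           // Da
        ControllerData controller;       // from the operation buttons 14 / analog stick 15
        AngularVelocityData angularVel;  // from the angular velocity sensor 40
    };

    // Called once per frame to refresh Da from the most recent hardware samples.
    void updateOperationData(OperationData& da,
                             const ControllerData& latestController,
                             const AngularVelocityData& latestGyro) {
        da.controller = latestController;   // overwrite Da1 with the newest sample
        da.angularVel = latestGyro;         // overwrite Da2 with the newest sample
    }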
  • <<Real Camera Image Data Db>>
  • The real camera image data Db indicates a real camera image captured by either one of the outer capturing section 23 and the inner capturing section 24. In the following descriptions of processing, in the step of acquiring a real camera image, the real camera image data Db is updated using a real camera image captured by either one of the outer capturing section 23 and the inner capturing section 24. It should be noted that the cycle of updating the real camera image data Db using the real camera image captured by the outer capturing section 23 or the inner capturing section 24 may be the same as the unit of time of the processing of the game apparatus 10 (e.g., 1/60 seconds), or may be shorter than this unit of time. When the cycle of updating the real camera image data Db is shorter than the cycle of the processing of the game apparatus 10, the real camera image data Db may be updated as necessary, independently of the processing described later. In this case, in the step described later of acquiring a real camera image, the process may be performed invariably using the most recent real camera image indicated by the real camera image data Db. Hereinafter, in the present embodiment, the real camera image data Db is data indicating a real camera image captured by the outer capturing section 23 (e.g., the left outer capturing section 23 a).
  • <<Real World Image Data Dc>>
  • In the game processing described later, e.g., the process of the execution of the game in step 18 of FIG. 14, more specifically, in the processes shown in FIG. 20A and thereafter, a boundary surface 3 is introduced, onto which a real camera image captured by a real camera of the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24) is texture-mapped. The real world image data Dc is data for generating a real world image that seems to be present on the boundary surface 3, using the real camera image captured by the real camera of the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24). In a first drawing method described later, for example, the real world image data Dc includes texture data of the real camera image for attaching the real world image to the boundary surface (a screen object in the display range of a virtual camera). Further, in a second drawing method described later, for example, the real world image data Dc includes: data of a planar polygon for generating the real world image; texture data of the real camera image to be mapped onto the planar polygon; and data indicating the position of the planar polygon in a virtual space (the position from a real world drawing camera described later).
  • <<Boundary Surface Data Dd>>
  • The boundary surface data Dd is data for, in combination with the real world image data Dc described above, generating the real world image that seems to be present on the boundary surface 3. In the first drawing method, for example, the boundary surface data Dd is data concerning the screen object, and includes: opening determination data (corresponding to data of an α-texture described later) indicating the state (e.g., the presence or absence of an opening) of each point included in the boundary surface 3; data indicating the placement position of the boundary surface 3 in the virtual space (the coordinates of the boundary surface 3 in the virtual space); and the like. Further, in the second drawing method, for example, the boundary surface data Dd is data for representing an opening in a planar polygon of the real world image, and includes: opening determination data (corresponding to data of an α-texture described later) indicating the state (e.g., the presence or absence of an opening) of each point included in the boundary surface 3; data indicating the placement position of the boundary surface 3 in the virtual space (the coordinates of the boundary surface 3 in the virtual space); and the like. The data indicating the placement position of the boundary surface 3 in the virtual space is, for example, conditional equations for a spherical surface (relational expressions for defining a spherical surface in the virtual space), and indicates the existence range of the boundary surface 3 in the virtual space.
  • The opening determination data indicating the state of being open is, for example, two-dimensional texture data (e.g., a rectangle of 2048 pixels×384 pixels) in which the alpha value (non-transparency) of each point can be set. The alpha value is a value from “0” to “1”, with “0” being minimum and “1” being maximum. An alpha value of “0” indicates transparent, and an alpha value of “1” indicates non-transparent. The opening determination data can thus indicate that a position where “0” is stored is in the state of being open, and a position where “1” is stored is not in the state of being open. The alpha value can be set in, for example, an image of a game world generated in the game apparatus 10, or in a pixel block unit including one pixel or a plurality of pixels in the upper LCD 22. In the present embodiment, a predetermined value greater than 0 but less than 1 (0.2 in the present embodiment) is stored in an unopened area. This value is not used as it is when applied to the real world image; when applied to the real world image, alpha values of “0.2” stored in the opening determination data are handled as “1”. It should be noted that an alpha value of “0.2” is used to draw a shadow ES of each of the enemy objects EO described above. The setting of the alpha value and the range of the alpha value, however, do not limit the image processing program according to the present invention.
  • In the image processing program according to the present embodiment, in the first drawing method, it is possible to generate the real world image having an opening by multiplying the opening determination data corresponding to an area of the range of the visual space of the virtual camera by the color information (pixel values) of a texture of the real world image to be attached to the boundary surface 3. Further, in the second drawing method, it is possible to generate the real world image having an opening by multiplying the opening determination data corresponding to an area of the range of the visual space of a virtual world drawing camera by the color information (pixel values) of the real world image (specifically, rendered image data of the real camera image rendered with a parallel projection described later using the real world image data Dc). This is because when alpha values of “0” stored at the position of the opening are multiplied by the color information of the real world image at that position, the values of the color information of the real world image become “0” (the state of being completely transparent).
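  • The multiplication described above can be sketched as follows. This is a minimal, illustrative C++ fragment, not the actual drawing code: each pixel of the real world image is multiplied by the corresponding alpha value of the opening determination data, and the shadow marker value of “0.2” is handled as “1” as noted above. All names are assumptions.

    // Generating the real world image with an opening: color values are multiplied
    // by the alpha values stored in the opening determination data.
    #include <cstddef>
    #include <vector>

    struct Pixel { float r, g, b; };

    void applyOpeningMask(std::vector<Pixel>& realWorldImage,
                          const std::vector<float>& openingAlpha) {
        for (std::size_t i = 0; i < realWorldImage.size() && i < openingAlpha.size(); ++i) {
            float a = openingAlpha[i];
            if (a > 0.0f && a < 1.0f) a = 1.0f;  // e.g., 0.2 (shadow marker) handled as 1
            realWorldImage[i].r *= a;            // a == 0 at an opening -> fully transparent
            realWorldImage[i].g *= a;
            realWorldImage[i].b *= a;
        }
    }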
  • It should be noted that in the first drawing method, as described later, an image to be displayed on the upper LCD 22 is generated by rendering a virtual space image in which virtual objects are placed so as to include an object of the real world image to which the opening determination data is applied.
  • In addition, in the second drawing method, specifically, as described later, the virtual space image is rendered, taking into account the opening determination data. That is, the priority of each virtual object relative to the boundary surface (the priority relative to the real world image) is determined based on the opening determination data, and the virtual space image is generated by rendering each virtual object. Then, an image to be displayed on the upper LCD 22 is generated by combining the real world image with the virtual space image generated as described above.
  • In addition, in the image processing program according to the present embodiment, the shape of the boundary surface 3 is a spherical surface (see FIGS. 20A and 20B). Then, in the present embodiment, the shape of the opening determination data may be defined as rectangular. The opening determination data of this rectangular shape is mapped onto a central portion of the spherical surface as shown in FIGS. 20A and 20B, whereby it is possible to cause the points of the opening determination data to correspond to the points of the boundary surface.
  • It should be noted that in the present embodiment, the opening determination data is data corresponding only to the central portion of the spherical surface shown in FIG. 20A. Accordingly, the opening determination data may not be present depending on the orientation of the virtual camera (the virtual world drawing camera in the second drawing method). When the opening determination data is not present as described above, the real world image is drawn as it is. That is, the real world image is drawn on the condition that α-values of “1” are set.
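  • A hedged sketch of this correspondence is given below: a direction on the spherical boundary surface 3 is converted into texel coordinates of the rectangular opening determination data, and an alpha value of 1 is returned wherever no data exists. The angular extent of the central band and all identifiers are assumptions for illustration only.

    // Looking up the opening determination data for a direction on the boundary surface 3.
    #include <cmath>
    #include <vector>

    struct OpeningDeterminationData {
        int width = 2048, height = 384;
        std::vector<float> alpha = std::vector<float>(2048 * 384, 0.2f);  // 0.2 = unopened

        // azimuth in [-pi, pi), elevation in [-pi/2, pi/2]; bandHalfAngle is assumed.
        float lookup(float azimuth, float elevation, float bandHalfAngle = 0.3f) const {
            if (std::fabs(elevation) > bandHalfAngle) return 1.0f;  // no data: draw as-is
            int u = static_cast<int>((azimuth / (2.0f * 3.14159265f) + 0.5f) * (width - 1));
            int v = static_cast<int>((elevation / (2.0f * bandHalfAngle) + 0.5f) * (height - 1));
            return alpha[static_cast<std::size_t>(v) * width + u];
        }
    };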
  • The image processing for an opening created in the boundary surface 3 will be described later.
  • <<Back Wall Image Data De>>
  • The back wall image data De is data concerning a back wall BW, which is present in a second space 2. For example, the back wall image data De includes: image data for generating an image of the back wall BW; data indicating the position of a polygon model defining the back wall BW in the virtual space; and the like.
  • The polygon model defining the back wall BW is typically a model that has a radius greater than that of the sphere shown in FIG. 20A, about a vertical axis extending through the position of the virtual camera (the virtual world drawing camera in the second drawing method), and has the same shape as that of the central portion of the sphere shown in FIG. 20A. That is, the model defining the back wall BW includes the boundary surface 3. Further, the polygon model may be a planar polygon placed behind the position of an opening to be formed in the boundary surface 3. Furthermore, each time an opening is formed in the boundary surface 3, a planar polygon defining the projection surface of the opening may be placed in the second space 2.
  • Image data (texture) to be attached to the polygon model of the back wall BW may be given data. This image data represents another space (second space 2) existing behind the real world image, and therefore, the image data is preferably an image representing unreality, such as an image representing outer space, the sky, or an area in water, because it is possible to give the player a strange feeling as if an unreal space exists behind real space. For example, when the user is playing the game according to the present embodiment in a room, it is possible to give the user a feeling as if an unreal space exists outside the room. Alternatively, a texture of the back wall may represent landscapes that are not normally seen, such as a desert or a wilderness. As described above, the selection of a texture of the back wall BW allows the player to form a desired mental picture of another world hidden behind the real image represented as the background of the game world.
  • In addition, for example, if the image data is an image that can use repeated representations, such as an image of outer space, it is possible to reduce the data size of the image data (texture). Further, if the image data is such an image, it is possible to draw an image of the back wall BW without specifying the position where the back wall BW is to be drawn in the virtual space. This is because if an image can use repeated representations, the image is drawn without depending on the position (the repeated pattern can be represented on the entire polygon model).
  • It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the image data. In the present embodiment, it is assumed that an alpha value of “1” is defined for the image data.
  • <<Enemy Object Data Df>>
  • The enemy object data Df is data concerning an enemy object EO, and includes substance data Df1, silhouette data Df2, and opening shape data Df3.
  • The substance data Df1 is data for drawing the substance of the enemy object EO, and includes, for example, a polygon model defining a three-dimensional shape of the substance of the enemy object EO, and texture data to be mapped onto the polygon model. The texture data may be, for example, a photograph of the face of the user or the like captured by each capturing section of the game apparatus 10. It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the texture data. In the present embodiment, it is assumed that an alpha value of “1” is defined for the texture data.
  • The silhouette data Df2 is data for semi-transparently drawing in the real world image the shadow of the enemy object EO present in the second space 2, and includes a polygon model and texture data to be attached to the polygon model. For example, this silhouette model includes eight planar polygons, and is placed at the same position as that of the enemy object EO present in the second space 2. The silhouette model to which a texture is attached is drawn, for example, semi-transparently, in the real world image as viewed from the virtual world drawing camera, whereby it is possible to represent the shadow of the enemy object EO present in the second space 2. Further, the texture data of the silhouette data Df2 may be, for example, images of the enemy object EO as viewed from all directions as shown in FIGS. 27A and 27B (e.g., eight planar polygons). Furthermore, these images may each be an image obtained by simplifying the silhouette model of the enemy object EO. It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the texture data to be attached to the silhouette model. In the present embodiment, it is assumed that an alpha value of “1” is defined for the texture data in the shadow image portion, and an alpha value of “0” is defined in the portion where there is no shadow image (the peripheral portion).
  • The opening shape data Df3 is data concerning the shape of an opening generated in the boundary surface 3 when the enemy object EO moves between a first space 1 and the second space 2. In the present embodiment, the opening shape data Df3 is data for setting alpha values of “0” at the position in the opening determination data corresponding to the position in the boundary surface 3 where the opening is generated. For example, the opening shape data Df3 is texture data that corresponds to the shape of the opening to be generated and has alpha values of “0”. It should be noted that in the present embodiment alpha values of “0” are set in the opening determination data for the shape indicated by the opening shape data Df3, the shape formed around the portion corresponding to the position through which the enemy object EO has passed in the boundary surface 3. The image processing performed when the enemy object EO generates an opening in the boundary surface 3 will be described later.
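  • As an illustration of how the opening shape data Df3 might be applied, the following sketch writes alpha values of “0” into the opening determination data around the point where the enemy object EO passed through the boundary surface 3. The circular shape and all names are assumptions; the actual shape is whatever the opening shape data Df3 defines.

    // Punching an opening into the opening determination data (alpha texture).
    #include <cstddef>
    #include <vector>

    struct AlphaTexture {
        int width, height;
        std::vector<float> alpha;   // opening determination data (0 = open, 0.2 = unopened)
    };

    // Punch a circular opening of radius `r` (in texels) centered at (cx, cy).
    void punchOpening(AlphaTexture& mask, int cx, int cy, int r) {
        for (int y = cy - r; y <= cy + r; ++y) {
            for (int x = cx - r; x <= cx + r; ++x) {
                if (x < 0 || y < 0 || x >= mask.width || y >= mask.height) continue;
                int dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= r * r)
                    mask.alpha[static_cast<std::size_t>(y) * mask.width + x] = 0.0f;  // open
            }
        }
    }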
  • <<Bullet Object Data Dg>>
  • The bullet object data Dg is data concerning a bullet object BO, which is fired in accordance with an attack operation of the player. For example, the bullet object data Dg includes: a polygon model and bullet image (texture) data for drawing the bullet object BO; data indicating the placement direction and the placement position of the bullet object BO; and data indicating the moving velocity and the moving direction (e.g., a moving velocity vector) of the bullet object BO. It should be noted that in the present embodiment, the priority of drawing described later is defined by alpha values, and therefore, it is assumed that an alpha value is defined for the bullet image data. In the present embodiment, an alpha value of “1” is defined for the bullet image data.
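  • A minimal sketch of the bullet object data Dg and its per-frame update is shown below; the struct layout and names are assumptions, and only the placement position and the moving velocity vector mentioned above are modeled.

    // Bullet object BO: position advanced by its velocity once per processing cycle.
    struct Vec3 { float x, y, z; };

    struct BulletObject {
        Vec3 position;   // placement position of the bullet object BO in the virtual space
        Vec3 velocity;   // moving velocity vector (direction and speed)
    };

    // Advance the bullet by one frame (dt is the processing cycle, e.g., 1/60 s).
    void stepBullet(BulletObject& bo, float dt) {
        bo.position.x += bo.velocity.x * dt;
        bo.position.y += bo.velocity.y * dt;
        bo.position.z += bo.velocity.z * dt;
    }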
  • <<Score Data Dh>>
  • The score data Dh indicates the score of a game where the enemy object EO appears. For example, as described above, points are added to the score of the game when the user has vanquished the enemy object EO by an attack operation, and points are deducted from the score of the game when the enemy object EO has reached the position of the user (i.e., the placement position of the virtual camera in the virtual space).
  • <<Motion Data Di>>
  • The motion data Di indicates the motion of the game apparatus 10 in real space. As an example, the motion of the game apparatus 10 is calculated by the angular velocities detected by the angular velocity sensor 40.
  • <<Virtual Camera Data Dj>>
  • The virtual camera data Dj is data concerning a virtual camera set in the virtual space. In the first drawing method, for example, the virtual camera data Dj includes data indicating the placement direction and the placement position of a virtual camera in the virtual space. Further, in the second drawing method, for example, the virtual camera data Dj includes: data indicating the placement direction and the placement position of a real world drawing camera in the virtual space; and data indicating the placement direction and the placement position of a virtual world drawing camera in the virtual space. Then, for example, the data indicating the placement direction and the placement position of the virtual camera in the virtual space in the first drawing method, and the data indicating the placement direction and the placement position of the virtual world drawing camera in the virtual space in the second drawing method change in accordance with the motion of the game apparatus 10 (angular velocities) indicated by the motion data Di. Further, the virtual camera data Dj includes angle-of-view (drawing range) data of the virtual camera. With this, in accordance with changes in the positions and the orientations of the virtual camera in the first drawing method and the virtual world drawing camera in the second drawing method, a drawing range (drawing position) in the boundary surface 3 changes.
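  • The way the virtual camera follows the motion data Di can be sketched as below, assuming a simple yaw/pitch representation (the actual program may well use rotation matrices): the angular velocities are integrated over one processing cycle, which in turn shifts the drawing range in the boundary surface 3. All names are illustrative.

    // Turning the virtual camera (or virtual world drawing camera) from gyro data.
    struct CameraOrientation {
        float yaw;    // rotation about the vertical axis (radians)
        float pitch;  // rotation about the horizontal axis (radians)
    };

    void updateVirtualCamera(CameraOrientation& cam,
                             float angularVelY, float angularVelX, float dt) {
        cam.yaw   += angularVelY * dt;   // integrate y-axis angular velocity over one frame
        cam.pitch += angularVelX * dt;   // integrate x-axis angular velocity over one frame
    }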
  • <<Rendered Image Data Dk>>
  • The rendered image data Dk is data concerning an image rendered by processing described later.
  • In the first drawing method, the real world image is rendered as an object in the virtual space, and therefore, the rendered image data Dk includes rendered image data of the virtual space. The rendered image data of the virtual space is data indicating a virtual world image obtained by rendering with a perspective projection from the virtual camera the virtual space where the enemy object EO, the bullet object BO, the boundary surface 3 (screen object) to which the real world image is applied as a texture, and the back wall BW are placed.
  • On the other hand, in the second drawing method, the real world image and the virtual world image are rendered by virtual cameras different from each other, and therefore, the rendered image data Dk includes rendered image data of the real camera image and rendered image data of the virtual space. The rendered image data of the real camera image indicates the real world image obtained by rendering with a parallel projection from the real world drawing camera a planar polygon onto which a texture of the real camera image is mapped. The rendered image data of the virtual space indicates the virtual world image obtained by rendering with a perspective projection from the virtual world drawing camera the virtual space where the enemy object EO, the bullet object BO, the boundary surface 3, and the back wall BW are placed.
  • <<Display Image Data Dl>>
  • The display image data Dl indicates a display image to be displayed on the upper LCD 22. In the first drawing method, for example, a display image to be displayed on the upper LCD 22 is generated by a process of rendering the virtual space. Further, in the second drawing method, for example, a display image to be displayed on the upper LCD 22 is generated by combining the rendered image data of the real camera image with the rendered image data of the virtual space by a method described later.
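  • The combination step of the second drawing method could look roughly like the following sketch, in which the rendered image data of the virtual space is assumed to carry an alpha channel marking where virtual objects were drawn. The per-pixel blend and all names are illustrative assumptions, not the method described later in the specification.

    // Composing the display image from the two rendered images (assumed same resolution).
    #include <cstddef>
    #include <vector>

    struct Rgba { float r, g, b, a; };

    std::vector<Rgba> composeDisplayImage(const std::vector<Rgba>& realWorld,
                                          const std::vector<Rgba>& virtualSpace) {
        // Both images are assumed to have the resolution of the upper LCD 22.
        std::vector<Rgba> display(realWorld.size());
        for (std::size_t i = 0; i < display.size(); ++i) {
            const float a = virtualSpace[i].a;   // 0 where nothing virtual was drawn
            display[i].r = virtualSpace[i].r * a + realWorld[i].r * (1.0f - a);
            display[i].g = virtualSpace[i].g * a + realWorld[i].g * (1.0f - a);
            display[i].b = virtualSpace[i].b * a + realWorld[i].b * (1.0f - a);
            display[i].a = 1.0f;
        }
        return display;
    }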
  • <<Aiming Cursor Image Data Dm>>
  • The aiming cursor image data Dm is image data of an aiming cursor AL that is displayed on the upper LCD 22. The image data may be given data.
  • It should be noted that in the present embodiment, the data concerning each object (the boundary surface data Dd, the back wall image data De, the substance data Df1, the silhouette data Df2, and the bullet image data) includes priority information that defines the priority of drawing. In the present embodiment, the priority information uses alpha values. The relationship between the alpha values and the image processing will be described later.
  • In addition, in the present embodiment, the data concerning each object used for drawing includes data indicating whether or not a depth determination is to be made between the object and another. As described above, the data is set such that a depth determination is valid between each pair of: the enemy object EO; the bullet object BO; a semi-transparent enemy object; an effect object; and the screen object (boundary surface 3). Further, the data is set such that a depth determination is valid “between the shadow planar polygon (silhouette data Df2) and the enemy object EO (substance data Df1)”, “between the shadow planar polygon (silhouette data Df2) and the bullet object BO”, “between the shadow planar polygon (silhouette data Df2) and the semi-transparent enemy object”, and “between the shadow planar polygon (silhouette data Df2) and the effect object”. Furthermore, the data is set such that a depth determination is invalid between the shadow planar polygon (silhouette data Df2) and the screen object (boundary surface data Dd).
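  • The depth-determination settings listed above can be summarized by a small lookup, sketched below with assumed names; the shadow-versus-shadow case is not listed above and is assumed invalid here.

    // Pairwise validity of the depth determination between drawn objects.
    enum class ObjKind { EnemyEO, BulletBO, SemiTransparentEnemy, Effect, ScreenObject, ShadowSilhouette };

    bool depthTestValid(ObjKind a, ObjKind b) {
        const bool aShadow = (a == ObjKind::ShadowSilhouette);
        const bool bShadow = (b == ObjKind::ShadowSilhouette);
        if (aShadow && bShadow) return false;   // not listed in the embodiment; assumed invalid
        if (aShadow || bShadow) {
            // Shadow polygon vs. screen object (boundary surface 3): invalid; vs. others: valid.
            ObjKind other = aShadow ? b : a;
            return other != ObjKind::ScreenObject;
        }
        return true;  // EO, BO, semi-transparent enemy, effect, screen object: valid pairwise
    }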
  • <<Management Data Dn>>
  • The management data Dn is data for managing: data to be processed by the game apparatus 10, such as collected face images; data accumulated by the game apparatus 10; and the like. The management data Dn includes face image management information Dn1, a face image attribute aggregate table Dn2, and the like. The face image management information Dn1 stores: the destination for storing the data of each face image (e.g., the address in the main memory 32 or the like); the source of acquiring the face image (e.g., the inner capturing section 24 or the outer capturing section 23); the attributes of the face image (e.g., the gender, the age, and the like of the subject of the face image); information of other face images related to the face image; and the like. Further, the face image attribute aggregate table Dn2 stores by attribute the numbers of collections of the face images currently already collected by the user. For example, when the subjects of the collected face images are classified by gender, age, and the like, the collection achievement value of each category is stored. Examples of the data structures of the face image management information Dn1 and the face image attribute aggregate table Dn2 will be described later.
  • <<Saved Data Storage Area Do>>
  • The saved data storage area Do is an area where, when the information processing section 31 executes the image processing program such as a game program, data to be processed by the information processing section 31, the resulting data of the process of the information processing section 31, and the like are saved. As an example, in the present embodiment, data of a face image acquired by the game apparatus 10 through the inner capturing section 24, the outer capturing section 23, the wireless communication module 36, the local communication module 37, and the like is saved. In the present embodiment, for example, the information processing section 31 executes the first game in the state where a face image acquired by the game apparatus 10 is temporarily stored in the main memory 32. Then, when it is determined that the user has succeeded in the first game in accordance with an operation of the user, the information processing section 31 saves in the saved data storage area Do the face image temporarily stored in the main memory 32. The face image saved in the saved data storage area Do is available in the subsequent game processing or the like.
  • The structure of the saved data storage area Do is not particularly limited. For example, the saved data storage area Do may be placed in the same physical address space as that of a regular memory, so as to be accessible to the information processing section 31. Further, for example, the saved data storage area Do may allow in advance the information processing section 31 to secure (or allocate) a predetermined block unit or a predetermined page unit at a necessary time. Furthermore, for example, the saved data storage area Do may have a structure where connections are made by management information, such as pointers connecting blocks, as in the file system of a computer.
  • In addition, the saved data storage area Do may, for example, secure an individual area for each program executed by the game apparatus 10. Accordingly, when a game program has been loaded into the main memory 32, the information processing section 31 may access the saved data storage area Do (input and output data) based on management information or the like of the game program.
  • In addition, the saved data storage area Do of a program may be accessible to the information processing section 31 that is executing another program. With this, data processed in the program may be delivered to said another program. For example, the information processing section 31 that is executing the second game may create a character object by reading data of a face image saved in the saved data storage area Do as a result of the execution of the first game described later. It should be noted that the saved data storage area Do is an example of a second storage area.
  • <Structures of Various Data>
  • With reference to FIGS. 12 and 13, descriptions are given of examples of data structures for managing face images in the game apparatus 10.
  • FIG. 12 is an example of the data structure of the face image management information Dn1 for managing face images saved in the game apparatus 10. The game apparatus 10 stores data of saved face images in the face image management information Dn1, and thereby can display a list of the face images on the screen of the upper LCD 22 in the form of, for example, FIGS. 7 and 8. The face image management information Dn1 is, for example, created as information in which a record is prepared for each face image. The face image management information Dn1 is, for example, saved in the data storage internal memory 35 or the data storage external memory 46. In FIG. 12, the elements of a record are illustrated by a record 1. Further, in FIG. 12, details of a record 2 and thereafter are not shown. Furthermore, although not shown in the figures, the information processing section 31 may, for example, save the total number of records of the face image management information Dn1, i.e., the total number of acquired face images, in the data storage internal memory 35, the data storage external memory 46, or the like.
  • In the example of FIG. 12, the face image management information Dn1 includes, for example, face image identification information, the address of face image data, the source of acquiring the face image, the estimation of gender, the estimation of age, and pieces of related face image information 1 through N. FIG. 12, however, is an example of the face image management information Dn1, and this does not mean that face image management information is limited to the elements shown in FIG. 12.
  • The face image identification information is information uniquely identifying the saved face image. The face image identification information may be, for example, a serial number.
  • The address of face image data is, for example, the address where data of the face image is stored in the data storage internal memory 35 or the data storage external memory 46. However, for example, when the data of the face image is stored in a storage medium in which a file system is constructed by an OS (operating system), a path name, a file name, and the like in the file system may be set as the address of face image data.
  • The source of acquiring the face image is, for example, information identifying the capturing device that has acquired the face image. As the source of acquiring the face image, for example, information identifying the inner capturing section 24, the left outer capturing section 23 a, or the right outer capturing section 23 b is set. However, when both the left outer capturing section 23 a and the right outer capturing section 23 b have been used to acquire the face image, information indicating both capturing sections is set. Further, for example, when the face image has been acquired by a capturing device other than the inner capturing section 24, the left outer capturing section 23 a, and the right outer capturing section 23 b, e.g., by a capturing device provided outside the game apparatus 10, information indicating such a state (e.g., “other”) is set. “When the face image has been acquired by a capturing device provided outside the game apparatus 10” is, for example, the case where an image captured by another game apparatus 10 similar to the game apparatus 10 has been acquired through the external memory interface 33, the wireless communication module 36, the local communication module 37, or the like. Furthermore, examples of such a case also include the cases: where an image obtained by a camera not included in the game apparatus 10 has been acquired; where an image obtained by a scanner has been acquired; and where an image such as a video image obtained from a video device has been acquired, each image obtained through the external memory interface 33, the wireless communication module 36, or the like.
  • The estimation of gender is information indicating whether the person represented by the face image is male or female. The estimation of gender may be, for example, made by a process shown in another embodiment described later. The estimation of age is information indicating the age of the person represented by the face image. The estimation of age may be, for example, made by a process shown in another embodiment described later.
  • Each of the pieces of related image identification information 1 through N is information indicating another face image related to the face image. For example, as the pieces of related image identification information 1 through N, pieces of face image identification information of up to N related other face images may be set. The related other face images may be, for example, specified by an operation of the user through a GUI. For example, when a face image has been newly acquired, the information processing section 31 may detect, in the state where the user has operated the operation buttons 14 or the like to cause one or more face images related to the acquired face image to enter the state of being selected, an operation on the GUI of giving an instruction to set related images. Alternatively, the acquired face image may be classified by categories prepared by the game apparatus 10, such as themselves, friends, colleagues, and strangers. Then, face images belonging to the same category may be linked together using the pieces of related image identification information 1 through N. However, when face images are classified by the categories prepared by the game apparatus 10, an element “classification of face images” may be simply prepared, instead of the preparation of the entry of the pieces of related face image identification information 1 through N, so that themselves, friends, colleagues, strangers, and the like may be set. Further, in FIG. 12, a fixed number is used for the pieces of related image identification information 1 through N. Alternatively, the number N may be a variable number. In this case, the number N that is already set may be held in the face image management information Dn1. Furthermore, for example, in the face image management information Dn1, the records of face images related to each other may be connected together by chains of pointers.
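  • One possible in-memory layout of a record of the face image management information Dn1 is sketched below. Field types, the enumeration of acquisition sources, and the fixed count N are assumptions for illustration; the actual layout is whatever the game program defines.

    // Hedged sketch of one record of FIG. 12's face image management information Dn1.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <string>

    enum class CaptureSource { Inner24, LeftOuter23a, RightOuter23b, BothOuter23, Other };

    constexpr std::size_t kMaxRelated = 8;  // "N" related face images (assumed fixed here)

    struct FaceImageRecord {
        std::uint32_t id;              // face image identification information (e.g., a serial number)
        std::string   dataAddress;     // address or path of the face image data
        CaptureSource source;          // source of acquiring the face image
        char          estimatedGender; // 'M' or 'F' (estimation of gender)
        int           estimatedAge;    // estimation of age
        std::array<std::uint32_t, kMaxRelated> relatedIds{};  // related image identification information 1..N
        std::size_t   relatedCount = 0;                       // how many related entries are set
    };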
  • FIG. 13 shows the data structure of the face image attribute aggregate table Dn2. The face image attribute aggregate table Dn2 is a table where already acquired face images are classified by attribute, and the numbers of the classified images are aggregated. Hereinafter, the face image attribute aggregate table Dn2 will also be referred to simply as an “aggregate table”. The information processing section 31 saves the aggregate table shown in FIG. 13 in, for example, the data storage internal memory 35 or the data storage external memory 46. In the example of FIG. 13, the aggregate table stores the number of acquired face images in each row defined by the combination of gender (male or female) and an age bracket (under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over). That is, each row of the table shown in FIG. 13 includes elements such as gender, age, and the number of acquired face images, where the number of acquired face images is the aggregated count of the already acquired face images that fall into that row's classification. The categories and the attributes of face images, however, are not limited to the genders or age brackets shown in FIG. 13.
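  • A minimal sketch of the aggregate table is shown below: a count per (gender, age bracket) pair that is incremented whenever a face image is acquired. The container and the bracket function are assumptions; only the categories described above are modeled.

    // Hedged sketch of the face image attribute aggregate table Dn2 of FIG. 13.
    #include <map>
    #include <utility>

    int ageBracket(int age) {              // 0 = under 10, 1 = 10's, ..., 7 = 70 or over
        if (age < 10) return 0;
        if (age >= 70) return 7;
        return age / 10;
    }

    struct AggregateTable {
        // key: (gender 'M'/'F', age bracket index) -> number of acquired face images
        std::map<std::pair<char, int>, int> counts;

        void recordAcquisition(char gender, int age) {
            ++counts[{gender, ageBracket(age)}];
        }
    };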
  • <Example of Process Flow>
  • With reference to FIGS. 14 through 19, a description is given of an example of the operation of the image processing program executed by the information processing section 31 of the game apparatus 10. First, when the power (the power button 14F) to the game apparatus 10 has been turned on, the CPU 311 executes a boot program (not shown). This causes the programs stored in the built-in memory, the external memory 45, or the data storage external memory 46, to be expanded in the main memory 32 into the form of being executable by the CPU 311. Here, “the form of being executable by the CPU 311” means, for example, the form where machine instructions for the CPU 311 are written in a predetermined order and placed at appropriate addresses in the main memory 32, so as to be readable by a control section that processes the machine instructions for the CPU 311. The expansion into the form of being executable is also referred to simply as “loading”. It should be noted that in FIGS. 14 through 19, processes not directly related to the first embodiment are not described.
  • FIG. 14 is a flow chart showing an example of the operation of the information processing section 31. After a series of processes performed after the turning on of the power, when the information processing section 31 has detected an operation of the user, for example, an operation through the touch panel 13 or the operation buttons 14 on a graphical user interface (hereinafter a “GUI”) displayed on the lower LCD 12, such as a graphics object, e.g., a menu or an icon, the information processing section 31 performs the process of FIG. 14. Hereinafter, an operation of the user on the GUI through the touch panel 13 or the operation buttons 14 is referred to simply as an “operation on the GUI”. In the example of FIG. 14, the information processing section 31 waits for an operation of the user (step 8). Hereinafter, “steps” are abbreviated as “S” in the drawings.
  • Next, when having detected an operation of the user, the information processing section 31 performs the processes of step 9 and thereafter. For example, when the operation of the user on the GUI is an instruction to “acquire a face image with the inner capturing section 24” (“Yes” in step 9), the information processing section 31 performs a face image acquisition process 1 (step 10). Here, the instruction “to acquire a face image with the inner capturing section 24” is, for example, an instruction for acquisition using the inner capturing section 24, in accordance with an operation of the user on the GUI or the like. Subsequently, the information processing section 31 proceeds to step 19. The face image acquisition process 1 will be described later with reference to FIG. 15. On the other hand, when the operation of the user on the GUI is not an instruction “to acquire a face image with the inner capturing section 24” (“No” in step 9), the information processing section 31 proceeds to step 11.
  • Next, for example, when the operation of the user on the GUI is an instruction to “acquire a face image with the outer capturing section 23” (“Yes” in step 11), the information processing section 31 performs a face image acquisition process 2 (step 12). Subsequently, the information processing section 31 proceeds to step 19. Here, the instruction “to acquire a face image with the outer capturing section 23” is, for example, an instruction for acquisition using the outer capturing section 23 by an operation of the user on the GUI or the like. The face image acquisition process 2 will be described later with reference to FIG. 16. On the other hand, when the operation of the user on the GUI is not an instruction “to acquire a face image with the outer capturing section 23” (“No” in step 11), the information processing section 31 proceeds to step 13.
  • Next, when the operation of the user on the GUI is an instruction to display a list of collected face images (“Yes” in step 13), the information processing section 31 performs a list display process (step 14). Subsequently, the information processing section 31 proceeds to step 19. The list display process will be described later with reference to FIG. 17. On the other hand, when the operation of the user on the GUI is not an instruction to display a list of collected face images (“No” in step 13), the information processing section 31 proceeds to step 15.
  • When the operation of the user on the GUI is an instruction to determine a cast (“Yes” in step 15), the information processing section 31 performs a cast determination process (step 16). Subsequently, the information processing section 31 proceeds to step 19. The cast determination process will be described later with reference to FIG. 18. On the other hand, when the operation of the user on the GUI is not an instruction to determine a cast (“No” in step 15), the information processing section 31 proceeds to step 17.
  • When the operation of the user is an instruction to execute a game (“Yes” in step 17), the information processing section 31 executes the game (step 18). The process of step 18 is an example of a second game processing step. The game apparatus 10 performs the game processing of the game where various character objects, such as enemy objects EO, created in the cast determination process in step 16 appear. The type of the game is not limited. For example, the game executed in step 18 may be a game where the user fights with enemy objects EO created in the cast determination process. In this case, for example, the user fights with enemy objects EO having face images collected in the face image acquisition process 1 in step 10 and in the face image acquisition process 2 in step 12, and displayed in the list display process in step 14. Further, for example, this game may be an adventure game where a player object representing the user moves forward by overcoming various hurdles, obstacles, and the like. Alternatively, examples of the game may include: a war simulation where historical characters appear; a management simulation where a player object appears; and a driving simulation of a vehicle or the like, where a player object appears. Yet alternatively, the game may be a novel game modeled on the original of a novel, where character objects appear. Yet alternatively, the game may be one termed a role-playing game (RPG) where the user controls a main character and characters that appear in a story, to play their roles. Yet alternatively, the game may be one where the user simply has some training with the assistance of an agent that appears.
  • To the character objects that appear in such game processing, face images collected by the user having succeeded in the first game in step 10 are attached by texture mapping or the like. Accordingly, in the game executed in step 18, the character objects including the face images collected by the user themselves appear. Thus, using images of portions representing people and living things, such as face images, the user can execute a game where the real-world relationships with the people (or the living things) of the face images are reflected on the various character objects. For example, it is possible to perform game processing including emotions, such as affection, friendliness, favorable impression, and hatred.
  • On the other hand, when the operation of the user is not an instruction to execute a game (“No” in step 17), the information processing section 31 proceeds to step 19.
  • Then, the information processing section 31 determines whether or not the process is to be ended. When having detected through the GUI an instruction to end the process, the information processing section 31 ends the process of FIG. 14. On the other hand, when having detected through the GUI an instruction not to end the process (e.g., an instruction to retry the process), the information processing section 31 performs a face image management assistance process 1 (step 1A). The face image management assistance process 1 is, for example, a process of, based on already acquired face images, providing the user with information about the attributes and the like of a face image to be acquired next, so as to assist the user in acquiring a face image. A detailed process of the face image management assistance process 1 will be described later with reference to FIG. 19A. Subsequently, the information processing section 31 returns to step 8.
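  • The flow of FIG. 14 described above can be summarized by the following dispatch-loop sketch. The function names mirror the steps but are placeholders (declarations only); the actual branching and GUI handling are as described with reference to the flow chart.

    // Hedged sketch of the dispatch loop of FIG. 14 (steps 8 through 1A).
    enum class UserOp { AcquireInner, AcquireOuter, ShowList, DetermineCast, ExecuteGame, Other };

    UserOp waitForOperation();              // step 8  (assumed to block on the GUI)
    void faceImageAcquisitionProcess1();    // step 10
    void faceImageAcquisitionProcess2();    // step 12
    void listDisplayProcess();              // step 14
    void castDeterminationProcess();        // step 16
    void executeGame();                     // step 18 (second game processing step)
    bool endRequested();                    // step 19 (assumed to query the GUI)
    void faceImageManagementAssistance1();  // step 1A

    void mainMenuLoop() {
        for (;;) {
            switch (waitForOperation()) {                                            // step 8
                case UserOp::AcquireInner:  faceImageAcquisitionProcess1(); break;   // steps 9-10
                case UserOp::AcquireOuter:  faceImageAcquisitionProcess2(); break;   // steps 11-12
                case UserOp::ShowList:      listDisplayProcess();           break;   // steps 13-14
                case UserOp::DetermineCast: castDeterminationProcess();     break;   // steps 15-16
                case UserOp::ExecuteGame:   executeGame();                  break;   // steps 17-18
                case UserOp::Other:         break;
            }
            if (endRequested()) return;          // step 19: end the process of FIG. 14
            faceImageManagementAssistance1();    // step 1A, then back to step 8
        }
    }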
  • FIG. 15 is a flow chart showing an example of a detailed process of the face image acquisition process 1 (step 10 of FIG. 14). In this process, the information processing section 31 first performs a face image management assistance process 2 (step 100). The face image management assistance process 2 is, for example, a process of, based on already acquired face images, providing the user with information about the attributes and the like of a face image to be acquired next, so as to assist the user in acquiring a face image. A detailed process of the face image management assistance process 2 will be described later with reference to FIG. 19B.
  • Next, the information processing section 31 performs a face image acquisition process (step 101). The CPU 311 of the information processing section 31 performs the process of step 101 as an example of image acquisition means.
  • The information processing section 31 obtains images captured by, for example, the inner capturing section 24, the left outer capturing section 23 a, and/or the right outer capturing section 23 b in predetermined cycles, and displays the obtained images on the upper LCD 22. In this case, the display cycle may be the same as the unit of time of the processing of the game apparatus 10 (e.g., 1/60 seconds), or may be shorter than this unit of time. Immediately after the power to the game apparatus 10 has been turned on and the image processing program has been loaded, or in an initial state immediately after the process of FIG. 14 has been started, the information processing section 31 displays, for example, an image from the inner capturing section 24 on the upper LCD 22. It should be noted that on the lower LCD 12, for example, a capturing section selection GUI is prepared so as to select at least one of the inner capturing section 24, the left outer capturing section 23 a, and the right outer capturing section 23 b (including the case where both the left outer capturing section 23 a and the right outer capturing section 23 b are used). In the process of FIG. 15, it is assumed that the user can operate the capturing section selection GUI to freely switch the capturing sections to be used. Hereinafter, the inner capturing section 24, the left outer capturing section 23 a, and/or the right outer capturing section 23 b that are used for capturing, whether due to the initial state or due to an operation on the capturing section selection GUI, are referred to simply as a “capturing section”.
  • For example, when the inner capturing section 24 is used, if the user turns their face toward the inner surface 21B of the upper housing 21 in the state where the upper housing 21 is open, the user's face is displayed on the upper LCD 22. Then, when the user has pressed, for example, the R button 14H (or the L button 14G), the information processing section 31 acquires, as data, an image from the inner capturing section 24 that is displayed on the upper LCD 22, and temporarily stores the acquired data in the main memory 32. At this time, the data of the image is only present in the main memory 32, and is not saved in the saved data storage area Do described later. The data present in the main memory 32 is only used in the game in step 106 described later, and as will be described later, is discarded when the game has not been successful and has been ended. The main memory 32 is an example of a first data storage area.
  • It should be noted that in the processing according to the present embodiment, the face image acquired in step 101 is texture-mapped onto the facial surface portion or the like of an enemy object EO, and the game is executed. Accordingly, in the process of step 101, it is preferable that the face image should be acquired by clipping, in particular, the face portion from the image acquired from the capturing section. In the present embodiment, for example, it is assumed that the following processing is performed. (1) The information processing section 31 detects the contour of the face in the acquired image. The contour of the face is estimated from the distance between the eyes, and the positional relationships between the eyes and the mouth. That is, the information processing section 31 recognizes the boundary line between the contour of the face and the background, based on the arrangement of the eyes and the mouth, using the dimensions of a standard face. The boundary line can be acquired by combining, for example, differential processing (contour enhancement) and average processing (smoothing calculation), which are normal image processing. It should be noted that the method of detecting the contour of the face may be another known method. (2) The information processing section 31 fits the obtained face image to the dimensions of the facial surface portion of the head shape of the enemy object EO by enlarging or reducing the obtained face image. This process enables the game apparatus 10 to acquire face images that vary to some extent in dimensions and still attach the acquired face images to enemy objects EO.
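  • By way of illustration only, the following sketch shows one possible way of fitting a clipped face image to the dimensions of the facial surface portion of an enemy object EO, as in item (2) above. The image structure, the function name, and the nearest-neighbour resampling are assumptions made for this example and are not part of the embodiment itself.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical RGBA image buffer; not part of the embodiment described above.
struct Image {
    int width = 0;
    int height = 0;
    std::vector<uint32_t> pixels;  // width * height packed RGBA values
};

// Scales a clipped face image to the texture dimensions expected by the
// facial surface portion of an enemy object EO (nearest-neighbour resampling).
Image FitFaceToEnemyTexture(const Image& face, int targetW, int targetH) {
    Image out;
    out.width = targetW;
    out.height = targetH;
    out.pixels.resize(static_cast<size_t>(targetW) * targetH);
    for (int y = 0; y < targetH; ++y) {
        for (int x = 0; x < targetW; ++x) {
            // Map each target pixel back into the source face image.
            int srcX = x * face.width / targetW;
            int srcY = y * face.height / targetH;
            out.pixels[static_cast<size_t>(y) * targetW + x] =
                face.pixels[static_cast<size_t>(srcY) * face.width + srcX];
        }
    }
    return out;
}
```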
  • In the game apparatus 10 according to the present embodiment, however, the process of acquiring a face image is not limited to the procedure described above. For example, when a face image is acquired, a face image having target dimensions may be acquired from the capturing section, instead of the acquisition of an image from a given distance and in given dimensions. For example, a face image may be acquired on the condition that a distance from a subject is established such that the distance between the eyes of the face image obtained from the subject approximates a predetermined number of pixels. For example, the information processing section 31 may derive the distance from the subject. Alternatively, on the condition that a distance from a subject is established, the information processing section 31 may, for example, lead a person who is the subject, or the user who is the capturer, to adjust the angle of the subject's face with respect to the direction of the optical axis of the capturing section. Further, instead of the user pressing, for example, the R button 14H (or the L button 14G) to save the image, when it is determined that the adjustment of the distance from the subject and the adjustment of the angle of the face with respect to the direction of the optical axis of the capturing section are completed, the information processing section 31 may save the image. For example, the information processing section 31 may display marks representing target positions for positioning the eyes and the mouth, in superposition with the face image of the subject on the upper LCD 22. Then, when the positions of the eyes and the mouth of the subject that have been acquired from the capturing section have fallen within predetermined tolerance ranges from the marks of the target positions corresponding to the eyes and the mouth, the information processing section 31 may save the image in a memory.
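  • As a non-limiting sketch of the automatic saving condition described above, the following function checks whether the detected positions of the eyes and the mouth have fallen within predetermined tolerance ranges from the marks of the target positions. The three-point representation and all names are hypothetical.

```cpp
#include <cmath>

// Hypothetical 2D screen coordinates; names are illustrative only.
struct Point { float x; float y; };

// Returns true when the detected eyes and mouth all fall within a pixel
// tolerance of the target marks displayed on the upper LCD, i.e. when the
// image may be saved without waiting for a button press.
bool WithinTargetTolerance(const Point detected[3],   // left eye, right eye, mouth
                           const Point targets[3],    // corresponding target marks
                           float tolerancePx) {
    for (int i = 0; i < 3; ++i) {
        float dx = detected[i].x - targets[i].x;
        float dy = detected[i].y - targets[i].y;
        if (std::sqrt(dx * dx + dy * dy) > tolerancePx) {
            return false;  // at least one feature is still outside its mark
        }
    }
    return true;
}
```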
  • It should be noted that when the face image is acquired in step 101, the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn2 shown in FIG. 13. “The corresponding row” means, for example, the row of the attributes corresponding to the gender and the age that are estimated in step 1002 of FIG. 19B described later.
  • Next, the information processing section 31 displays the image acquired in the process of step 101 on, for example, the upper LCD 22 (step 102).
  • Next, the information processing section 31 performs a process of selecting an enemy object EO (step 103). Here, the information processing section 31 prompts the user to select the head shape of an enemy object EO. For example, the information processing section 31 may display the list of head shapes as shown in FIG. 9, and may receive the selection of the user through the GUI. Then, the information processing section 31 sets the acquired face image as a texture of the enemy object EO (step 104), and generates the enemy object EO (step 105). The enemy object generated in step 105 is an example of a first character object. The information processing section 31 performs the process of step 105 as an example of means for creating a character object.
  • Then, the information processing section 31 executes a game using the generated enemy object EO (step 106). The CPU 311 of the information processing section 31 performs the process of step 106 as an example of first game processing means. Here, the type of the game is not limited. The game is, for example, a game simulating a battle with the enemy object EO. Alternatively, the game may be, for example, a game where the user competes with the enemy object EO in score. Then, after the execution of the game, the information processing section 31 determines whether or not the user has succeeded in the game (step 107). The information processing section 31 performs the process of step 107 as an example of means for determining a success or a failure. A “success” is, for example, the case where the user has defeated the enemy object EO in the game where the user fights with the enemy object EO. Alternatively, a “success” is, for example, the case where the user has scored more points than the enemy object EO in the game where the user competes with the enemy object EO in score. Yet alternatively, a “success” may be, for example, the case where the user has reached a goal in a game where the user overcomes obstacles and the like set by the enemy object EO.
  • It should be noted that in the game executed in step 106, in addition to a character object including the face image acquired in step 101, a character object using a face image already collected in the past may be caused to appear. For example, when a face image already collected in the past is attached to an enemy object EO or a friend object and appears, the user can play a game on which human relationships in the real world and the like are reflected.
  • When the user has succeeded in the game, the information processing section 31 saves, in the saved data storage area Do of the game, data of the face image present in the main memory 32 that has been acquired in step 101 described above, in addition to data of face images that have been saved up to the current time (step 109). The CPU 311 of the information processing section 31 performs the process of step 109 as an example of means for saving. The saved data storage area Do of the game is a storage area where the information processing section 31 that executes the game can perform writing and reading, the storage area constructed in, for example, the main memory 32, the data storage internal memory 35, or the data storage external memory 46. Data of a new face image is stored in the saved data storage area Do of the game, whereby the information processing section 31 that executes the game can display on the screen of the upper LCD 22 the data of the new face image by adding the data to, for example, the list of the face images described with reference to FIGS. 7 and 8. As described above, based on the process of FIG. 15, the user executes the game (first game) in order to save the face image acquired in step 101 in the saved data storage area Do of the game. In the game, for example, a character object using a face image that has been saved in the saved data storage area Do by the user up to the current time is caused to appear, whereby the user who executes the game with the game apparatus 10 can collect a new face image, and add the new face image to the saved data storage area Do, while reflecting human relationships in the real world and the like.
  • At this time, to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates the face image management information Dn1 described with reference to FIG. 12, and saves the face image management information Dn1 in the data storage internal memory 35 or the data storage external memory 46. That is, the information processing section 31 newly generates face image identification information, and sets the face image identification information as a record of the face image management information Dn1. Further, the information processing section 31 sets the address and the like of the face image newly saved in the saved data storage area Do of the game, as the address of face image data. Furthermore, the information processing section 31 sets the source of acquiring the face image, the estimation of gender, the estimation of age, pieces of related face image identification information 1 through N, and the like.
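  • For illustration, the items set in the face image management information Dn1 as described above can be pictured as one record of the following form. The type and field names are hypothetical placeholders; the actual layout of Dn1 is the one described with reference to FIG. 12.

```cpp
#include <cstdint>
#include <vector>

// Illustrative in-memory layout of one record of the face image management
// information Dn1; each member mirrors an item listed in the description above,
// but the names and types are assumptions for this sketch.
enum class CaptureSource { InnerCapturingSection, OuterCapturingSection };
enum class GenderEstimate { Male, Female, Unknown };

struct FaceImageRecord {
    uint32_t       faceImageId;           // newly generated face image identification information
    uint32_t       faceImageDataAddress;  // address of the face image data in the saved data storage area Do
    CaptureSource  source;                // source of acquiring the face image
    GenderEstimate gender;                // estimation of gender
    int            estimatedAgeBracket;   // estimation of age (e.g., 10 for "10's")
    std::vector<uint32_t> relatedFaceImageIds;  // related face image identification information 1 through N
};
```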
  • In addition, the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn2 described with reference to FIG. 13. That is, the information processing section 31 may newly estimate the gender, the age, and the like of the face image added to the saved data storage area Do, and may reflect the estimations on the aggregate result of the face image attribute aggregate table Dn2.
  • In addition, the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do of the game, or transfer the data through the wireless communication module 36. Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14.
  • On the other hand, when the user has not succeeded in the game, the information processing section 31 inquires of the user as to whether or not to retry the game (step 108). For example, the information processing section 31 displays on the upper LCD 22 a message indicating an inquiry about whether or not to retry the game, and receives the selection of the user in accordance with an operation on the GUI provided on the lower LCD 12 (e.g., a positive icon, a negative icon, or a menu) through the touch panel 13, an operation through the operation buttons 14, or the like. When the user has given an instruction to retry the game, the information processing section 31 returns to step 106. On the other hand, when the user has not given an instruction to retry the game, the information processing section 31 discards the face image acquired in step 101 (step 110), and ends the process. It should be noted that when the game has not been successful, the information processing section 31 may discard the face image acquired in step 101, without waiting for an instruction to retry the game in step 108.
  • With reference to FIG. 16, a description is given below of an example of a detailed process of the face image acquisition process 2 (step 12 of FIG. 14). In this process, a description is given of an example of the process of, in a face image acquisition process, leading the user to acquire a face image with the inner capturing section 24 prior to the two capturing sections of the outer capturing section 23 (the left outer capturing section 23 a and the right outer capturing section 23 b). The reason why the game apparatus 10 causes the user to first acquire a face image with the inner capturing section 24 is that the acquisition of a face image with the inner capturing section 24 clarifies, for example, the owner and the like of the game apparatus 10, so as to increase the possibility of restricting the use of the game apparatus 10 by another person.
  • With reference to FIG. 16, a description is given of an example of the operation of the image processing program executed by the information processing section 31 of the game apparatus 10. In this process, the information processing section 31 first determines whether or not a face image has already been acquired by the inner capturing section 24 (step 121). The determination of whether or not a face image has already been acquired by the inner capturing section 24 may be made, for example, with reference to the face image management information Dn1 shown in FIG. 12 and based on whether or not there is a record where the inner capturing section 24 is set as the source of acquiring the face image. Alternatively, for example, when the game apparatus 10 has the function of registering in the game apparatus 10 the face image of the owner of the game apparatus 10 acquired by the inner capturing section 24, the determination may be made based on whether or not the face image of the owner has already been registered.
  • When a face image has not already been acquired by the inner capturing section 24 (“No” in step 121), the information processing section 31 prompts the user to first perform a face image acquisition process with the inner capturing section 24 (step 124), and ends the process of this subroutine. More specifically, for example, the information processing section 31 displays on the upper LCD 22 a message indicating “In the game apparatus 10, if a face image has not already been acquired by the inner capturing section 24, a face image acquisition process cannot be performed with the outer capturing section 23”. Alternatively, the information processing section 31 may request the user to first register the face image of the owner.
  • On the other hand, when a face image has already been acquired by the inner capturing section 24 (“Yes” in step 121), the information processing section 31 performs a face image management assistance process 3 (step 122). The face image management assistance process 3 will be described later with reference to FIG. 19C. Then, the information processing section 31 performs a face image acquisition process with the outer capturing section 23 (step 123). For example, when the outer capturing section 23 is used, if the user directs the outer surface 21D of the upper housing 21 to another person's face in the state where the upper housing 21 is open, said another person's face is displayed on the upper LCD 22. Then, when the user has pressed, for example, the R button 14H (or the L button 14G), the information processing section 31 acquires, as data, an image from the outer capturing section 23 that is displayed on the upper LCD 22, and temporarily stores the acquired data in the main memory 32. At this time, the data of the image is only present in the main memory 32, and is not saved in the saved data storage area Do. The data present in the main memory 32 is only used in the game in step 129 described later, and as will be described later, is discarded when the game has not been successful and has been ended.
  • It should be noted that in the processing according to the present embodiment, the face image acquired in step 123 can also be texture-mapped onto the facial surface portion or the like of an enemy object EO, and the game can be executed. Accordingly, in the process of step 123, it is preferable that the face image should be acquired by clipping particularly the face portion from the image acquired from the capturing section, by a process similar to that of step 101 described above. Further, also when a face image is acquired in step 123, the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn2 shown in FIG. 13. “The corresponding row” means, for example, the row of the attributes corresponding to the gender and the age that are estimated in step 1202 of FIG. 19C described later.
  • Next, the information processing section 31 displays the image acquired in the process of step 123 on, for example, the upper LCD 22 (step 125).
  • Next, the information processing section 31 performs a process of selecting an enemy object EO (step 126). Here, the information processing section 31 prompts the user to select the head shape of an enemy object EO. For example, the information processing section 31 may display the list of head shapes as shown in FIG. 9, and may receive the selection of the user through the GUI. Then, the information processing section 31 sets the acquired face image as a texture of the enemy object EO (step 127), and generates the enemy object EO (step 128). The enemy object generated in step 128 is also an example of the first character object. The information processing section 31 performs the process of step 128 as an example of the means for creating a character object.
  • Then, the information processing section 31 executes a game using the generated enemy object EO (step 129). The CPU 311 of the information processing section 31 performs the process of step 129 as an example of the first game processing means. The game executed in step 129 is similar to that of step 106. That is, the type of the game executed in step 129 varies, and possible examples of the game may include: a game simulating a battle with the enemy object EO; and a game where the user competes with the enemy object EO in score. Then, after the execution of the game, the information processing section 31 determines whether or not the user has succeeded in the game (step 130). The information processing section 31 performs the process of step 130 as an example of the means for determining a success or a failure. A “success” is, for example, the case where the user has defeated the enemy object EO in the game where the user fights with the enemy object EO. Alternatively, a “success” is, for example, the case where the user has scored more points than the enemy object EO in the game where the user competes with the enemy object EO in score. Yet alternatively, a “success” may be, for example, the case where the user has reached a goal in a game where the user overcomes obstacles and the like set by the enemy object EO.
  • It should be noted that in the game executed in step 129, in addition to a character object including the face image acquired in step 123, a character object using a face image already collected in the past may be caused to appear. For example, when a face image already collected in the past is attached to an enemy object EO or a friend object and appears, the user can play a game on which human relationships in the real world and the like are reflected.
  • When the user has succeeded in the game, the information processing section 31 saves, in the saved data storage area Do of the game, data of the face image present in the main memory 32 that has been acquired in step 123 described above, in addition to data of face images that have been saved up to the current time (step 132), and ends the process of the subroutine. The CPU 311 of the information processing section 31 performs the process of step 132 as an example of the means for saving. Data of a new face image is stored in the saved data storage area Do of the game, whereby the information processing section 31 that executes the game can display on the screen of the upper LCD 22 the data of the new face image by adding the data to, for example, the list of the face images described with reference to FIGS. 7 and 8. As described above, based on the process of FIG. 16, the user executes the game (first game) in order to save the face image acquired in step 123 in the saved data storage area Do of the game. In the game, for example, a character object using a face image that has been saved in the saved data storage area Do by the user up to the current time is caused to appear, whereby the user who executes the game with the game apparatus 10 can collect a new face image, and add the new face image to the saved data storage area Do, while reflecting human relationships in the real world and the like.
  • At this time, as in the face image acquisition process 1 in step 10, to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates the face image management information Dn1 described with reference to FIG. 12, and saves the face image management information Dn1 in the data storage internal memory 35 or the data storage external memory 46. That is, the information processing section 31 newly generates face image identification information, and sets the face image identification information as a record of the face image management information Dn1. Further, the information processing section 31 sets the address and the like of the face image newly saved in the saved data storage area Do of the game, as the address of face image data. Furthermore, the information processing section 31 sets the source of acquiring the face image, the estimation of gender, the estimation of age, pieces of related face image identification information 1 through N, and the like. In addition, the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn2 described with reference to FIG. 13. That is, the information processing section 31 may newly estimate the gender, the age, and the like of the face image added to the saved data storage area Do, and may reflect the estimations on the aggregate result of the face image attribute aggregate table Dn2. In addition, the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do of the game, or transfer the data through the wireless communication module 36. Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14.
  • On the other hand, when the user has not succeeded in the game, the information processing section 31 inquires of the user as to whether or not to retry the game (step 131). For example, the information processing section 31 displays on the upper LCD 22 a message indicating an inquiry about whether or not to retry the game, and receives the selection of the user in accordance with an operation on the GUI provided on the lower LCD 12 (e.g., a positive icon, a negative icon, or a menu) through the touch panel 13, an operation through the operation buttons 14, or the like. When the user has given an instruction to retry the game, the information processing section 31 returns to step 129. On the other hand, when the user has not given an instruction to retry the game, the information processing section 31 discards the face image acquired in step 123 (step 133), and ends the process of the subroutine. It should be noted that when the game has not been successful, the information processing section 31 may discard the face image acquired in step 123, without waiting for an instruction to retry the game in step 131.
  • FIG. 17 is a flow chart showing an example of a detailed process of the list display process (step 14 of FIG. 14). In this process, the information processing section 31 first reads already registered face images from the saved data storage area Do of the data storage internal memory 35 or the data storage external memory 46, and displays the already registered face images on the upper LCD 22 (step 140). More specifically, the information processing section 31 acquires the addresses of face image data of the face images from the face image management information Dn1 saved in the saved data storage area Do. Then, the information processing section 31 may read the face images from the addresses in the data storage internal memory 35, the data storage external memory 46, or the like, and may display the face images on the upper LCD 22.
  • Next, the information processing section 31 waits for an operation of the user (step 141). Then, in accordance with an operation of the user, the information processing section 31 determines whether or not a face image is in the state of being selected (step 142). The determination of whether or not a face image is in the state of being selected is made based on, when the list of the face images is displayed on the upper LCD 22 as shown in FIGS. 7 and 8, the state of the operation after the user has pressed the operation buttons 14 or the like, or the state of the operation through the GUI. It should be noted that the face image in the state of being selected is displayed by being surrounded by, for example, the heavy line L1 or the heavy line L2, as shown in FIGS. 7 and 8.
  • Then, when any one of the face images is in the state of being selected, the information processing section 31 searches for face images related to the face image in the state of being selected, using the face image management information Dn1 (see FIG. 12) (step 143).
  • Then, the information processing section 31 performs a process of causing the found face images to react, such as causing the found face images to give looks to the face image in the state of being selected (step 144). The process of causing the found face images to react can be performed by, for example, the following procedure. For example, the following are prepared in advance: a plurality of patterns of eyes, in which the orientation of the eyes is directed to another face image as shown in FIG. 8; and a plurality of patterns of a face, in which the orientation of the face is directed to another face image as shown in FIG. 8. Then, based on the relationships between the positions of the found face images and the position of the face image in the state of being selected, the corresponding patterns of the orientations of eyes and the orientations of faces are selected. Then, the corresponding patterns of the orientations of eyes and the orientations of faces of the face images may be displayed so as to switch the patterns of the orientations of eyes and the orientations of faces of the already displayed face images. That is, images of eyes determined based on the relationships between the positions of the found face images and the position of the face image in the state of being selected may replace the eye portions of the original face images. To change the orientations of the faces, display may be performed by switching the entire face images. Alternatively, for example, patterns of eyes may be prepared in advance, in which the orientation of the eyes is changed at predetermined angles, e.g., in units of 15 degrees over a 360-degree range. Then, based on the positional relationships between the face image in the state of being selected and the found face images, angles may be determined, and the patterns of eyes at the angles closest to the determined angles may be selected.
  • In addition, concerning the orientation of a face, patterns of a face image are prepared in which, on the assumption that the case of being directed in the normal direction of the screen is 0 degrees, the orientation is changed in the left-right directions at angles of, e.g., 30 degrees, 60 degrees, and 90 degrees. Further, patterns are also prepared in which the orientation is changed in the up-down directions at, for example, approximately 30 degrees. Further, patterns may be prepared in which, for a face image whose orientation has been changed in the left-right direction at an angle of 90 degrees, the orientation is further changed in the up-down direction, i.e., diagonally upward (e.g., 15 degrees upward, 30 degrees upward, and 45 degrees upward) and diagonally downward (e.g., 15 degrees downward, 30 degrees downward, and 45 degrees downward). Then, based on the positional relationships between the face image in the state of being selected and the found face images, angles may be determined, and the face patterns at the angles closest to the determined angles may be selected. Further, to emphasize intimacy, an expression such as an animation of a three-dimensional model closing one eye may be displayed. Further, a heart mark and the like may be prepared in advance, and displayed near the face images related to the face image in the state of being selected.
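  • Purely as an illustrative sketch of the reaction process of step 144, the following function selects, from eye patterns prepared in units of 15 degrees, the pattern whose angle is closest to the direction from a found face image toward the face image in the state of being selected. The coordinate representation, the index convention, and the names are assumptions for this example.

```cpp
#include <cmath>

// Position of a face image thumbnail in the list display (hypothetical layout).
struct GridPos { float x; float y; };

// Returns an index 0..23, one per 15-degree step over 360 degrees, selecting
// the prepared eye pattern that most nearly looks from the related (found)
// face image toward the face image in the state of being selected.
int SelectEyePatternIndex(const GridPos& related, const GridPos& selected) {
    float dx = selected.x - related.x;
    float dy = selected.y - related.y;
    // atan2 gives the direction from the related image toward the selected one.
    float degrees = std::atan2(dy, dx) * 180.0f / 3.14159265f;
    if (degrees < 0.0f) degrees += 360.0f;
    // Quantize to the nearest prepared 15-degree pattern.
    int index = static_cast<int>(degrees / 15.0f + 0.5f) % 24;
    return index;
}
```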
  • In the determination in step 142, when a face image is not in the state of being selected, the information processing section 31 performs another process (step 145). Said another process includes, for example, an operation on another GUI provided on the lower LCD 12, and a process on operation buttons 14 other than the operation buttons 14 used for the selection of face images (buttons 14 a, 14 b, 14 c, and the like). Subsequently, the information processing section 31 determines whether or not the process is to be ended (step 146). For example, when having detected that the button 14 c (B button) has been pressed while the screen shown in FIG. 8 is displayed, the information processing section 31 determines that an instruction has been given to “return”, and ends the process of FIG. 17. When the process is not to be ended, the information processing section 31 returns to step 140.
  • FIG. 18 is a flow chart showing an example of a detailed process of the cast determination process (step 16 of FIG. 14). In this process, the information processing section 31 first displays a list of the head shapes of enemy objects EO (step 160). It should be noted that here, the description is given, taking enemy objects EO as an example; however, also when a character object other than the enemy objects EO is generated, a process similar to the following process is performed. The information processing section 31 stores the head shapes of the enemy objects EO in the data storage internal memory 35 in advance, for example, before the shipment of the game apparatus 10, or at the installation or the upgrading of the image processing program. The information processing section 31 reads the head shapes of the enemy objects EO currently stored in the data storage internal memory 35, and displays the head shapes of the enemy objects EO in the arrangement as shown in FIG. 9.
  • Next, the information processing section 31 detects a selection operation of the user through the GUI, the operation buttons 14, or the like, and receives the selection of the head shape of an enemy object EO (step 161). When the selection of the head shape of an enemy object EO has been ended, subsequently, the information processing section 31 displays a list of face images (step 162). Then, the information processing section 31 detects a selection operation of the user through the GUI, the operation buttons 14, or the like, and receives the selection of a face image (step 163). It should be noted that in the example of the process of FIG. 18, the information processing section 31 determines a face image by detecting the selection operation in step 163. Instead of such a process, however, the information processing section 31 may automatically determine a face image. For example, the information processing section 31 may select a face image randomly from among the face images accumulated in the saved data storage area Do. Alternatively, the information processing section 31 may save in advance the history of the game of the user using the game apparatus 10, in the main memory 32, the external memory 45, the data storage external memory 46, the data storage internal memory 35, or the like, and may select a face image in accordance with the properties, the taste, and the like of the user that are estimated from the history of the user. For example, in accordance with the frequencies of the user selecting face images in the past, the information processing section 31 may determine a face image to be selected next.
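  • The following is a minimal sketch of one way of automatically determining a face image in accordance with the frequencies of past selections, as suggested above. The frequency table, the weighting scheme, and the function name are hypothetical; a purely random selection would simply omit the weights.

```cpp
#include <random>
#include <vector>

// Illustrative sketch of automatically determining a face image in place of the
// selection operation of step 163: each saved face image is drawn with a
// probability proportional to how often the user selected it in the past.
int PickFaceImageByHistory(const std::vector<int>& selectionCounts) {
    // Give every image at least weight 1 so that never-selected images can still appear.
    std::vector<int> weights(selectionCounts.size());
    for (size_t i = 0; i < weights.size(); ++i) weights[i] = selectionCounts[i] + 1;

    static std::mt19937 rng{std::random_device{}()};
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return dist(rng);  // index into the list of saved face images
}
```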
  • Then, the information processing section 31 sets the selected face image as a texture of the enemy object EO (step 164). Then, the information processing section 31 generates the enemy object EO by texture-mapping the selected face image onto the facial surface portion of the enemy object EO (step 165). The enemy object generated in step 165 is an example of a second character object. Then, the information processing section 31 displays the generated enemy object EO on the screen of the upper LCD 22 in the form of, for example, the enemy object EO shown in FIG. 10.
  • In addition, the information processing section 31 performs a process of causing related face images to react (step 166). This process is similar to the processes of steps 143 and 144 of FIG. 17. Further, in accordance with an operation of the user on the GUI, the information processing section 31 determines whether or not the generated enemy object EO is to be fixed (step 167). When the enemy object EO is not to be fixed, the information processing section 31 returns to step 162, and receives the selection of a face image. Alternatively, when the enemy object EO is not to be fixed, the information processing section 31 may return to step 160, and may receive the selection of the head shape of an enemy object EO. On the other hand, when the enemy object EO is to be fixed, the information processing section 31 ends the process. The information processing section 31 performs the game processing shown in step 18 of FIG. 14, using the fixed enemy object EO. It should be noted that, although not shown in FIG. 18, a menu of the GUI or the like may be prepared so as to end the process of FIG. 18 without fixing the enemy object EO.
  • FIG. 19A is a flow chart showing an example of a detailed process of the face image management assistance process 1 (step 1A of FIG. 14). In this process, the information processing section 31 reads the attributes of already acquired face images from the face image attribute aggregate table Dn2 (step 1A1). Then, the information processing section 31 searches the read face image attribute aggregate table Dn2 for an unacquired attribute or an attribute including a small number of acquired face images. An “unacquired attribute” means, for example, an attribute whose number of acquired face images is 0 in the table shown in FIG. 13. Further, “an attribute including a small number of acquired face images” means, for example, an attribute whose number of acquired face images falls within a predetermined number of ranks from the bottom when the rows of the table shown in FIG. 13 are sorted using the numbers of acquired face images as sorting keys.
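  • As an illustrative sketch of this search, the following function sorts rows corresponding to the face image attribute aggregate table Dn2 by the number of acquired face images and returns the unacquired attributes together with the attributes within the bottom ranks. The row structure and names are assumptions made for this example.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// One row of a table corresponding to the face image attribute aggregate table Dn2
// (hypothetical representation).
struct AttributeRow {
    std::string gender;      // e.g., "male" / "female"
    std::string ageBracket;  // e.g., "10's"
    int acquiredCount;       // number of acquired face images for this attribute
};

// Returns the rows whose count is 0 (unacquired attributes) or which fall within
// a predetermined number of ranks from the bottom after sorting by the count.
std::vector<AttributeRow> FindUnderrepresentedAttributes(std::vector<AttributeRow> table,
                                                         size_t bottomRanks) {
    // Sort ascending using the numbers of acquired face images as the sorting key.
    std::sort(table.begin(), table.end(),
              [](const AttributeRow& a, const AttributeRow& b) {
                  return a.acquiredCount < b.acquiredCount;
              });
    std::vector<AttributeRow> result;
    for (size_t i = 0; i < table.size(); ++i) {
        if (table[i].acquiredCount == 0 || i < bottomRanks) {
            result.push_back(table[i]);  // unacquired, or within the bottom ranks
        }
    }
    return result;
}
```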
  • Next, the information processing section 31 performs a process of prompting the user to acquire a face image corresponding to an unacquired attribute (step 1A2). For example, the information processing section 31 may display on the lower LCD 12 or the upper LCD 22 a message combining the attributes “male” and “10's” with the phrase “the number of acquired images is 0”, based on the table shown in FIG. 13. Further, for example, the information processing section 31 may display a message combining the attributes “male” and “10's” with the phrase “the number of acquired images is small”. The number of combinations of attributes to be displayed (each combination corresponding to a row in the table shown in FIG. 13), however, is not limited to one, and two or more combinations may be displayed. Then, for example, when having detected that the user has pressed the operation button 14B (A button), the information processing section 31 may end the process of FIG. 19A. Subsequently, the information processing section 31 returns to step 8 of FIG. 14.
  • It should be noted that here, the description is given, taking the face image management assistance process 1 shown in FIG. 19A, as an example of a detailed process performed at the time of the determination of whether the game is to be ended in FIG. 14 (step 1A). The face image management assistance process 1, however, is not limited to the time of the determination of whether the game is to be ended after the execution of the game (step 1A). For example, the information processing section 31 may perform the face image management assistance process 1 in order to prompt the user to acquire a face image during the list display process (step 14), the cast determination process (step 16), the execution of the game (step 18), or the like.
  • FIG. 19B is a flow chart showing an example of a detailed process of the face image management assistance process 2 (step 100 of FIG. 15). In this process, first, with reference to the number of acquired images, the information processing section 31 determines the presence or absence of an acquired image (step 1000). The information processing section 31 may store the number of acquired images as, for example, the number of records in the face image management information Dn1 in the main memory 32, the external memory 45, the data storage external memory 46, the data storage internal memory 35, or the like.
  • Then, when an acquired image is not present (“No” in the determination in step 1000), the information processing section 31 ends the process. On the other hand, when an acquired image is present (“Yes” in the determination in step 1000), the information processing section 31 proceeds to step 1001. Then, the information processing section 31 receives a request to acquire the face image (step 1001). The information processing section 31 recognizes the request to acquire the face image, for example, when having received an acquisition instruction through the L button 14G or the R button 14H in the state where the face image is displayed on the upper LCD 22 through the inner capturing section 24 or the outer capturing section 23.
  • Then, the information processing section 31 estimates the attributes, e.g., the gender and the age, of the face image acquired through the inner capturing section 24 or the outer capturing section 23 and displayed on the upper LCD 22 (step 1002). For example, the gender can be estimated from the size of the skeleton including the cheekbones and the mandible that are included in the face image, and the dimensions of the face. That is, the information processing section 31 calculates the relative dimensions of the contour of the face relative to the distance between the eyes and the distances between the eyes and the mouth (e.g., the width of the face, and the distance between the eyes and the chin). Then, when the relative dimensions are close to statistically obtained male average values, it may be determined that the face image is male. Further, for example, when the relative dimensions are close to statistically obtained female average values, it may be determined that the face image is female.
  • In addition, the information processing section 31 may store, in advance, feature information by gender and by age bracket (e.g., under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over), such as the average positions of parts of faces and the number of wrinkles in the portions of faces. Then, the information processing section 31 may calculate the feature information of the face image acquired, for example, through the outer capturing section 23 and displayed on the upper LCD 22, and may estimate the age bracket closest to the calculated feature information. The above descriptions of the specification of the gender and the age, however, are illustrative, and the determination of the gender and the age is not limited to the above process. In the process of step 1002, it is possible to apply various conventionally proposed gender determination techniques and age specification techniques.
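  • For illustration only, the estimation of step 1002 may be pictured as a nearest-profile comparison of relative face dimensions against statistically obtained average values, as sketched below. The choice of features, the profile structure, and the names are hypothetical and do not represent the actual determination technique of the embodiment.

```cpp
#include <limits>

// Relative face dimensions normalized by the distance between the eyes
// (hypothetical feature set for this sketch).
struct FaceMetrics {
    float faceWidthOverEyeDist;   // width of the face / distance between the eyes
    float eyeToChinOverEyeDist;   // distance between the eyes and the chin / eye distance
};

// One statistically obtained reference profile (placeholder values assumed).
struct ReferenceProfile {
    const char* label;            // e.g., "male, 20's"
    FaceMetrics mean;             // average values for this gender/age bracket
};

// Returns the label of the profile whose averages are closest to the measured metrics.
const char* EstimateAttribute(const FaceMetrics& m,
                              const ReferenceProfile* profiles, int count) {
    const char* best = "unknown";
    float bestDist = std::numeric_limits<float>::max();
    for (int i = 0; i < count; ++i) {
        float dw = m.faceWidthOverEyeDist - profiles[i].mean.faceWidthOverEyeDist;
        float dh = m.eyeToChinOverEyeDist - profiles[i].mean.eyeToChinOverEyeDist;
        float dist = dw * dw + dh * dh;   // squared distance in the feature space
        if (dist < bestDist) { bestDist = dist; best = profiles[i].label; }
    }
    return best;
}
```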
  • Next, the information processing section 31 prompts the user to acquire a face image having an unacquired attribute (step 1003). For example, the information processing section 31 may display on the upper LCD 22 a message prompting the user to acquire a face image having an unacquired attribute. This process is similar to that of FIG. 19A. Then, the information processing section 31 determines whether or not the user has performed an operation of switching acquisition target face images (step 1004). For example, when the features, e.g., the distance between the eyes and the distances between the eyes and the mouth, of the face image included in the image acquired through the inner capturing section 24 or the outer capturing section 23, have changed, the information processing section 31 determines that acquisition target face images have been switched. Further, for example, when an acquisition instruction through the L button 14G or the R button 14H has been simply canceled, and an acquisition instruction has been given again through the L button 14G or the R button 14H, the information processing section 31 may determine that acquisition target face images have been switched. Then, when acquisition target face images have been switched, the information processing section 31 returns to step 1002.
  • On the other hand, when acquisition target face images have not been switched, the information processing section 31 ends the process as it is. Here, “when acquisition target face images have not been switched” is, for example, the case where the user has ended the face image management assistance process 2 through the GUI, the operation button 14C (B button), or the like. Alternatively, for example, when the state where acquisition target face images are not switched has continued for a predetermined time, the information processing section 31 may determine that acquisition target face images have not been switched. In this case, the information processing section 31 proceeds to step 101 of FIG. 15, and performs the face image acquisition process. It should be noted that as has already been described in the process of step 101 of FIG. 15, the information processing section 31 updates the number of acquired face images in the corresponding row of the face image attribute aggregate table Dn2 shown in FIG. 13.
  • In addition, in the determination process of step 1004, “when acquisition target face images have not been switched” is, for example, the case where the amount of change in the distance between the eyes and the amounts of change in the distances between the eyes and the mouth are within tolerances. Alternatively, for example, when an acquisition instruction from the L button 14G or the R button 14H has not been simply canceled, but has continued for a predetermined time, the information processing section 31 may determine that the acquisition instruction has not been canceled.
  • FIG. 19C is a flow chart showing an example of a detailed process of the face image management assistance process 3 (step 122 of FIG. 16). The process of FIG. 19C (steps 1201 through 1204) is similar to steps 1001 through 1004 in the process of FIG. 19B, and therefore is not described.
  • Based on the processes of FIGS. 19A through 19C, the information processing section 31 leads the user to preferentially acquire a face image corresponding to an unacquired attribute. Such a process makes it possible to assist the face image collection process of a user who wishes to acquire face images having attributes that are as balanced as possible.
  • It should be noted that in the present embodiment, an example of the process is shown where age brackets are classified as under 10, 10's, 20's, 30's, 40's, 50's, 60's, or 70 or over. The present invention, however, is not limited to such classification of age brackets. For example, the age brackets may be further classified into smaller categories. Alternatively, age brackets may be roughly classified, such as children, adults, and the elderly.
  • In the present embodiment, when having received an acquisition instruction through the L button 14G or the R button 14H, the information processing section 31 recognizes a request to acquire a face image. Instead of such a process, however, as has already been described in the present embodiment, the information processing section 31 may estimate the attributes, e.g., the gender and the age, of the face image in the course of the process of deriving the distance between the game apparatus 10 and the face, the angle between the optical axis of the capturing section and the face, and the like, which is performed in order to acquire from the capturing section a face image having target dimensions. That is, when, for example, the information processing section 31 acquires a face image in real time or in each frame cycle (e.g., 1/60 seconds) for such a deriving process, the information processing section 31 may specify the attributes of the face image from the acquired face image.
  • <Detailed Example of Game Processing>
  • Here, with reference to FIGS. 20A through 40, a description is given of an example of the game processing of the game where character objects using face images appear. The following processing is performed in, for example, step 18 of FIG. 14, using enemy objects EO including face images accumulated in the saved data storage area Do.
  • First, in the present embodiment, a description is given of an overview of a game that can be played by the player executing the game program with the game apparatus 10. The game according to the present embodiment is a so-called shooting game where the player, as a main character of the game, shoots down enemy characters that appear in a virtual three-dimensional space prepared as a game world. For example, the virtual three-dimensional space forming the game world (a virtual space (also referred to as a “game space”)) is displayed on a display screen of the game apparatus 10 (e.g., the upper LCD 22) from the player's point of view (a so-called first-person point of view). As a matter of course, display may be performed from a third-person point of view. When the player has shot down an enemy character, points are added to the score. In contrast, when an enemy character has collided with the player (specifically, when the enemy character has reached within a certain distance from the position of the virtual camera), points are deducted from the score.
  • In addition, in the game according to the present embodiment, display is performed by combining an image of the real world acquired by the capturing section included in the game apparatus 10 (hereinafter referred to as a “real world image”), with a virtual world image representing the virtual space. Specifically, the virtual space is divided into an area closer to the virtual camera (hereinafter referred to as a “front area”) and an area further from the virtual camera (hereinafter referred to as a “back area”). Then, an image representing a virtual object present in the front area is displayed in front of the real world image, and the virtual object present in the back area is displayed behind the real world image. More specifically, as will be described later, combination is made such that the virtual object present in the front area is given preference over the real world image, and the real world image is given preference over the virtual object present in the back area.
  • The method of combining the real world image with the virtual world image is not limited. For example, the real world image may be rendered with the virtual object by a common virtual camera such that the real world image is present as an object in the same virtual space as the virtual object (more specifically, for example, by being attached as a texture to a virtual object).
  • In addition, in another example, a first rendered image may be obtained by rendering the real world image from a first virtual camera (hereinafter referred to as a “real world drawing camera”), and a second rendered image may be obtained by rendering the virtual object from a second virtual camera (hereinafter referred to as a “virtual world drawing camera”). Then, the first rendered image may be combined with the second rendered image such that the virtual object present in the front area is given preference over the real world image, and the real world image is given preference over the virtual object present in the back area.
  • In the first method, typically, the object to which the real world image is applied as a texture (hereinafter referred to as a “screen object”) may be placed at the boundary between the front area and the back area, and may be drawn together with a virtual object, such as an enemy object, as viewed from the common virtual camera. In this case, typically, the object to which the real world image is attached is an object having a surface which has a certain distance from the virtual camera and whose normal line coincides with the direction of the line of sight of the virtual camera, and the real world image may be attached to this surface (hereinafter referred to as a “boundary surface”) as a texture.
  • In addition, in the second method, the second rendered image is obtained by rendering the virtual object while making a depth determination (determination by Z-buffering) based on the boundary surface between the front area and the back area (hereinafter referred to simply as a “boundary surface”), and the first rendered image is obtained by performing rendering by attaching the real world image as a texture to a surface which has a certain distance from the virtual camera and whose normal line coincides with the direction of the line of sight of the virtual camera. Then, when the second rendered image is combined with the first rendered image such that the second rendered image is given preference over the first rendered image, a combined image is generated, in which the real world image seems to be present on the boundary surface.
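  • The combination step of the second drawing method may be sketched, purely for illustration, as the following per-pixel selection: wherever the rendering of the virtual object produced a visible fragment (i.e., it passed the depth determination against the boundary surface), that pixel is given preference; everywhere else the real world image shows through. The buffer layout and names are assumptions for this example.

```cpp
#include <cstdint>
#include <vector>

// Combines the first rendered image (real world) with the second rendered image
// (virtual objects) so that the second is given preference wherever a virtual
// fragment was actually drawn. All buffers are assumed to share the same
// resolution and a simple linear layout.
void CombineRenderedImages(const std::vector<uint32_t>& realWorldRGBA,   // first rendered image
                           const std::vector<uint32_t>& virtualRGBA,     // second rendered image
                           const std::vector<uint8_t>&  virtualCoverage, // nonzero where a virtual fragment was drawn
                           std::vector<uint32_t>&       outRGBA) {
    outRGBA.resize(realWorldRGBA.size());
    for (size_t i = 0; i < realWorldRGBA.size(); ++i) {
        outRGBA[i] = virtualCoverage[i] ? virtualRGBA[i] : realWorldRGBA[i];
    }
}
```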
  • In either method, the relationships between the distance from, and the angle of view of, the virtual camera and the size of the object of the real world image (the size in the direction of the line of sight) are set such that the real world image includes the range of the field of view of the virtual camera.
  • It should be noted that hereinafter, the first method is referred to as a “first drawing method”, and the second method is referred to as a “second drawing method”.
  • In addition, when predetermined event occurrence conditions in the game have been satisfied, a part of the real world image is opened, and display is performed such that the virtual space in the back area can be viewed through the opening. Further, an enemy character object is present in the front area, and when predetermined conditions have been satisfied, a special enemy character (a so-called “boss character”) appears in the back area. This stage is completed by shooting down the boss character. Several stages are prepared, and the game is completed by completing all the stages. In contrast, when predetermined game over conditions have been satisfied, the game is over.
  • In a typical example of the first drawing method described above, for the opening in the real world image, data indicating the position of the opening may be set on the boundary surface of the screen object. More specifically, the non-transparency of a texture to be applied to the boundary surface (a so-called α-texture) may indicate open or unopen. Further, in the second drawing method, data indicating the position of the opening may be set on the boundary surface.
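  • As a non-limiting sketch of setting the open/unopen state on the boundary surface, the following function writes fully transparent values into a circular region of an α-texture so that the back area becomes visible through the opening. The texture structure, the circular shape of the opening, and the names are assumptions for this example.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical α-texture applied to the boundary surface: one non-transparency
// value per texel, laid out row by row (width * height entries).
struct AlphaTexture {
    int width;
    int height;
    std::vector<uint8_t> alpha;  // 255 = unopen (real world image shown), 0 = open
};

// Marks a circular region of the boundary surface as open, so that the virtual
// space in the back area can be viewed through it.
void PunchOpening(AlphaTexture& tex, int centerX, int centerY, int radius) {
    for (int y = 0; y < tex.height; ++y) {
        for (int x = 0; x < tex.width; ++x) {
            int dx = x - centerX;
            int dy = y - centerY;
            if (dx * dx + dy * dy <= radius * radius) {
                tex.alpha[static_cast<size_t>(y) * tex.width + x] = 0;  // open: back area visible
            }
        }
    }
}
```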
  • In addition, in the present embodiment, the open/unopen state is set in the real world image. Alternatively, other image processing may be performed on the real world image. For example, given image processing can be performed using common technical knowledge of those skilled in the art, such as attaching dirt to the real world image or pixelating the real world image. Also in these examples, data indicating the position where the image processing is performed may be set on the boundary surface.
  • <Game World>
  • As described above, in the game according to the present embodiment, a game screen is displayed that represents the virtual space having such an improved sense of depth that the existence of the virtual space (back area) is felt also behind the real image. It should be noted that the real world image may be a regular image captured by a monocular camera, or may be a stereo image captured by a compound eye camera.
  • In the game according to the present embodiment, an image captured by the outer capturing section 23 is used as the real world image. That is, a real world image in the periphery of the player captured by the outer capturing section 23 (a real-world moving image acquired in real time) is used during the game play. Accordingly, when the user (the player of the game) holding the game apparatus 10 has changed the imaging range of the outer capturing section 23 by changing the orientation of the game apparatus 10 in the left-right direction or the up-down direction during the game play, the real world image displayed on the upper LCD 22 also changes so as to follow the change in the imaging range.
  • Here, the change in the orientation of the game apparatus 10 during the game play is made roughly in accordance with: (1) the player's intention; or (2) the intention (scenario) of the game. When the player has intentionally changed the orientation of the game apparatus 10 during play, the real world image captured by the outer capturing section 23 changes. This makes it possible to intentionally change the real world image displayed on the upper LCD 22.
  • In addition, the angular velocity sensor 40 of the game apparatus 10 detects the change in the orientation of the game apparatus 10, and the orientation of the virtual camera is changed in accordance with the detected change. More specifically, the current orientation of the virtual camera is changed in the direction of the change in the orientation of the outer capturing section 23. Further, the current orientation of the virtual camera is changed by the amount of change (angle) in the orientation of the outer capturing section 23. That is, when the orientation of the game apparatus 10 is changed, the real world image changes, and the displayed range of the virtual space changes. That is, a change in the orientation of the game apparatus 10 changes the real world image in conjunction with the virtual world image. This makes it possible to display a combined image as if the real world is associated with the virtual world. It should be noted that in the present embodiment, the position of the virtual camera is not changed. Alternatively, the position of the virtual camera may be changed by detecting the movement of the game apparatus 10.
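  • For illustration only, changing the orientation of the virtual camera by the detected amount of change in the orientation of the outer capturing section 23 may be sketched as integrating the angular velocity over one frame, as follows. The structure, the yaw/pitch representation, and the frame time are assumptions, not the actual processing of the game apparatus 10.

```cpp
// Hypothetical orientation of the virtual camera (the camera position is unchanged).
struct CameraOrientation {
    float yawDeg = 0.0f;    // rotation about the vertical axis
    float pitchDeg = 0.0f;  // rotation about the horizontal axis
};

// Applies the change in orientation detected by the angular velocity sensor to
// the virtual camera: the angular velocity is integrated over one frame so that
// the camera turns by the same amount (angle) as the outer capturing section.
void UpdateVirtualCamera(CameraOrientation& cam,
                         float gyroYawDegPerSec, float gyroPitchDegPerSec,
                         float frameSeconds /* e.g., 1.0f / 60.0f */) {
    cam.yawDeg   += gyroYawDegPerSec   * frameSeconds;
    cam.pitchDeg += gyroPitchDegPerSec * frameSeconds;
}
```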
  • It should be noted that in the second drawing method, such a process of changing the orientation of the virtual camera is applied to the virtual world drawing camera, but is not applied to the real world drawing camera.
  • In addition, when an object is displayed at a local position, such as the end of the screen (e.g., the right end or the left end) during the game play, the player naturally intends to attempt to capture the object at the center of the screen, and therefore moves the game apparatus 10 (outer capturing section 23). As a result, the real world image displayed on the screen changes. Such a change in the orientation of the game apparatus 10 (a change in the real world image) can be naturally made by the user, by performing programming such that an object displayed in accordance with the scenario of the game is intentionally displayed at the end of the screen.
  • <Details of Virtual Space>
  • (Drawing of Real World Image)
  • The real world image captured by the outer capturing section 23 is combined with the virtual space such that the real world image seems to be present at the boundary position between the front area and the back area of the virtual space. FIG. 20A shows an example of the virtual space according to the present embodiment. FIG. 20B shows the relationship between a screen model and an α-texture according to the present embodiment. In the first drawing method, to display the real world image, a screen object may be formed by, as shown in FIG. 20A, setting a spherical model (the screen model described above) having its center at the position of the virtual camera in the virtual space, and attaching the real world image to the inner surface of the sphere. More specifically, the real world image is attached as a texture to the screen model, in the entire portion of the viewing volume of the virtual camera. The remaining portion of the screen model is set to transparent, and therefore is not viewed on the screen. In this example, the boundary surface is a spherical surface, that is, as shown in FIG. 20A, the area closer to the virtual camera than the surface of the sphere is the front area (corresponding to a “second area” according to the present invention), and the area further from the virtual camera than the surface of the sphere is the back area (corresponding to a “first area” according to the present invention).
  • In the second drawing method, to display the real world image, a planar polygon to which a texture of the real world image is attached is placed in the virtual space. In the virtual space, the position of the planar polygon relative to the real world drawing camera is always fixed. That is, the planar polygon is placed at a certain distance from the real world drawing camera, such that the normal direction of the planar polygon coincides with the direction of the line of sight (optical axis) of the real world drawing camera.
  • In addition, the planar polygon is set to include the range of the field of view of the real world drawing camera. Specifically, the size of the planar polygon and the distance of the planar polygon from the virtual camera are set such that the planar polygon can include the range of the field of view of the virtual camera. The real world image is attached to the entire surface of the planar polygon on its virtual camera side. Thus, when the planar polygon to which the real world image is attached is drawn from the virtual camera, display is performed such that the real world image corresponds to the entire area of an image generated by the virtual camera.
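  • As a rough illustration of this sizing rule, the following sketch (Python, hypothetical names; a sketch only, not the actual implementation) computes the half-extents of a planar polygon that just fills the field of view of the real world drawing camera at a given distance.

    import math

    def screen_polygon_size(distance, fov_y_deg, aspect):
        """Half-width and half-height of a planar polygon that fills the camera's
        field of view at the given distance (hypothetical helper)."""
        half_h = distance * math.tan(math.radians(fov_y_deg) / 2.0)
        half_w = half_h * aspect
        return half_w, half_h

    # Example: a polygon 10 units in front of a camera with a 60-degree vertical
    # field of view and a 400x240 display.
    print(screen_polygon_size(10.0, 60.0, 400 / 240))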
  • It should be noted that as shown in FIG. 21, the boundary surface may be cylindrical. FIG. 21 shows another example of the virtual space according to the present embodiment. In this case, in the virtual space, a virtual cylindrical peripheral surface (boundary surface) is placed, whose center axis is a vertical axis extending through the position of the virtual camera (in the present embodiment, it is assumed that a Y-axis of the virtual space corresponds to the vertical direction, and an X-axis and a Z-axis correspond to the horizontal directions). As described above, however, the cylindrical peripheral surface is not an object to be viewed, but is an object used for an opening process. The outer peripheral surface of the cylinder divides the virtual space into the first space where the virtual camera is placed (corresponding to the “second area” according to the present invention), and the second space existing around the first space (corresponding to the “first area” according to the present invention).
  • (Process of Opening Real World Image)
  • Further, in the game according to the present embodiment, an opening is provided in the real world image so that the player recognizes the existence of the back area behind the real world image. More specifically, the portion of the real world image corresponding to the opening is displayed in a transparent or semi-transparent manner, and is combined with the world behind that portion. With this, in the game, the occurrence of a predetermined event triggers the opening (removal) of a part of the real world image, and an image representing another virtual space existing behind the real world image (the back area) is displayed through the opening.
  • In the present embodiment, the boundary surface is a spherical surface, and the process of displaying the back area by providing an opening in the real world image is achieved in the first drawing method by a texture attached to the inner surface of the spherical screen object described above, as shown in FIGS. 20A and 20B. Hereinafter, this texture is referred to as a “screen α-texture” (the opening determination data described later). In the present embodiment, the screen α-texture is attached to a portion that wraps 360 degrees around the virtual camera at least in a certain direction. More specifically, as shown in FIG. 20B, the screen α-texture is attached to a central portion of the sphere, i.e., a band that wraps 360 degrees around the position of the virtual camera in a direction parallel to the horizontal (XZ) plane and that has a predetermined width in the Y-direction (hereinafter referred to as the “α-texture-applied portion”). This arrangement simplifies the data included in the screen α-texture. Specifically, in the present embodiment, the screen α-texture has a rectangular shape, and attaching it to the portion shown in FIG. 20A causes each texel of the screen α-texture to correspond to a set of coordinates of the α-texture-applied portion of the screen object.
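  • The correspondence between the α-texture-applied portion and the rectangular screen α-texture might be expressed as follows (a sketch in Python under assumed conventions: Y up, the band parameterized by elevation angle, and a hypothetical band half-angle; the embodiment is not limited to this mapping).

    import math

    def alpha_texture_uv(direction, band_half_angle=math.radians(30)):
        """Map a direction from the virtual camera to (u, v) texel coordinates on
        the rectangular screen alpha-texture wrapped 360 degrees around the camera."""
        x, y, z = direction
        # Horizontal angle around the vertical axis -> u in [0, 1).
        u = (math.atan2(z, x) / (2.0 * math.pi)) % 1.0
        # Elevation above the horizontal plane -> v in [0, 1] within the band.
        elevation = math.atan2(y, math.hypot(x, z))
        v = (elevation + band_half_angle) / (2.0 * band_half_angle)
        return u, min(max(v, 0.0), 1.0)

    # A horizontal direction maps to the vertical middle of the band (v = 0.5).
    print(alpha_texture_uv((1.0, 0.0, 0.0)))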
  • As described above, the screen object to which the real world image is attached and on which the α-texture is set is drawn from the virtual camera, and therefore, drawing is performed such that the real world image having an opening is present on the boundary surface (the inner surface of the sphere). In the α-texture, the portion corresponding to the real world image is calculated by drawing from the virtual camera.
  • Also in the second drawing method, data indicating the position of an opening is set on the boundary surface of the virtual space (here, the inner surface of the sphere). Typically, data is set that indicates the presence or absence of an opening at each point of the boundary surface. More specifically, a spherical object similar to the above is placed in the virtual world where a virtual object is present, and a similar α-texture is set on the spherical object. Then, when the real world image is rendered, rendering is performed by applying to the planar polygon described above an α-texture corresponding to the portion drawn by the virtual world drawing camera, the corresponding α-texture included in the α-texture set on the spherical object. Alternatively, after a process is performed of making the opening transparent in the real world image using the α-texture corresponding to this portion, rendering is performed by the real world drawing camera such that the real world image after this process is attached to the planar polygon described above. It should be noted that this spherical object is an object used only to calculate an opening, but is an object not drawn when the virtual world is drawn.
  • It should be noted that in the present embodiment, data indicating an opening is data having information of each point of the boundary surface. Alternatively, the data may be information defining the position of an opening in the boundary surface by a calculation formula.
  • In the second space, a polygon (object) is placed, to which a background image (texture) of the second space included in the field of view of the virtual camera through an opening is to be attached. The background of the second space is occasionally referred to as a “back wall”.
  • In the first space, objects are placed so as to represent enemy characters and various characters representing bullets for shooting down the enemy characters. Also in the second space, predetermined objects (e.g., some of the enemy characters) are placed. The objects placed in the virtual space move in the virtual space in accordance with logic (algorithm) programmed in advance.
  • In addition, some of the enemy characters can move between the first space and the second space through an opening formed in the boundary surface, or can move between the first space and the second space by forming an opening in the boundary surface themselves. A particular event for forming an opening in the game is, for example, an event where an enemy character collides with the boundary surface (a collision event). Alternatively, the event is where in the progression of the game scenario, the boundary surface is destroyed based on predetermined timing, and an enemy character present in the second space enters the first space (an enemy character appearance event). Yet alternatively, an opening may be automatically formed in accordance with the passage of time. Yet alternatively, an opening may be repaired in accordance with a predetermined game operation of the player. For example, the player may reduce (repair) a formed opening by hitting the opening with a bullet.
  • FIG. 22 shows a virtual three-dimensional space (game world) defined in the game program, which is an example of the image processing program according to the embodiment. It should be noted that as described above, in the present embodiment, the boundary surface is spherical; however, in FIG. 22, the boundary surface is shown as cylindrical for convenience. As described above, in the game according to the present embodiment, display is performed on the upper LCD 22 of the game apparatus 10 such that the virtual world image representing the virtual three-dimensional space and the real world image are combined together.
  • In addition, as shown in FIG. 22, the virtual space in the game according to the present embodiment is divided into the first space 1 and the second space 2 by the boundary surface 3, which is formed of the spherical surface having its center at the position of the virtual camera.
  • On the boundary surface 3, a camera image CI, which is a real world image captured by a real camera built into the game apparatus 10 (FIG. 23), is combined with the virtual world image as if the camera image CI is present at a position on the boundary surface 3, by the processes of steps 81 and 82 described later in the first drawing method, or by the processes of steps 83 through 85 described later in the second drawing method.
  • In the present embodiment, the real world image is a planar view image, and the virtual world image is also a planar view image. That is, a planar view image is displayed on the upper LCD 22. The real world image, however, may be a stereoscopically visible image; the present embodiment is not limited by the type of the real world image. It should be noted that in the present embodiment, the camera image CI may be a still image, or may be a real-time real world image (moving image). In the game program according to the present embodiment, the camera image CI is a real-time real world image. Further, the camera image CI, which is a real world image, is not limited by the type of camera that captures it. For example, the camera image CI may be an image obtained by a camera that can be externally connected to the game apparatus 10. Furthermore, in the present embodiment, the camera image CI may be an image acquired from the outer capturing section 23 (compound eye camera) and/or the inner capturing section 24 (monocular camera). In the game program according to the present embodiment, the camera image CI is an image acquired using one of the left outer capturing section 23 a and the right outer capturing section 23 b of the outer capturing section 23 as a monocular camera.
  • As described above, the first space 1 is a space closer when viewed from the virtual camera than the boundary surface 3, and is also a space surrounded by the boundary surface 3. Further, the second space 2 is a space behind the boundary surface 3 as viewed from the virtual camera. Although not shown in FIGS. 20A and 21, a back wall BW surrounding the boundary surface 3 is present. That is, the second space 2 is a space present between the boundary surface 3 and the back wall BW. To the back wall BW, a given image is attached. For example, to the back wall BW, an image representing outer space prepared in advance is attached, and display is performed such that the second space 2, which is outer space, exists behind the first space 1. That is, the first space 1, the boundary surface 3, the second space 2, and the back wall BW are placed in the order from the closer area to the further area, as viewed from the virtual camera.
  • As described above, however, the image processing program according to the present invention is not limited to a game program, and these settings and rules do not limit the image processing program according to the present invention. It should be noted that as shown in FIG. 22, enemy objects EO can move in the virtual three-dimensional space, and can move between the first space 1 and the second space 2 through the boundary surface 3 described above. When an enemy object EO moves between the first space 1 and the second space 2 through the boundary surface 3 by passing through an area captured by the virtual camera, representation is made such that on an image displayed on the upper LCD 22, the enemy object EO moves from the further area to the closer area, or moves from the closer area to the further area, by passing through the real world image.
  • On the screen, display is performed such that the enemy object EO moves between the first space 1 and the second space 2, using an opening (hole) produced in the real world image due to the game scenario or an event. FIGS. 22 and 24 show the state where an enemy object EO moves between the first space 1 and the second space 2 by forming an opening in the boundary surface 3 or passing through an opening already present in the boundary surface 3.
  • It should be noted that in the image processing program according to the present embodiment, objects present in the first space 1 or the second space 2 are of three types: enemy objects EO, a bullet object BO, and a back wall BW. The image processing program according to the present invention, however, is not limited to these types of objects. In the image processing program according to the present embodiment, objects are virtual physical bodies present in the virtual space (the first space 1 and the second space 2). For example, in the image processing program according to the present embodiment, given objects, such as obstacle objects, may be present.
  • <Examples of Forms of Display>
  • FIGS. 23 through 26 show examples of the game screen displayed on the upper LCD 22. Descriptions are given below of examples of the forms of display shown in the respective figures.
  • First, a description is given of an aiming cursor AL, which is displayed commonly in FIGS. 23 through 26. In FIGS. 23 through 26, the aiming cursor AL for a bullet object BO, which is fired in accordance with an attack operation on the game apparatus 10 (e.g., pressing the button 14B (A button)), is displayed commonly on the upper LCD 22. In the game program according to the present embodiment, the aiming cursor AL is set so as to be directed in a predetermined direction in accordance with the program executed by the game apparatus 10.
  • For example, the aiming cursor AL is set so as to be fixed in the direction of the line of sight of the virtual camera, i.e., at the center of the screen of the upper LCD 22. In this case, as described above, in the present embodiment, the direction of the line of sight of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method) is changed in accordance with the imaging direction of the outer capturing section 23. Thus, the player can change the direction of the aiming cursor AL in the virtual space by changing the orientation of the game apparatus 10. Then, the player performs an attack operation by, for example, pressing the button 14B (A button) of the game apparatus 10 with the thumb of the right hand holding the lower housing 11. With this, the player fires the bullet object BO by the attack operation, to thereby vanquish an enemy object EO and repair an opening present in the boundary surface 3, in the game according to the present embodiment.
  • Next, descriptions are given separately of FIGS. 23 through 26.
  • In FIG. 23, an enemy object EO present in the first space 1 and a camera image CI captured by the real camera built into the game apparatus 10 are displayed on the upper LCD 22. The enemy object EO is arbitrarily set.
  • The enemy object EO is, for example, an object obtained by using, as a texture, an image (e.g., a photograph of a person's face) stored in the data storage external memory 46 or the like of the game apparatus 10, and attaching the image to a three-dimensional polygon model of a predetermined shape (a polygon model representing a three-dimensional shape of a human head) by a predetermined method.
  • Further, in the present embodiment, the camera image CI displayed on the upper LCD 22 is, as described above, a real-time real world image captured by the real camera built into the game apparatus 10. Alternatively, for example, the camera image CI may be an image (e.g., a photograph of a landscape) stored in the data storage external memory 46 or the like of the game apparatus 10.
  • In the state where the camera image CI is displayed on the upper LCD 22, the enemy object EO can arbitrarily move. For example, the enemy object EO present in the first space 1 can move to the second space 2. FIG. 24 shows an example of the state where the enemy object EO present in the first space 1 moves from the first space 1 to the second space 2. In the example shown in FIG. 24, the enemy object EO present in the first space 1 moves to the second space 2 by forming an opening in the boundary surface 3. The enemy object EO having moved to the second space 2 is displayed as a shadow (silhouette model) ES at a position in an unopen area in the boundary surface 3, as viewed from the virtual camera. Further, the second space 2 is viewed through the opening in the boundary surface 3. That is, when an opening is present in the boundary surface 3 in the field of view of the virtual camera, a part of an image of the second space 2 is displayed through the opening, on the upper LCD 22. The image of the second space 2 is specifically objects present in the second space 2, such as the enemy object EO and a back wall BW that are present in the second space 2. The shadow ES represents the shadow of the enemy object EO. FIG. 27A shows silhouette models of the shadow of the enemy object EO as viewed from above. Further, FIG. 27B is an example of silhouette models of the shadow of the enemy object EO. As shown in FIGS. 27A and 27B, in the present embodiment, for the enemy object EO, silhouette models are set to correspond to a plurality of orientations. Specifically, the silhouette models are, for example, eight planar polygons shown in FIG. 27A. The silhouette models (eight planar polygons) are placed at the same position as that of the enemy object EO, which is a substance model. Further, the planar polygons have sizes included in the substance model (do not protrude beyond the substance model). Further, to each planar polygon, a texture is attached that is obtained by drawing a shadow image of the enemy object EO as viewed in the normal direction of the surface of the planar polygon. When the enemy object EO is present behind an unopen area in the boundary surface 3, the shadow ES is displayed by drawing the corresponding silhouette model.
  • It should be noted that all eight planar polygons are rendered. When the enemy object EO is present behind an unopen area in the boundary surface 3, the substance model of the enemy object EO is hidden by the boundary surface (screen object) 3 based on a depth determination, and therefore is not drawn. A silhouette model, however, is set so as not to be subjected to a depth determination with the boundary surface (screen object) 3; therefore, even when the enemy object EO (and its silhouette models) is present behind an unopen area in the boundary surface 3, the silhouette model is drawn, and the shadow is displayed as shown in FIGS. 24 and 25. On the other hand, when the enemy object EO is present in front of the boundary surface 3, or is present behind an open area in the boundary surface 3, the substance model of the enemy object EO is drawn, and the silhouette models, which are set so as to be included in the substance model, lie behind its surface. Accordingly, the silhouette models are hidden by the substance model, and the shadow is not displayed.
  • On the upper LCD 22, display is performed such that images are combined together in the following preference order.
  • (1) An image of an object present in the first space 1; (2) in an unopen area in the real world image, a combined image of a shadow image of an object present in the second space 2 and the real world image (e.g., a semi-transparent shadow image is combined with the real world image); and (3) in an open area in the real world image, an image (substance image) of an object present in the second space 2 is preferentially combined, and a back wall image is combined in the back of the image. Depending on the state of the movement of the enemy object EO present in the second space 2, however, there may be a scene where the enemy object EO is present across an open area and an unopen area. That is, there may be a scene where the enemy object EO is present on the edge of an opening, as viewed from the virtual camera. FIG. 25 shows such a state where the enemy object EO present in the second space 2 has moved to the edge of an opening set in the boundary surface 3. As shown in FIG. 25, the enemy object EO present in the second space 2 is displayed on the upper LCD 22 such that: an image of the enemy object EO is displayed as it is in the region of the second space 2 that can be viewed through the opening, as viewed from the virtual camera; and the shadow ES is displayed in the region of the second space 2 that cannot be viewed through the opening, as viewed from the virtual camera.
  • More specifically, as shown in FIG. 28, in each object, data of a non-transparency (alpha value) is set. FIG. 28 shows an example of the non-transparencies (alpha values) set for objects in the present embodiment. For a texture of the substance model of an enemy object, a non-transparency of 1 is set in the entire model. Further, for a texture of each of the silhouette models (planar polygons) of the enemy object, a non-transparency of 1 is set in the entire shadow image. A bullet model is set in a similar manner. Among enemy objects, for a texture of each of a semi-transparent object model and an effect model, for example, a non-transparency of 0.6 is set in the entire model. For a screen object (the spherical screen model shown in FIG. 20A), a non-transparency of 0.2 is set as a material, and a non-transparency of 1 or 0 is set on each point of the α-texture, which is a texture of the screen object. “1” indicates an unopen portion, and “0” indicates an open portion. That is, for the screen object, two types of settings: that of the material and that of the texture, are made as non-transparency values.
  • In addition, a depth determination is valid between each pair of: an enemy object; a bullet object; a semi-transparent enemy object; an effect object; and the screen object. A depth determination is also valid “between the shadow planar polygon and the enemy object”, “between the shadow planar polygon and the bullet object”, “between the shadow planar polygon and the semi-transparent enemy object”, and “between the shadow planar polygon and the effect object”. A depth determination is invalid between the shadow planar polygon and the screen object.
  • When a depth determination is valid, rendering is performed in accordance with a normal perspective projection. A hidden surface is removed in accordance with the depth direction from the virtual camera. When a depth determination is invalid, an object is rendered even if a target object is present in an area closer to the virtual camera than that of the object.
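  • The validity of the depth determination between object types could be tabulated as in the following sketch (Python, hypothetical type names mirroring the pairs listed above; illustration only).

    from itertools import combinations

    OBJECT_TYPES = ["enemy", "bullet", "semi_transparent_enemy", "effect", "screen", "shadow"]

    def depth_test_enabled(a, b):
        """Per the rules above, the depth determination is invalid only between a
        silhouette (shadow) planar polygon and the screen object; every other
        listed pair is depth tested."""
        return {a, b} != {"shadow", "screen"}

    for a, b in combinations(OBJECT_TYPES, 2):
        print(f"{a:>22} vs {b:<22} depth test: {depth_test_enabled(a, b)}")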
  • Then, in the present embodiment, when rendering is performed, a rendering formula can be set on an object-by-object basis. Specifically, the formulas are set as follows.
  • The substance of the enemy object, the bullet object, the semi-transparent enemy object, and the effect object are drawn by the following formula.

  • “color of object×non-transparency of object+color of background×(1−non-transparency of object)”
  • The screen object is drawn by the following formula.

  • “color of object (color of real world image)×non-transparency of texture of object+color of background×(1−non-transparency of texture of object)”
  • The silhouette model of the enemy object is drawn by the following formula.

  • “color of object×(1−non-transparency of material of background)+color of background×non-transparency of material of background”
  • It should be noted that when the enemy object is drawn, the background of the enemy object is the screen object (boundary surface 3), and therefore, in the above formula, “non-transparency of material of background” is “non-transparency of material of screen object (boundary surface 3)”.
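  • The three formulas above might be written out as in the following sketch (Python, hypothetical function names; colors are (r, g, b) triples in the range 0 to 1; illustration only).

    def blend_substance(obj_rgb, obj_alpha, bg_rgb):
        """Enemy substance, bullet, semi-transparent enemy, and effect objects."""
        return tuple(o * obj_alpha + b * (1.0 - obj_alpha) for o, b in zip(obj_rgb, bg_rgb))

    def blend_screen(real_rgb, tex_alpha, bg_rgb):
        """Screen object: the real world image weighted by the per-texel
        non-transparency of the screen alpha-texture (1 = unopen, 0 = open)."""
        return tuple(r * tex_alpha + b * (1.0 - tex_alpha) for r, b in zip(real_rgb, bg_rgb))

    def blend_silhouette(shadow_rgb, bg_material_alpha, bg_rgb):
        """Silhouette (shadow) polygon: weighted by the non-transparency of the
        material of its background, i.e. the screen object's value of 0.2."""
        return tuple(s * (1.0 - bg_material_alpha) + b * bg_material_alpha
                     for s, b in zip(shadow_rgb, bg_rgb))

    # Example: a black shadow drawn over a white real world image texel keeps
    # 20% of the camera image when the screen object's material value is 0.2.
    print(blend_silhouette((0.0, 0.0, 0.0), 0.2, (1.0, 1.0, 1.0)))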
  • Based on the various settings as described above, when the enemy object is present behind an unopen portion in the boundary surface, not the substance but the shadow of the enemy object is displayed. When the enemy object is present in front of the boundary surface, or when the enemy object is present in an opening in the boundary surface, not the shadow but the substance of the enemy object is displayed.
  • In addition, in the game according to the present embodiment, an opening present in the boundary surface 3 can be repaired by hitting it with the bullet object BO. FIG. 26 shows the state where an opening present in the boundary surface 3 is closed by hitting it with the bullet object BO. As shown in FIG. 26, when, for example, the bullet object BO has collided with an unopen area present in the boundary surface 3, data of the unopen state is set for the portion of the boundary surface present within a certain range from the collision point. With this, when an opening is present within that range from the collision point, the opening is closed. It should be noted that in the present embodiment, the bullet object BO having collided with the opening disappears (thus, the bullet object BO has disappeared in FIG. 26). Further, when having collided with an opening in the boundary surface 3, the bullet object BO moves to the second space 2 by passing through the opening.
  • It should be noted that, as described above, on the upper LCD 22, the real-time real world image captured by the real camera built into the game apparatus 10 is displayed so as to seem to be present on the boundary surface 3. A change in the direction of the game apparatus 10 in real space also changes the imaging range captured by the game apparatus 10, and therefore also changes the camera image CI displayed on the upper LCD 22. In this case, the game apparatus 10 changes the position and the direction of the virtual camera (the virtual world drawing camera in the second drawing method) in the virtual space in accordance with the motion of the game apparatus 10 in real space. With this, the enemy object EO displayed as if placed in real space and an opening present in the boundary surface 3 are displayed as if placed at the same positions in real space even when the direction of the game apparatus 10 has changed in real space. For example, it is assumed that the imaging direction of the real camera of the game apparatus 10 is turned left. In this case, the direction of the virtual camera (the virtual world drawing camera in the second drawing method) in the virtual space, where the enemy object EO and the opening present in the boundary surface 3 are placed, also turns left as does that of the real camera. As a result, the display positions of the enemy object EO and the opening present in the boundary surface 3 on the upper LCD 22 move in the direction opposite to the turn in the imaging direction of the real camera (that is, to the right). Thus, even when a change in the direction of the game apparatus 10 also changes the imaging range of the real camera, the enemy object EO and the opening present in the boundary surface 3 are displayed on the upper LCD 22 as if placed in the real space represented by the camera image CI.
  • <<Examples of Operations of Image Processing>>
  • Next, with reference to FIGS. 29 through 31 and FIGS. 32A and 32B, descriptions are given of examples of specific processing operations performed by the image processing program according to the present embodiment executed by the game apparatus 10. FIG. 29 is a flow chart showing an example of the operation of image processing performed by the game apparatus 10 executing the image processing program. FIG. 30 is a subroutine flow chart showing an example of a detailed operation of an enemy-object-related process performed in step 53 of FIG. 29. FIG. 31 is a subroutine flow chart showing an example of a detailed operation of a bullet-object-related process performed in step 54 of FIG. 29. FIGS. 32A and 32B are each a subroutine flow chart showing an example of a detailed operation of a display image updating process (the first drawing method and the second drawing method) performed in step 57 of FIG. 29.
  • <<Example of Image Processing>>
  • With reference to FIG. 29, a description is given of the operation of the information processing section 31. First, when the power (the power button 14F) of the game apparatus 10 has been turned on, the CPU 311 executes a boot program (not shown). This causes the programs stored in the built-in memory, the external memory 45, or the data storage external memory 46, to be loaded into the main memory 32. In accordance with the execution of the loaded programs by the information processing section 31 (the CPU 311), the steps shown in FIG. 29 are performed. It should be noted that in FIGS. 29 through 32A, processes not directly related to the present invention and peripheral processes are not described.
  • Referring to FIG. 29, the information processing section 31 performs the initialization of the image processing (step 51), and proceeds to the subsequent step. For example, the information processing section 31 sets the initial position and the initial direction of the virtual camera for generating a virtual world image (an image of the virtual space) in the virtual camera data Dj, and sets the coordinate axes (e.g., X, Y, and Z axes) of the virtual space where the virtual camera is placed. Subsequently, the information processing section 31 acquires various data from each component of the game apparatus 10 (step 52), and proceeds to the subsequent step 53. For example, the information processing section 31 updates the real camera image data Db using a camera image captured by the currently selected capturing section (the outer capturing section 23 in the present embodiment). For example, the information processing section 31 acquires data indicating that the operation button 14 or the analog stick 15 has been operated, to thereby update the controller data Da1. Further, the information processing section 31 acquires angular velocity data indicating the angular velocities detected by the angular velocity sensor 40, to thereby update the angular velocity data Da2.
  • Next, the information processing section 31 performs an enemy-object-related process (step 53), and proceeds to the subsequent step 54. With reference to FIG. 30, the enemy-object-related process is described below.
  • Referring to FIG. 30, the information processing section 31 determines whether or not conditions for the appearance of an enemy object EO have been satisfied (step 61). For example, the conditions for the appearance of an enemy object EO may be: that the enemy object EO appears at predetermined time intervals; that in accordance with the disappearance of an enemy object EO from the virtual world, a new enemy object EO appears; or that the enemy object EO appears at a random time. It should be noted that the conditions for the appearance of an enemy object EO are, for example, set by the group of various programs Pa stored in the main memory 32.
  • Then, when the conditions for the appearance of an enemy object EO have been satisfied, the information processing section 31 proceeds to the subsequent step 62. On the other hand, when the conditions for the appearance of an enemy object EO have not been satisfied, the information processing section 31 proceeds to the subsequent step 63.
  • In step 62, the information processing section 31 generates and initializes the enemy object data Df corresponding to the enemy object EO that has satisfied the conditions for the appearance, and proceeds to the subsequent step 63. For example, the information processing section 31 acquires the substance data Df1, the silhouette data Df2, the opening shape data Df3, and data of polygons corresponding to the enemy object EO, using the group of various programs Pa stored in the main memory 32. The information processing section 31 generates the enemy object data Df including the above items of data. Further, for example, the information processing section 31 initializes: data indicating the placement direction and the placement position of the polygons corresponding to the enemy object EO in the virtual space; and data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the generated enemy object data Df. The initialization is made by a known method.
  • Next, the information processing section 31 moves the enemy object EO placed in the virtual space (step 63), and proceeds to the subsequent step 64. As an example, the information processing section 31 updates data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, based on the data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the enemy object data Df. At this time, the information processing section 31 updates the data indicating the placement direction of the enemy object EO, the data included in the enemy object data Df, based on the data indicating the moving direction. After the update, the information processing section 31 may update the data indicating the moving velocity and the moving direction of the enemy object EO in the virtual space, the data included in the enemy object data Df. The update of the data indicating the moving velocity and the moving direction allows the enemy object EO to move in the virtual space at a given velocity in a given direction.
  • Next, the information processing section 31 determines whether or not the enemy object EO has reached a certain distance from the position of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method) (step 64). For example, the information processing section 31 compares the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with data indicating the placement position of the virtual camera (the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method), the data included in the virtual camera data Dj. Then, when the two items of data have satisfied predetermined conditions (e.g., the distance between the placement position of the enemy object EO and the placement position of the virtual camera has fallen below a predetermined value), the information processing section 31 determines that the enemy object EO has reached the certain distance from the position of the virtual camera, and when the two items of data have not satisfied the predetermined conditions, the information processing section 31 determines that the enemy object EO has not reached the certain distance from the position of the virtual camera. It should be noted that hereinafter, when the term “virtual camera” is simply used without distinguishing between the first drawing method and the second drawing method, the “virtual camera” refers to the virtual camera in the first drawing method or the virtual world drawing camera in the second drawing method. When it is determined that the enemy object EO has reached the certain distance from the position of the virtual camera, the information processing section 31 proceeds to the subsequent step 65. On the other hand, when it is determined that the enemy object EO has not reached the certain distance from the position of the virtual camera, the information processing section 31 proceeds to step 66.
  • In step 65, the information processing section 31 performs a point deduction process, and proceeds to the subsequent step 66. For example, the information processing section 31 deducts a predetermined value from the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the deduction. It should be noted that in the point deduction process, the information processing section 31 may perform a process of causing the enemy object EO having reached the certain distance from the position of the virtual camera, to disappear from the virtual space (e.g., initializing the enemy object data Df concerning the enemy object EO having reached the certain distance from the position of the virtual camera, such that the enemy object EO is not present in the virtual space). Further, the predetermined value in the point deduction process may be a given value, and for example, may be set by the group of various programs Pa stored in the main memory 32.
  • In step 66, the information processing section 31 determines whether or not the enemy object EO is to pass through the boundary surface 3 (the enemy object EO is to move between the first space 1 and the second space 2). For example, the information processing section 31 compares the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with the data indicating the placement position of the boundary surface 3, the data included in the boundary surface data Dd. Then, when the two items of data have satisfied predetermined conditions, the information processing section 31 determines that the enemy object EO is to pass through the boundary surface 3. When the two items of data have not satisfied the predetermined conditions, the information processing section 31 determines that the enemy object EO is not to pass through the boundary surface 3. It should be noted that the predetermined conditions are, for example, that the coordinates (placement position) of the enemy object EO in the virtual space satisfy conditional equations for the spherical surface of the boundary surface 3. As described above, the data indicating the placement position of the boundary surface 3 in the virtual space indicates the existence range of the boundary surface 3 in the virtual space, and is, for example, conditional equations for the spherical surface (the shape of the boundary surface 3 according to the present embodiment). When the placement position of the enemy object EO satisfies the conditional equations, the enemy object EO is present on the boundary surface 3 in the virtual space. In the present embodiment, for example, in such a case, it is determined that the enemy object EO is to pass through the boundary surface 3.
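  • The determination based on the conditional equations for the spherical surface might look like the following sketch (Python, hypothetical tolerance value; the source does not specify how the equations are evaluated).

    import math

    def on_boundary_surface(position, center, radius, thickness=0.5):
        """Whether a placement position satisfies the conditional equation of the
        spherical boundary surface, within an assumed tolerance."""
        return abs(math.dist(position, center) - radius) <= thickness

    def crossed_boundary(prev_position, position, center, radius):
        """Alternative test (an assumption, not from the source): true when the
        object moved from one side of the sphere to the other between frames."""
        return (math.dist(prev_position, center) - radius) * \
               (math.dist(position, center) - radius) < 0.0

    print(on_boundary_surface((0.0, 0.0, 10.2), (0.0, 0.0, 0.0), 10.0))   # True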
  • When it is determined that the enemy object EO is to pass through the boundary surface 3, the information processing section 31 proceeds to the subsequent step 67. On the other hand, when it is determined that the enemy object EO is not to pass through the boundary surface 3, the information processing section 31 ends the process of this subroutine.
  • In step 67, the information processing section 31 performs a process of updating the opening determination data included in the boundary surface data Dd, and ends the process of the subroutine. This process is a process for registering, in the boundary surface data Dd, information of an opening produced in the boundary surface 3 by the enemy object EO passing through the boundary surface 3. For example, in the first drawing method and the second drawing method, the information processing section 31 multiplies: the alpha values of the opening determination data of an area having its center at a position corresponding to the position where the enemy object EO passes through the boundary surface 3 in the virtual space, the opening determination data included in the boundary surface data Dd; by the alpha values of the opening shape data Df3. The opening shape data Df3 is texture data in which alpha values of “0” are stored and which has its center at the placement position of the enemy object EO. Accordingly, based on the multiplication, the alpha values of the opening determination data of the area where the opening is generated so as to have its center at the placement position of the enemy object EO (the coordinates of the position where the enemy object EO passes through the boundary surface 3) are “0”. That is, the information processing section 31 can update the state of the boundary surface (specifically, the opening determination data) without determining whether or not an opening is already present in the boundary surface 3. It should be noted that it may be determined whether or not an opening is already present at the position of the collision between the enemy object and the boundary surface. Then, when an opening is not present, an effect may be displayed such that a real world image corresponding to the collision position flies as fragments.
  • In addition, in the updating process of the opening determination data, the information processing section 31 may perform a process of staging the generation of the opening (e.g., causing a wall to collapse at the position where the opening is generated). In this case, the information processing section 31 needs to determine whether or not the position where the enemy object EO passes through the boundary surface 3 (the range where the opening is to be generated) has already been open. The information processing section 31 can determine whether or not the range where the opening is to be generated has already been open, by, for example, multiplying: data obtained by inverting the alpha values of the opening shape data Df3 from “0” to “1”; by the alpha values of the opening determination data multiplied as described above. That is, when the entire range where the opening is to be generated has already been open, the alpha values of the opening determination data are “0”. Thus, the multiplication results are “0”. On the other hand, when even a part of the range where the opening is to be generated is not open, there is a part where the alpha values of the opening determination data are not “0”. Thus, the multiplication results are other than “0”.
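  • The multiplication of the opening determination data by the opening shape data, together with the already-open check described above, might be sketched as follows (Python, hypothetical 2D-list layout with alpha values of 0 and 1; illustration only).

    def punch_opening(opening_alpha, shape_alpha, cx, cy):
        """Multiply the opening determination data by the opening shape data
        centred at texel (cx, cy); shape texels of 0 open the boundary surface.
        Returns True when the whole target range was already open."""
        h, w = len(shape_alpha), len(shape_alpha[0])
        already_open = True
        for j in range(h):
            for i in range(w):
                y, x = cy - h // 2 + j, cx - w // 2 + i
                if 0 <= y < len(opening_alpha) and 0 <= x < len(opening_alpha[0]):
                    # The already-open check multiplies the *inverted* shape alphas
                    # by the current opening data; any non-zero product means some
                    # part of the target range was still unopen.
                    if (1 - shape_alpha[j][i]) * opening_alpha[y][x] != 0:
                        already_open = False
                    opening_alpha[y][x] *= shape_alpha[j][i]
        return already_open

    # Example: a 3x3 opening shape stamped into a 5x5 fully unopen boundary texture.
    boundary = [[1] * 5 for _ in range(5)]
    hole = [[0] * 3 for _ in range(3)]
    print(punch_opening(boundary, hole, 2, 2))   # False: the range was not yet open
    print(punch_opening(boundary, hole, 2, 2))   # True: it is already open now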
  • It should be noted that the opening shape data Df3 of the enemy object EO is texture data in which alpha values of “0” are stored so as to correspond to the shape of the enemy object EO. The information processing section 31 may convert the alpha values of the texture data into “1”, based on a predetermined event. When the above process is performed after the conversion, the alpha values of the opening shape data Df3 are “1”. Thus, the alpha values of the opening determination data are not changed. In this case, the enemy object EO passes through the boundary surface 3 without forming an opening. That is, this makes it possible to stage the enemy object EO as if it slips through the boundary surface 3 (see FIG. 22). It should be noted that the predetermined event, for example, may be time intervals defined by random numbers or predetermined intervals, or may be the satisfaction of predetermined conditions in the game. These events may be, for example, set by the group of various programs Pa stored in the main memory 32.
  • Referring back to FIG. 29, after the enemy-object-related process in step 53, the information processing section 31 performs a bullet-object-related process (step 54), and proceeds to the subsequent step 55. With reference to FIG. 31, the bullet-object-related process is described below.
  • Referring to FIG. 31, the information processing section 31 moves a bullet object BO in the virtual space in accordance with a moving velocity vector that is set (step 71), and proceeds to the subsequent step 72. For example, the information processing section 31 updates data indicating the placement direction and the placement position of the bullet object BO, based on data indicating the moving velocity vector, the data included in the bullet object data Dg. At this time, the information processing section 31 may update the data indicating the moving velocity vector, by a known method. Further, for example, depending on the type of the bullet object BO, the information processing section 31 may change the method of updating the data indicating the moving velocity vector. For example, when the bullet object BO is a ball, the information processing section 31 may update the data indicating the moving velocity vector, taking into account the effect of a gravity in the vertical direction in the virtual space.
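  • One frame of this movement might be sketched as follows (Python, hypothetical gravity constant and frame time; illustration only).

    GRAVITY = (0.0, -9.8, 0.0)   # assumed value; the Y-axis is the vertical direction

    def move_bullet(position, velocity, dt, affected_by_gravity=False):
        """Update the placement position of a bullet object from its moving
        velocity vector, optionally bending the vector under gravity
        (e.g. when the bullet object is a ball)."""
        if affected_by_gravity:
            velocity = tuple(v + g * dt for v, g in zip(velocity, GRAVITY))
        position = tuple(p + v * dt for p, v in zip(position, velocity))
        return position, velocity

    print(move_bullet((0.0, 0.0, 0.0), (0.0, 0.0, -30.0), 1 / 60))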
  • Next, the information processing section 31 determines whether or not the user of the game apparatus 10 has performed a firing operation (step 72). For example, with reference to the controller data Da1, the information processing section 31 determines whether or not the user has performed a predetermined firing operation (e.g., pressing the button 14B (A button)). When the firing operation has been performed, the information processing section 31 proceeds to the subsequent step 73. On the other hand, when the firing operation has not been performed, the information processing section 31 proceeds to the subsequent step 74.
  • In step 73, in accordance with the firing operation, the information processing section 31 places the bullet object BO at the position of the virtual camera in the virtual space, sets the moving velocity vector of the bullet object BO, and proceeds to the subsequent step 74. For example, the information processing section 31 generates the bullet object data Dg corresponding to the firing operation. Then, for example, the information processing section 31 stores the data indicating the placement position and the placement direction (the direction of the line of sight) of the virtual camera, the data included in the virtual camera data Dj, in the data indicating the placement position and the placement direction of the bullet object BO, the data included in the generated bullet object data Dg. Further, for example, the information processing section 31 stores a given value in the data indicating the moving velocity vector, the data included in the generated bullet object data Dg. The value to be stored in the data indicating the moving velocity vector may be set by the group of various programs Pa stored in the main memory 32.
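  • The placement of a new bullet object in accordance with a firing operation might be sketched as follows (Python, hypothetical speed value and a Y-up yaw/pitch convention; illustration only).

    import math

    def fire_bullet(camera_position, camera_yaw, camera_pitch, speed=30.0):
        """Place a bullet object at the virtual camera and give it a moving
        velocity vector along the camera's direction of the line of sight."""
        direction = (math.cos(camera_pitch) * math.sin(camera_yaw),
                     math.sin(camera_pitch),
                     -math.cos(camera_pitch) * math.cos(camera_yaw))
        velocity = tuple(speed * d for d in direction)
        return {"position": tuple(camera_position), "velocity": velocity}

    # A bullet fired straight ahead travels along the negative Z-axis.
    print(fire_bullet((0.0, 0.0, 0.0), 0.0, 0.0))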
  • In step 74, the information processing section 31 determines whether or not the enemy object EO and the bullet object BO have made contact with each other in the virtual space. For example, by comparing the data indicating the placement position of the enemy object EO, the data included in the enemy object data Df, with the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, the information processing section 31 determines whether or not the enemy object EO and the bullet object BO have made contact with each other in the virtual space. For example, when the data indicating the placement position of the enemy object EO and the data indicating the placement position of the bullet object BO have satisfied predetermined conditions, the information processing section 31 determines that the enemy object EO and the bullet object BO have made contact with each other. If not, the information processing section 31 determines that the enemy object EO and the bullet object BO have not made contact with each other. It should be noted that the predetermined conditions are, for example, that the distance between the placement position of the enemy object EO and the placement position of the bullet object BO falls below a predetermined value. The predetermined value may be, for example, a value based on the size of the enemy object EO.
  • When it is determined that the enemy object EO and the bullet object BO have made contact with each other, the information processing section 31 proceeds to the subsequent step 75. On the other hand, when it is determined that the enemy object EO and the bullet object BO have not made contact with each other, the information processing section 31 proceeds to the subsequent step 76.
  • In step 75, the information processing section 31 performs a point addition process, and proceeds to the subsequent step 76. For example, in the point addition process, the information processing section 31 adds predetermined points to the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the addition. Further, in the point addition process, the information processing section 31 performs a process of causing both objects having made contact with each other based on the determination in step 74 described above (i.e., the enemy object EO and the bullet object BO), to disappear from the virtual space (e.g., initializing the enemy object data Df concerning the enemy object EO having made contact with the bullet object BO, and the bullet object data Dg concerning the bullet object BO having made contact with the enemy object EO, such that the enemy object EO and the bullet object BO are not present in the virtual space). It should be noted that the predetermined points in the point addition process may be a given value, and may be, for example, set by the group of various programs Pa stored in the main memory 32.
  • In step 76, the information processing section 31 determines whether or not the bullet object BO has made contact with an unopen area in the boundary surface 3. For example, using the placement position of the bullet object BO included in the bullet object data Dg and the opening determination data, the information processing section 31 determines whether or not the bullet object BO has made contact with an unopen area in the boundary surface 3.
  • For example, the information processing section 31 determines whether or not the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, satisfies conditional equations for the spherical surface of the boundary surface 3, as in the process of the enemy object EO. Then, when the data indicating the placement position of the bullet object BO does not satisfy the conditional equations for the spherical surface, the information processing section 31 determines that the bullet object BO has not made contact with the boundary surface 3. On the other hand, when the data indicating the placement position of the bullet object BO satisfies the conditional equations for the spherical surface of the boundary surface 3, the bullet object BO is present on the boundary surface 3 in the virtual space. At this time, the information processing section 31, for example, acquires the alpha values of the opening determination data of a predetermined area having its center at a position corresponding to the position where the bullet object BO is present on the boundary surface 3. The predetermined area is a predetermined area having its center at the contact point of the bullet object BO and the boundary surface 3. Then, when the alpha values of the opening determination data corresponding to at least a part of the predetermined area are alpha values of “1”, which correspond to an unopen area, the information processing section 31 determines that the bullet object BO has made contact with an unopen area in the boundary surface 3.
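  • The contact determination with an unopen area might be sketched as follows (Python; the mapping from a 3D contact point to texel coordinates and the tolerance are hypothetical helpers, not taken from the source).

    import math

    def hits_unopen_area(bullet_pos, center, radius, opening_alpha, to_texel, region=1):
        """True when the bullet position satisfies the sphere's conditional equation
        and at least one opening-determination texel around the contact point is 1
        (unopen)."""
        if abs(math.dist(bullet_pos, center) - radius) > 0.5:   # assumed tolerance
            return False
        cx, cy = to_texel(bullet_pos)
        for j in range(cy - region, cy + region + 1):
            for i in range(cx - region, cx + region + 1):
                if 0 <= j < len(opening_alpha) and 0 <= i < len(opening_alpha[0]):
                    if opening_alpha[j][i] == 1:   # 1 = unopen
                        return True
        return False

    texture = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
    print(hits_unopen_area((0.0, 0.0, 10.0), (0.0, 0.0, 0.0), 10.0,
                           texture, to_texel=lambda p: (1, 1)))   # True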
  • When it is determined that the bullet object BO has made contact with an unopen area in the boundary surface 3, the information processing section 31 proceeds to the subsequent step 77. On the other hand, when it is determined that the bullet object BO has not made contact with an unopen area in the boundary surface 3, the information processing section 31 proceeds to the subsequent step 78.
  • In step 77, the information processing section 31 performs a process of updating the opening determination data, and proceeds to the subsequent step 78. For example, in the updating process, the information processing section 31 updates, in the boundary surface 3, the alpha values of the opening determination data of the predetermined area having its center at the position corresponding to the placement position of the bullet object BO that has made contact with the unopen area in the boundary surface 3 based on the determination, to alpha values of “1”, which correspond to an unopen area. When the bullet object BO has made contact with the unopen area by this updating process, all the alpha values of the opening determination data in a predetermined area having its center at the contact point are updated to “1”. Accordingly, when there is a part where the alpha values of the opening determination data are set to “0” in the predetermined area having its center at the contact point, the alpha values of the opening determination data of this part are also updated to “1”. That is, when the bullet object BO has made contact with the edge of an opening provided in the boundary surface 3, the opening included in a predetermined area having its center at the position of the contact is repaired to the state of being unopen. Further, in the updating process, the information processing section 31 performs a process of causing the bullet object BO having made contact based on the determination in step 76, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO having made contact with the unopen area in the boundary surface 3, such that the bullet object BO is not present in the virtual space). It should be noted that the predetermined area used in the updating process may be a given area, and may be, for example, set by the group of various programs Pa stored in the main memory 32.
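  • The repair of an opening around the contact point might be sketched as follows (Python, hypothetical repair radius and a circular repair area; the source specifies only a predetermined area).

    def repair_opening(opening_alpha, cx, cy, repair_radius=2):
        """Set the opening determination data back to 1 (unopen) in an area
        centred on the bullet's contact point, closing any opening it overlaps."""
        for j in range(cy - repair_radius, cy + repair_radius + 1):
            for i in range(cx - repair_radius, cx + repair_radius + 1):
                if 0 <= j < len(opening_alpha) and 0 <= i < len(opening_alpha[0]):
                    if (i - cx) ** 2 + (j - cy) ** 2 <= repair_radius ** 2:
                        opening_alpha[j][i] = 1

    # Example: a bullet hitting texel (2, 2) of a 5x5 fully open area repairs a disc around it.
    area = [[0] * 5 for _ in range(5)]
    repair_opening(area, 2, 2)
    print(area)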
  • In step 78, the information processing section 31 determines whether or not the bullet object BO has reached a predetermined position in the virtual space. The predetermined position may be, for example, the position where a back wall BW is present in the virtual space. In this case, for example, the information processing section 31 determines whether or not the data indicating the placement position of the bullet object BO, the data included in the bullet object data Dg, indicates that the bullet object BO has collided with the back wall BW.
  • Then, when the bullet object BO has reached the predetermined position, the information processing section 31 proceeds to the subsequent step 77. On the other hand, when the bullet object BO has not reached the predetermined position, the information processing section 31 ends the process of this subroutine.
  • In step 77, the information processing section 31 performs a process of causing the bullet object BO having reached the predetermined position based on the determination in step 78 described above, to disappear from the virtual space, and ends the process of the subroutine. For example, the information processing section 31 performs a process of causing the bullet object BO having reached the predetermined position based on the determination in step 78 described above, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO such that the bullet object BO is not present in the virtual space).
  • Referring back to FIG. 29, after the bullet-object-related process in step 54 described above, the information processing section 31 calculates the motion of the game apparatus 10 (step 55), and proceeds to the subsequent step 56. As an example, the information processing section 31 calculates the motion of the game apparatus 10 (e.g., a change in the imaging direction of the real camera provided in the game apparatus 10) using the angular velocities indicated by the angular velocity data Da2, to thereby update the motion data Di using the calculated motion. Specifically, when the user has changed in real space the imaging direction of the real camera provided in the game apparatus 10, the orientation of the entire game apparatus 10 also changes, and therefore, angular velocities corresponding to the change are generated in the game apparatus 10. Then, the angular velocity sensor 40 detects the angular velocities generated in the game apparatus 10, whereby data indicating the angular velocities is stored in the angular velocity data Da2. Thus, using the angular velocities indicated by the angular velocity data Da2, the information processing section 31 can calculate the direction and the amount (angle) of the change in the imaging direction of the real camera provided in the game apparatus 10, as the motion of the game apparatus 10.
  • Next, in accordance with the motion of the game apparatus 10, the information processing section 31 changes the position of the virtual camera in the virtual space (step 56), and proceeds to the subsequent step 57. For example, using the motion data Di, the information processing section 31 imparts the same changes as those in the imaging direction of the real camera of the game apparatus 10 in real space, to the virtual camera in the virtual space, to thereby update the virtual camera data Dj using the position and the direction of the virtual camera after the changes. As an example, if the imaging direction of the real camera of the game apparatus 10 in real space has turned left by A°, the direction of the virtual camera in the virtual space also turns left by A°. With this, the enemy object EO and the bullet object BO displayed as if placed in real space are displayed as if placed at the same positions in real space even when the direction and the position of the game apparatus 10 have changed in real space.
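  • The following is a minimal sketch of the calculations in steps 55 and 56, assuming for simplicity that only a rotation about the Y axis (a left/right turn of the imaging direction) is tracked; the names VirtualCamera and updateCameraFromGyro are illustrative, and a full implementation would integrate all three angular velocity components into an orientation.

    // Hypothetical minimal camera state: a yaw angle (radians) about the Y axis.
    struct VirtualCamera {
        float yaw = 0.0f;                          // current viewing direction
    };

    // Steps 55 and 56 in outline: integrate the angular velocity reported by the
    // angular velocity sensor over one frame to obtain the change in the imaging
    // direction of the real camera, then impart the identical change to the
    // virtual camera so that virtual objects appear fixed in real space.
    void updateCameraFromGyro(VirtualCamera& cam,
                              float angularVelocityY,   // rad/s about the Y axis
                              float deltaTime) {        // frame time in seconds
        const float deltaYaw = angularVelocityY * deltaTime;  // motion of the apparatus
        cam.yaw += deltaYaw;                                   // same turn for the virtual camera
    }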
  • Next, the information processing section 31 performs a process of updating the display image (step 57), and proceeds to the subsequent step 58. With reference to FIGS. 32A and 32B, the display image updating process is described below. FIG. 32A shows the display image updating process in the first drawing method, and FIG. 32B shows the display image updating process in the second drawing method.
  • First, a description is given of the display image updating process in the first drawing method.
  • Referring to FIG. 32A, the information processing section 31 performs a process of attaching the real camera image acquired in step 52 to the screen object (boundary surface 3) included in the viewing volume of the virtual camera (step 81), and proceeds to the subsequent step 82. For example, the information processing section 31 updates the texture data of the real camera image included in the real world image data Dc, using the real camera image data Db updated in step 52. Then, the information processing section 31 obtains the point where the direction of the line of sight of the virtual camera overlaps the boundary surface 3, using the data indicating the placement direction and the placement position of the virtual camera in the virtual space, the data included in the virtual camera data Dj. The information processing section 31 attaches the texture data of the real camera image included in the real world image data Dc, such that the obtained point is the center, to thereby update the boundary surface data Dd. At this time, the information processing section 31 acquires the opening determination data set for the area to which the texture data is attached, such that the opening determination data corresponds to the area corresponding to all the pixels of the texture data. Then, the information processing section 31 applies to the texture data the alpha values (“0” or “0.2”) set in the acquired opening determination data. Specifically, the information processing section 31 multiplies: color information of all the pixels of the texture data of the real camera image to be attached; by the alpha values at the corresponding positions of the opening determination data. By this process, an opening is represented in the real world image as described above. It should be noted that in the multiplication, an alpha value of “0.2” (an unopen area) stored in the opening determination data is handled as an alpha value of “1” set as the material described above. Further, in the present embodiment, the texture data of the real camera image to be attached to the boundary surface 3 is image data of an area that is wider than the field of view of a virtual camera C0.
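  • A minimal sketch of the per-pixel multiplication performed in step 81 is shown below. The pixel layout, the names Pixel and applyOpeningAlpha, and the writing of the factor into the texture's own alpha channel are assumptions of this sketch; what it illustrates is only the rule that a stored value of “0.2” (unopen) is handled as “1”, so that only open areas (stored value “0”) are blanked out of the real world image.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical RGBA pixel of the texture of the real camera image.
    struct Pixel { std::uint8_t r, g, b, a; };

    // Multiply the color of every pixel of the real camera texture by the alpha
    // value stored at the corresponding position of the opening determination
    // data; a stored value of 0.2 (unopen) is handled as 1.
    void applyOpeningAlpha(std::vector<Pixel>& texture,
                           const std::vector<float>& openingAlpha) {
        for (std::size_t i = 0; i < texture.size() && i < openingAlpha.size(); ++i) {
            const float factor = (openingAlpha[i] == 0.0f) ? 0.0f : 1.0f;  // 0.2 -> 1
            texture[i].r = static_cast<std::uint8_t>(texture[i].r * factor);
            texture[i].g = static_cast<std::uint8_t>(texture[i].g * factor);
            texture[i].b = static_cast<std::uint8_t>(texture[i].b * factor);
            // Assumption: also store the factor in the texture's alpha channel so
            // that an open area is treated as see-through when the screen object
            // is rendered.
            texture[i].a = static_cast<std::uint8_t>(255.0f * factor);
        }
    }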
  • Next, the information processing section 31 generates a display image by a process of rendering the virtual space (step 82), and ends the process of this subroutine. For example, the information processing section 31 generates an image obtained by rendering the virtual space where the boundary surface 3 (screen object), the enemy object EO, the bullet object BO, and the back wall BW are placed, to thereby update the rendered image data of the virtual space using the generated image, the rendered image data included in the rendered image data Dk. Further, the information processing section 31 updates the display image data Dl using the rendered image data of the virtual space. With reference to FIGS. 33 and 34, an example of the rendering process is described below.
  • FIG. 33 shows an example of the placement of the enemy object EO, the bullet object BO, the boundary surface 3 (the screen object in which the opening determination data is set), and the back wall BW in the virtual space. Further, FIG. 34 shows the positional relationships between the objects on the assumption that the virtual camera C0 in FIG. 33 is directed in the direction of (X, Y, Z)=(0, 0, 1) from the origin. As described above, the enemy object EO, the bullet object BO, the boundary surface 3, and the back wall BW are each placed in accordance with the data indicating the placement position included in the corresponding one of the enemy object data Df, the bullet object data Dg, the boundary surface data Dd, and the back wall image data De. Further, in the virtual space, the virtual camera C0 for rendering the virtual space is placed in accordance with the data indicating the placement direction and the placement position, the data included in the virtual camera data Dj.
  • As shown in FIG. 33 (or FIG. 34), the information processing section 31 renders with a perspective projection from the virtual camera C0 the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, so as to include the boundary surface 3. At this time, the information processing section 31 takes into account the information about the priority of drawing. In a normal perspective projection, an object present in the second space 2 is not drawn due to the presence of the boundary surface 3. In the game according to the present embodiment, an opening is provided in the boundary surface 3 (real world image), so that a part of the second space 2 can be viewed through the opening. Further, the shadow of the object present in the second space 2 is drawn in combination with the real world image. This makes it possible to give the user a feeling as if the virtual world further exists beyond the real world image. Specifically, the information processing section 31 performs the rendering process using the information about the priority of drawing. It should be noted that in the image processing program according to the present embodiment, alpha values are used as an example of the priority of drawing.
  • In the perspective projection described above, the object present in the second space 2 (the enemy object EO or the back wall BW in the present embodiment) is present behind the boundary surface 3. Here, the boundary surface 3 is the screen object to which the texture data of the real camera image is applied in the direction of the field of view (the range of the field of view) of the virtual camera C0 in step 81 described above. Further, as described above, to the texture data of the real camera image, the opening determination data corresponding to each position is applied. Accordingly, in the range of the field of view of the virtual camera C0, the real world image to which the opening determination data is applied is present.
  • It should be noted that in the present embodiment, for example, in an area having the opening determination data in which alpha values of “0” are stored (an open area), the information processing section 31 draws (renders) images of a virtual object and the back wall BW that are present in the second space 2, in an area that can be viewed through the open area. Further, in an area having the opening determination data in which alpha values of “0.2”, which correspond to an unopen area, are stored (an area handled as an area where alpha values of “1” are stored as an unopen area), the information processing section 31 does not draw the virtual object and the back wall BW that are present in the second space 2. That is, in the image to be displayed, the real world image attached in step 81 described above is drawn in the portion corresponding to this area.
  • Therefore, in an area having the opening determination data in which “0” is stored as viewed from the virtual camera C0, rendering is performed such that image data included in the substance data Df1 or the back wall image data De is drawn. Then, on the upper LCD 22, images of the virtual object and the back wall BW are displayed in the portion corresponding to this area.
  • In addition, in an area having the opening determination data in which alpha values of “0.2”, which indicate an unopen area, are stored as viewed from the virtual camera C0 (an area handled as an area where alpha values of “1” are stored as an unopen area), the virtual object and the back wall BW that are present in the second space 2 are not drawn. That is, in the image to be displayed on the upper LCD 22, the real world image is drawn in the portion corresponding to this area. For the shadow ES (silhouette model) of the enemy object EO present in the second space 2 described above, however, a depth determination is set to invalid between the boundary surface 3 and the shadow ES. Accordingly, alpha values of “1” of the silhouette model are greater than alpha values of “0.2” of the boundary surface 3, and therefore, the shadow ES is drawn in an area where alpha values of “1”, which indicate an unopen area, are stored (an area having the opening determination data in which alpha values of “0.2” are stored). With this, an image of the shadow ES is drawn on the real world image.
  • In addition, when the enemy object EO is present in the first space 1, the silhouette model of the shadow ES of the enemy object EO is sized and placed so as to be included in the substance model, and a depth determination is set to valid between the substance model of the enemy object EO and the silhouette model of the shadow ES. Consequently, the silhouette model is hidden by the substance model, and therefore is not drawn.
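  • The drawing rules described above can be summarized, per fragment, as a comparison against the opening determination value; the sketch below is only a schematic restatement of those rules (the enum and function names are illustrative), since in practice the decision is left to the GPU's alpha comparison and depth test settings.

    // Schematic restatement of the drawing rules for the first drawing method.
    enum class FragmentKind { SecondSpaceObject, Silhouette };

    // boundaryAlpha is the opening determination value at the fragment position:
    // 0 = open, 0.2 = unopen (handled as 1 for ordinary objects).
    bool shouldDraw(FragmentKind kind, float boundaryAlpha) {
        switch (kind) {
        case FragmentKind::SecondSpaceObject:
            // The substance of the enemy object EO and the back wall BW are
            // visible only through an opening in the boundary surface 3.
            return boundaryAlpha == 0.0f;
        case FragmentKind::Silhouette:
            // The shadow ES ignores the depth of the boundary surface 3, and its
            // alpha of 1 exceeds the boundary's 0.2, so it is drawn over the real
            // world image in unopen areas (it is still hidden by the substance
            // model when the enemy object EO is in the first space 1).
            return boundaryAlpha > 0.0f;
        }
        return false;
    }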
  • It should be noted that in the present embodiment, as shown in FIG. 20A, the shape of the boundary surface 3 is a central portion of a spherical surface, and therefore, the opening determination data may not be present depending on the direction of the line of sight of the virtual camera C0. In this case, the above process is performed on the assumption that the opening determination data is present in which alpha values of “0.2” are stored in a simulated manner. That is, an area where the opening determination data is not present is handled as an area where alpha values of “1”, which indicate an unopen area, are stored.
  • In addition, the silhouette data Df2 included in the enemy object data Df corresponding to the enemy object EO according to the present embodiment is set such that the normal directions of a plurality of planar polygons correspond to radiation directions as viewed from the enemy object EO, and to each planar polygon, a texture of the silhouette image of the enemy object EO as viewed from the corresponding direction is applied. Accordingly, in the image processing program according to the present embodiment, the shadow of the enemy object EO in the virtual space image is represented as an image on which the orientation of the enemy object EO in the second space 2 is reflected.
  • In addition, the information processing section 31 performs the rendering process such that the image data included in the aiming cursor image data Dm is preferentially drawn at the center of the field of view of the virtual camera C0 (the center of the image to be rendered).
  • By the above process, the information processing section 31 renders with a perspective projection the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, and generates a virtual world image as viewed from the virtual camera C0 (an image including the aiming cursor AL), to thereby update the rendered image data of the virtual space (step 82). Then, the information processing section 31 updates the display image data Dl, using the updated rendered image data of the virtual space.
  • Next, a description is given of the display image updating process in the second drawing method.
  • In FIG. 32B, the information processing section 31 performs a process of rendering the real camera image acquired in step 52 described above (step 83), and proceeds to the subsequent step 84. For example, the information processing section 31 updates the texture data of the real camera image included in the real world image data Dc using the real camera image data Db updated in step 52 described above. The information processing section 31 generates an image obtained by rendering the real camera image using the updated real world image data Dc, to thereby update the rendered image data of the real camera image using the generated image, the rendered image data included in the rendered image data Dk. With reference to FIGS. 35 and 36, a description is given below of an example of the rendering process of the real camera image.
  • In the present embodiment, as shown in FIG. 35, the information processing section 31 sets, as a texture, a real camera image obtained from the real camera of the game apparatus 10, and generates a planar polygon on which the texture is mapped. Then, the information processing section 31 generates, as a real world image, an image obtained by rendering the planar polygon with a parallel projection from a real world image drawing camera C1. Here, a description is given of an example of the method of generating a real world image in the case where the entire real camera image obtained from the real camera of the game apparatus 10 is displayed on the entire display screen of the upper LCD 22. It should be noted that in the present embodiment, the combined image according to the present embodiment (the combined image of a real world image and a virtual world image) is displayed on the entire display screen of the upper LCD 22. Alternatively, the combined image may be displayed in a part of the display screen of the upper LCD 22. In this case, the entire real camera image is displayed in the entire combined image.
  • First, a planar polygon is considered on which a texture is mapped at a density of i pixels per 1 unit of the coordinate system of the virtual space where the planar polygon is placed. In this case, a texture of i pixels×i pixels is mapped onto an area of 1 unit×1 unit of the coordinate system. Here, it is assumed that the display screen of the upper LCD 22 has horizontal W dots×vertical H dots, and that the entire texture of the real camera image corresponds to the entire display screen having W dots×H dots. That is, it is assumed that the size of the texture data of the camera image is horizontal W pixels×vertical H pixels.
  • In this case, the planar polygon only needs to be placed such that 1 dot×1 dot on the display screen corresponds to a texture of 1 pixel×1 pixel in the real camera image, and the above coordinate system only needs to be defined as shown in FIG. 36. That is, an XY coordinate system of the virtual space where the planar polygon is placed is set such that the width of the planar polygon, on the entire main surface of which the texture of the camera image is mapped, corresponds to W/i units of the coordinate system, and the height of the planar polygon corresponds to H/i units of the coordinate system. The planar polygon is placed such that when the center of the main surface of the planar polygon, on which the texture is mapped, coincides with the origin of the XY coordinate system of the virtual space, the horizontal direction of the planar polygon corresponds to the X-axis direction (the right direction is the X-axis positive direction), and the vertical direction of the planar polygon corresponds to the Y-axis direction (the up direction is the Y-axis positive direction). In this case, in the main surface of the planar polygon, on which the texture is mapped: the top right corner position is placed at (X, Y)=(W/2i, H/2i); the bottom right corner position is placed at (X, Y)=(W/2i, −H/2i); the top left corner position is placed at (X, Y)=(−W/2i, H/2i); and the bottom left corner position is placed at (X, Y)=(−W/2i, −H/2i).
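  • The corner coordinates listed above follow directly from W, H, and i; the short sketch below computes them (the struct and function names are illustrative only).

    // A texture of i x i pixels covers 1 x 1 unit of the XY coordinate system, so
    // a W x H pixel texture covers (W/i) x (H/i) units centered at the origin.
    struct Vec2 { float x, y; };

    struct PolygonCorners {
        Vec2 topRight, bottomRight, topLeft, bottomLeft;
    };

    PolygonCorners placeScreenPolygon(float W, float H, float i) {
        const float halfW = W / (2.0f * i);        // W/2i
        const float halfH = H / (2.0f * i);        // H/2i
        return {
            {  halfW,  halfH },    // top right    ( W/2i,  H/2i)
            {  halfW, -halfH },    // bottom right ( W/2i, -H/2i)
            { -halfW,  halfH },    // top left     (-W/2i,  H/2i)
            { -halfW, -halfH },    // bottom left  (-W/2i, -H/2i)
        };
    }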
  • With the arrangement as described above, an area of 1 unit×1 unit in the above coordinate system corresponds to an area of i pixels×i pixels in the texture, and therefore, an area of horizontal (W/i)×vertical (H/i) in the planar polygon corresponds to the size of W pixels×H pixels in the texture.
  • As described above, the planar polygon placed in the coordinate system of the virtual space is rendered with a parallel projection such that 1 pixel in the real camera image (texture) corresponds to 1 dot on the display screen. Thus, a real world image is generated that corresponds to the camera image obtained from the real camera of the game apparatus 10.
  • It should be noted that as described above, the texture data of the real camera image included in the real world image data Dc is updated by the real camera image data Db. There is, however, a case where a size of horizontal A×vertical B of an image in the real camera image data Db does not coincide with a size of horizontal W×vertical H of the texture data. In this case, the information processing section 31 updates the texture data by a given method. For example, the information processing section 31 may update the texture data, using an image obtained by enlarging or reducing the sizes of horizontal A and vertical B of the image in the real camera image data Db so as to coincide with an image having a size of W×H (an image of the texture data). Alternatively, for example, it is assumed that the sizes of horizontal A and vertical B of the image in the real camera image data Db are greater than the sizes of horizontal W and vertical H of the texture data, respectively. In this case, for example, the information processing section 31 may update the texture data by clipping an image having a size of W×H (an image of the texture data) from a predetermined position in the image in the real camera image data Db. Yet alternatively, for example, it is assumed that at least one of the sizes of horizontal A and vertical B of the image in the real camera image data Db is smaller than the sizes of horizontal W and vertical H in the texture data. In this case, for example, the information processing section 31 may update the texture data by enlarging the image in the real camera image data Db so as to exceed the size of the texture data, and subsequently clipping an image having a size of W×H (an image of the texture data) from a predetermined position in the enlarged image.
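  • The sketch below combines the three size-matching alternatives above into one selection routine; the Image type and the scaleTo/clipFrom helpers are assumptions (the stubs here only carry the dimensions), and an actual implementation is free to use any one of the alternatives on its own.

    // Hypothetical image type; pixel data is omitted, so the helper stubs below
    // only carry the resulting dimensions. Real versions would resample or copy
    // the pixels.
    struct Image { int width = 0; int height = 0; };

    static Image scaleTo(const Image&, int w, int h)            { return { w, h }; }
    static Image clipFrom(const Image&, int, int, int w, int h) { return { w, h }; }

    // Bring an A x B camera image to the W x H size of the texture data.
    Image fitCameraImage(const Image& camera, int W, int H) {
        const int A = camera.width, B = camera.height;
        if (A == W && B == H) {
            return camera;                          // sizes already coincide
        } else if (A >= W && B >= H) {
            return clipFrom(camera, 0, 0, W, H);    // clip from a predetermined position
        } else {
            // At least one side is too small: enlarge so that both sides exceed
            // the texture size, then clip a W x H area from the enlarged image.
            const float sx = static_cast<float>(W) / static_cast<float>(A);
            const float sy = static_cast<float>(H) / static_cast<float>(B);
            const float s  = (sx > sy) ? sx : sy;
            const Image enlarged = scaleTo(camera,
                                           static_cast<int>(A * s + 0.5f),
                                           static_cast<int>(B * s + 0.5f));
            return clipFrom(enlarged, 0, 0, W, H);
        }
    }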
  • In addition, in the present embodiment, the horizontal×vertical size of the display screen of the upper LCD 22 coincides with the horizontal×vertical size of the texture data in the real camera image; however, these sizes do not need to coincide with each other. In this case, the size of the display screen of the upper LCD 22 and the size of the real world image do not coincide with each other. The information processing section 31 may change the size of the real world image by a known method when the real world image is displayed on the display screen of the upper LCD 22.
  • Next, as shown in FIG. 32B, the information processing section 31 performs a process of rendering the virtual space (step 84), and proceeds to the subsequent step 85. For example, the information processing section 31 generates, taking the opening determination data into account, an image obtained by rendering the virtual space where the enemy object EO, the bullet object BO, and the back wall BW are placed, to thereby update the rendered image data of the virtual space using the generated image, the rendered image data included in the rendered image data Dk. With reference to FIGS. 37 through 39, an example of the rendering process is described below.
  • FIG. 37 shows an example of the placement of the enemy object EO, the bullet object BO, the boundary surface 3 (opening determination data), and the back wall BW in the virtual space. Further, FIG. 38 shows the positional relationships between the objects on the assumption that the virtual camera (virtual world drawing camera) in FIG. 37 is directed in the direction of (X, Y, Z)=(0, 0, −1) from the origin. As described above, the enemy object EO, the bullet object BO, the boundary surface 3, and the back wall BW are each placed in accordance with the data indicating the placement position included in the corresponding one of the enemy object data Df, the bullet object data Dg, the boundary surface data Dd, and the back wall image data De. Further, in the virtual space, a virtual world drawing camera C2 for rendering the virtual space is placed in accordance with the data indicating the placement direction and the placement position, the data included in the virtual camera data Dj.
  • Here, first, a description is given of the position of the boundary surface 3 (opening determination data). As described above, in the image processing program according to the present embodiment, a real image in which an opening is provided is generated by multiplying the opening determination data by color information of the real world image (the rendered image data of the real camera image). Accordingly, for example, 1 horizontal coordinate unit×1 vertical coordinate unit in the rendered image data of the real camera image (see the positional relationships in the planar polygon in FIGS. 35 and 36) corresponds to 1 horizontal coordinate unit×1 vertical coordinate unit of the boundary surface 3 (specifically, the opening determination data) in the virtual space. That is, it is assumed that when the boundary surface 3 is viewed from the virtual world drawing camera C2 shown in FIG. 37 or 38 with a perspective projection, the range of the boundary surface 3 in the field of view of the virtual world drawing camera C2 corresponds to the horizontal×vertical size of the rendered image data of the real camera image.
  • FIG. 39 shows an example of the positional relationship between the virtual world drawing camera C2 and the boundary surface 3. The case is considered where the boundary surface 3 is subjected to a perspective projection from the virtual world drawing camera C2 directed in the direction of (X, Y, Z)=(0, 0, −1) from the origin. In this case, if the boundary surface 3 is placed at the position of Z=Z0 shown in FIG. 39, 1 horizontal coordinate unit×1 vertical coordinate unit in the opening determination data corresponds to 1 horizontal coordinate unit×1 vertical coordinate unit in the rendered image data of the real camera image. Here, the position of Z=Z0 is the position where, when the angle of view in the Y-axis direction of the virtual world drawing camera C2 that performs a perspective projection on boundary surface 3 is θ, the length between the fixation point of the virtual world drawing camera C2 and the display range in the Y-axis positive direction is H/2i. It should be noted that as described above, “H” is the number of vertical dots on the display screen of the upper LCD 22, and “i” is the number of pixels in the texture to be mapped onto 1 unit of the coordinate system of the virtual space. Then, if the distance between the center of the virtual world drawing camera C2 and the position of Z=Z0 is D (D>0), the following formula is obtained.

  • tan θ=(H/2i)/D=H/2Di
  • Thus, when a virtual world image is generated by performing a perspective projection on the enemy object EO and the like described later, taking the boundary surface 3 into account, the settings of the virtual world drawing camera C2 for generating the virtual world image are “the angle of view θ in the Y-axis direction = tan⁻¹(H/2Di), and the aspect ratio = W:H”. Then, the boundary surface 3 (specifically, the opening determination data indicating the state of the boundary surface 3) is placed at the view coordinates of Z=Z0 from the virtual world drawing camera C2. With this, the range of the boundary surface 3 in the field of view of the virtual world drawing camera C2 has a size of W×H.
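  • The relationship tan θ = (H/2i)/D can be evaluated either for θ (given the distance D to the boundary surface 3) or for D (given a fixed angle of view θ); a small sketch, with illustrative function names, is given below.

    #include <cmath>

    // Angle of view theta in the Y-axis direction of the virtual world drawing
    // camera C2, given the screen height H in dots, the texture density i
    // (pixels per coordinate unit), and the distance D to the boundary surface 3:
    // tan(theta) = (H / (2 * i)) / D.
    double angleOfViewY(double H, double i, double D) {
        return std::atan(H / (2.0 * D * i));
    }

    // Conversely, if theta is fixed, the boundary surface 3 must be placed at the
    // distance D = Z0 that satisfies the same relation.
    double boundaryDistance(double H, double i, double theta) {
        return H / (2.0 * i * std::tan(theta));
    }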
  • Next, the rendering process of the virtual space is described. The information processing section 31 generates an image obtained by rendering the virtual space such that the boundary surface 3 is present at the position described above. The information processing section 31 performs the rendering process taking into account the combination of the real world image to be made later. An example of the rendering process is specifically described below.
  • The information processing section 31 renders with a perspective projection from the virtual world drawing camera C2 the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, such that the boundary surface 3 is present as shown in FIG. 37 (or FIG. 38). At this time, the information processing section 31 takes into account the information about the priority of drawing. In a normal perspective projection, rendering is performed such that an object present closer when viewed from the virtual camera in the virtual space is preferentially drawn. Accordingly, in the normal perspective projection, an object present in the second space 2 is not drawn due to the presence of the boundary surface 3. In the game according to the present embodiment, an opening is provided in the boundary surface 3 (real world image), so that a part of the second space 2 can be viewed through the opening. Further, the shadow of the object present in the second space 2 is drawn in combination with the real world image. This makes it possible to give the user a feeling as if the virtual world further exists beyond the real world image. Specifically, the information processing section 31 performs the rendering process using the information about the priority of drawing. It should be noted that in the image processing program according to the present embodiment, alpha values are used as an example of the information about the priority of drawing.
  • In the perspective projection described above, the object present in the second space 2 (the enemy object EO or the back wall BW in the present embodiment) is present behind the boundary surface 3. Here, the opening determination data is set in the boundary surface 3. As described above, the opening determination data is texture data of a rectangle in which alpha values are stored, and sets of coordinates in the texture data correspond to positions on the boundary surface in the virtual space. Thus, the information processing section 31 can specify an area of the opening determination data in the range of the field of view of the virtual world drawing camera C2, the area corresponding to the object present in the second space 2.
  • It should be noted that in the present embodiment, for example, in an area having the opening determination data in which alpha values of “0” are stored (an open area), the information processing section 31 draws (renders) images of a virtual object and the back wall that are present in the second space 2, in an area that can be viewed through the open area. Further, in an area having the opening determination data in which alpha values of “0.2”, which correspond to an unopen area, are stored (an area handled as an area where alpha values of “1” are stored as an unopen area), the information processing section 31 does not draw the virtual object and the back wall that are present in the second space 2. That is, in the image to be displayed, a real world image is drawn in the portion corresponding to this area by a combination process in step 85 described later.
  • Therefore, in an area having the opening determination data in which “0” is stored as viewed from the virtual world drawing camera C2, rendering is performed such that image data included in the substance data Df1 or the back wall image data De is drawn. Then, on the upper LCD 22, images of the virtual object and the back wall are displayed in the portion corresponding to this area by the combination process in step 85 described later.
  • In addition, in an area having the opening determination data in which alpha values of “0.2”, which indicate an unopen area, are stored as viewed from the virtual world drawing camera C2 (an area handled as an area where alpha values of “1” are stored as an unopen area), the virtual object and the back wall that are present in the second space 2 are not drawn. That is, in the image to be displayed on the upper LCD 22, a real world image is drawn in the portion corresponding to this area by the combination process in step 85 described later. For the shadow ES (silhouette model) of the enemy object EO described above, however, a depth determination is set to invalid between the shadow ES and the boundary surface 3. Accordingly, alpha values of “1” of the silhouette model are greater than alpha values of “0.2” of the boundary surface 3, and therefore, the shadow ES is drawn in an area where alpha values of “1”, which indicate an unopen area, are stored. With this, the shadow ES of the enemy object EO is drawn on the real world image. Further, when the enemy object EO is present in the first space 1, the silhouette model of the enemy object EO is sized and placed so as to be included in the substance model, and a depth determination is set to valid between the substance model of the enemy object EO and the silhouette model; consequently, the silhouette model is hidden by the substance model, and therefore is not drawn.
  • It should be noted that in the present embodiment, as shown in FIG. 20A, the shape of the boundary surface 3 is a central portion of a spherical surface, and therefore, the opening determination data may not be present depending on the direction of the field of view of the virtual world drawing camera C2. In this case, the above process is performed on the assumption that the opening determination data is present in which alpha values of “0.2” are stored in a simulated manner. That is, an area where the opening determination data is not present is handled as an area where alpha values of “1”, which indicate an unopen area, are stored.
  • It should be noted that the silhouette data Df2 included in the enemy object data Df corresponding to the enemy object EO according to the present embodiment is set such that the normal directions of a plurality of planar polygons correspond to radiation directions as viewed from the enemy object, and to each planar polygon, a texture of the silhouette image of the enemy object as viewed from the corresponding direction is applied. Accordingly, in the image processing program according to the present embodiment, the shadow ES of the enemy object EO in the virtual space image is represented as an image on which the orientation of the enemy object in the second space 2 is reflected.
  • By the above process, the information processing section 31 renders with a perspective projection the enemy object EO, the bullet object BO, and the back wall BW that are placed in the virtual space, and generates a virtual world image as viewed from the virtual world drawing camera C2, to thereby update the rendered image data of the virtual space (step 84 of FIG. 32B). It should be noted that the image generated by this process is an image obtained by excluding the real world image from the display image shown in FIG. 40.
  • Next, the information processing section 31 generates a display image obtained by combining the real world image with the virtual space image (step 85), and ends the process of this subroutine.
  • For example, the information processing section 31 generates a combined image of the real world image and the virtual space image by combining the rendered image data of the real camera image with the rendered image of the virtual space such that the rendered image of the virtual space is given preference. Then, the information processing section 31 generates a display image by preferentially combining the image data included in the aiming cursor image data at the center of the combined image (the center of the field of view of the virtual world drawing camera C2) (FIG. 40). FIG. 40 shows an example of the display image generated by the first drawing method or the second drawing method. It should be noted that when a virtual space image is not stored in the rendered image data of the virtual space, the information processing section 31 may store the real world image stored in the rendered image data of the camera image as it is in the display image data Dl.
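  • A minimal sketch of the combination in step 85 is shown below, assuming that a pixel of the rendered virtual space image whose alpha is zero means that nothing was rendered at that position; the names Rgba and combineImages are illustrative, and the aiming cursor AL would afterwards be drawn at the center of the resulting image.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical RGBA pixel; alpha 0 is taken to mean that no virtual space
    // fragment was rendered at this position.
    struct Rgba { std::uint8_t r, g, b, a; };

    // Combine the rendered real camera image with the rendered virtual space
    // image so that the virtual space image is given preference wherever it has
    // content; elsewhere the real world image shows through.
    void combineImages(const std::vector<Rgba>& realWorld,
                       const std::vector<Rgba>& virtualWorld,
                       std::vector<Rgba>& display) {
        display.resize(realWorld.size());
        for (std::size_t p = 0; p < realWorld.size(); ++p) {
            display[p] = (p < virtualWorld.size() && virtualWorld[p].a != 0)
                             ? virtualWorld[p]     // virtual space image has priority
                             : realWorld[p];       // otherwise the real world image
        }
    }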
  • As described above, the updating process of the display image (subroutine) is completed by the first drawing method or the second drawing method.
  • Referring back to FIG. 29, after the updating process of the display image in step 57, the information processing section 31 displays the display image on the upper LCD 22 (step 58), and proceeds to the subsequent step 59. For example, the CPU 311 of the information processing section 31 stores the display image data Dl updated in step 57 described above (the display image) in the VRAM 313. Then, the GPU 312 of the information processing section 31 outputs the display image drawn in the VRAM 313 to the upper LCD 22, whereby the display image is displayed on the upper LCD 22.
  • Next, the information processing section 31 determines whether or not the game is to be ended (step 59). Conditions for ending the game may be, for example: that the predetermined conditions described above (the game is completed or the game is over) have been satisfied; or that the user has performed an operation for ending the game. When the game is not to be ended, the information processing section 31 proceeds to step 52 described above, and repeats the same process. On the other hand, when the game is to be ended, the information processing section 31 ends the process of the flow chart.
  • <Operations and Effects of Image Processing According to First Embodiment>
  • As described above, in the image processing program according to the present embodiment, as shown in the processes of FIGS. 15 and 16, the face image acquired in the face image acquisition process is not released for the user until the user succeeds in the game (first game). That is, the user cannot save the acquired face image in the saved data storage area Do of the game until the user succeeds in the game. Further, the user cannot, for example, copy, modify, or transfer the acquired face image. On the other hand, when the user stops retrying the game in the state where the user has failed in the game, the acquired face image is discarded, and the process ends. Further, depending on the manner of the process of the first game, for example, the execution mode or the specification, when the result of the first game is a failure, the face image is immediately discarded, and the process ends. Accordingly, until the acquired face image is handed over, that is, until the face image is saved in the saved data storage area Do, the user is fixated on the game, and pursues a success with enthusiasm. That is, based on the image processing program according to the present embodiment, the user can tackle the game very seriously.
  • In addition, when the operation of the user on the GUI at the start of the game is an instruction to “acquire a face image with the inner capturing section 24” (“Yes” in step 9 of FIG. 14), and the face image acquisition process 1 (step 10 of FIG. 14) is performed, the following effects are also expected. That is, when a face image is acquired by performing capturing with the inner capturing section 24 at the start of the game (typically, before the first game is started), face images different from capture to capture are obtained. Thus, for example, as compared to the case of selecting a specific image stored in another device and using the selected image in the game, a desire for a success in the game is increased. Further, a similar effect is expected also in the case of performing the face image acquisition process 2 by the outer capturing section 23 (step 12 of FIG. 14). This is because also when a face image is acquired by performing capturing with the outer capturing section 23 at the start of the game (typically, before the first game is started), face images different from capture to capture are obtained.
  • In addition, based on the image processing program according to the present embodiment, when the user has succeeded in the first game, the user can collect, in the saved data storage area Do, various face images, such as a face image of the user themselves, face images of people around the user, a face image included in an image obtained by a video device, and a face image of a living thing owned by the user. The game apparatus 10 can display the collected face images, for example, on the screen as shown in FIGS. 7 and 8. Then, the game apparatus 10 represents the state where, for example, on the screen as shown in FIG. 8, face images related to the face image in the state of being selected show reactions. Examples of the reactions include: giving a look to the face image in the state of being selected with one eye closed; and turning its face to the face image. Accordingly, in a virtual reality world including the face images collected in the game apparatus 10, the game apparatus 10 can represent relationships based on human relationships, intimacies, and the like in the real world. As a result, it is possible to cause the user having collected the face images to feel a sense of affinity for the virtual reality world including the face images, a familiarity with the collected face images, emotions similar to those toward people or living things in the real world, and the like.
  • In addition, based on the image processing program according to the present embodiment, it is possible to generate an enemy object EO by texture-mapping a face image selected from among the collected face images onto the facial surface portion of the enemy object EO, and execute the game. The user can freely determine a cast by attaching a face image selected from among the collected face images to the enemy object EO that appears in the game. Accordingly, during the execution of the game, the user can enhance the possibility of becoming increasingly enthusiastic about the game, by an effect obtained from the face of the enemy object EO.
  • In addition, based on the image processing program according to the present embodiment, in the case where an enemy object EO is generated, when a face image has entered the state of being selected in order to be attached to the enemy object EO, face images related to the face image in the state of being selected show reactions. Accordingly, it is possible to cause the user who determines the cast of the enemy object EO to feel a sense of affinity for the virtual reality world, a familiarity with the face images displayed as a list, emotions similar to those toward people in the real world, and the like.
  • It should be noted that in the first embodiment, an enemy object EO is generated by attaching a face image to the enemy object EO. Such a process, however, is not limited to the generation of an enemy object EO, and can also be applied to the generation of character objects in general that appear in the game. For example, a face image acquired by the user may be attached to an agent who guides an operation on the game apparatus 10 or the progression of the game. Alternatively, a face image acquired by the user may be attached to characters that appear in the game apparatus 10, such as: a character object representing the user themselves; a character object that appears in the game in a friendly relationship with the user; a character object representing the owner of the game apparatus; and the like.
  • In the above descriptions, a face image is assumed to be an image of a person's face; however, the present invention is not limited to a face image of a person, and can also be applied to a face image of an animal. For example, face images may be collected by performing the face image acquisition process described in the first embodiment, in order to acquire face images of various animals, such as mammals, e.g., dogs, cats, and horses, birds, fish, reptiles, amphibians, and insects. For example, with the game apparatus 10, it is possible to represent the relationships between people and animals in the real world, such that, as shown in the relationships between the people on the screen shown in FIG. 8, a little bird chirps at, a dog barks at, and a cat gives a look to, the person in the state of being selected. On the other hand, in the relationship between pet and master, the game apparatus 10 can reflect emotions, consciousness, and real-world relationships on the virtual world such that when a face image of the pet has entered the state of being selected, face images corresponding to the master and their family smile at, or give looks to, the pet. Then, it is possible to execute the game while making the user conscious of real-world relationships, by attaching collected faces to enemy objects EO and other character objects by the cast determination process.
  • It should be noted that the relationship between pet and master, the relationships between the master and their family, and the like may be defined by the face image management information Dn1 through a UIF (user interface), so that reference can be made to these relationships for the relationships between face images. It may be set such that emotions such as love and hate, and good and bad emotions toward a pet of a loved person and a pet of a hated person can be defined. Alternatively, for example, setting may be stored in the face image management information Dn1 such that an animal whose face image has succeeded in being saved in the saved data storage area Do of the game when the result of the game executed with a face image of the master has been successful is in an intimate relationship with the master. With the game apparatus 10, the user can execute a game in which a character object is generated and on which consciousness in the real world is reflected, based on the various face images collected as described above.
  • In addition, in the image processing program according to the present embodiment, as shown in FIG. 16, the user is led to acquire a face image by performing capturing with the inner capturing section 24 prior to the two capturing sections of the outer capturing section 23 (the left outer capturing section 23 a and the right outer capturing section 23 b). The inner capturing section 24 is used mainly to capture the user who operates the game apparatus 10, and therefore is difficult to use for capturing a person other than the user. For the same reason, the inner capturing section 24 is suitable for capturing the owner. Accordingly, this process has the effect of prohibiting the use of the outer capturing section 23 in the state where neither the user nor the owner of the game apparatus 10 has a face image saved in the game apparatus 10. This makes it possible to increase the possibility of, for example, prohibiting a third person from capturing an image using the game apparatus 10 whose owner is not specified or the game apparatus 10 whose user is not specified.
  • In addition, in the image processing program according to the present embodiment, as shown in FIGS. 19A through 19C, the information processing section 31 leads the user to preferentially capture a face image corresponding to an unacquired attribute. Such a process makes it possible to assist a face image collection process of a user who wishes to acquire face images having as balanced attributes as possible.
  • In addition, in the image processing program according to the present embodiment, display is performed such that a real world image obtained from a real camera and a virtual space image including an object present behind the real world image are combined.
  • Therefore, in the image processing program according to the present embodiment, it is possible to generate an image capable of attracting the user's interest, by performing drawing so as to represent unreality in a background in which a real world image is used.
  • In addition, when an object is present behind the real world image in a combined image to be displayed (e.g., the enemy object EO present in the second space 2), a substance image of the object is displayed in the real world image (boundary surface 3), in an area where an opening is present. Further, a shadow image of the object is displayed in the real world image, in an area where an opening is not present (see FIG. 24). Furthermore, the substance image and the shadow image are each an image corresponding to the orientation based on the placement direction or the moving direction of the object in the virtual space.
  • Therefore, in the image processing program according to the present embodiment, it is possible to generate an image in which the user can recognize the activities, such as the number and the moving directions, of objects present behind the real world image.
  • In addition, in the image processing program according to the present embodiment, an image of an unreal space, such as an image of outer space, can be used as image data of the back wall BW. The image of the unreal space can be viewed through an opening in the real world image. The opening is specified at a position in the virtual space. Then, the orientation of the real camera and the orientation of the virtual camera are associated together.
  • Therefore, in the image processing program according to the present embodiment, it is possible to provide an opening at a position corresponding to the orientation of the real camera, and represent the opening at the same position in the real world image. That is, in the image processing program according to the present embodiment, even when the orientation of the real camera has changed, the opening is represented at the same position in real space. This makes it possible to generate an image that can be recognized by the user as if real space is linked with the unreal space.
  • In addition, the real world image in which an opening is represented is generated by the multiplication of the real world image obtained from the real camera and alpha values.
  • Therefore, in the image processing program according to the present embodiment, it is possible to represent and generate an opening by a simplified method.
  • In addition, an opening in the real world image that is generated by the enemy object EO passing through the boundary surface 3 is generated by multiplying: the opening shape data Df3 included in the enemy object data Df; by the opening determination data corresponding to a predetermined position.
  • Therefore, in the image processing program according to the present embodiment, it is possible to set an opening corresponding to the shape of a character having collided, by a simplified method.
  • In addition, in the image processing program according to the present embodiment, it is possible to draw a shadow image by comparing alpha values. Further, it is possible to switch between the on/off states of the drawing of a shadow image by changing alpha values set in the silhouette data Df2.
  • Therefore, in the image processing program according to the present embodiment, it is possible to leave the drawing of a shadow image to the GPU, and also switch between the display and hiding of a shadow by a simplified operation.
  • Second Embodiment
  • With reference to FIGS. 41 and 42, a description is given below of an image processing apparatus that executes an image processing program according to a second embodiment of the present invention. In the first embodiment, the first game is executed in the face image acquisition process 1 (step 10 of FIG. 14) and the face image acquisition process 2 (step 12 of FIG. 14). When the user has succeeded in the first game, the game apparatus 10 permits the image acquired in the face image acquisition process 1 and the face image acquisition process 2 to be stored in the saved data storage area Do. Then, if the user succeeds in the first game, the user can sequentially add face images acquired by a similar process to the saved data storage area Do. Then, based on the face images collected in the saved data storage area Do, the game apparatus 10 creates character objects, such as enemy objects EO, for example, in accordance with an operation of the user or automatically. Then, the game apparatus 10 causes the character objects created based on the face images collected by the user, to appear for the user in the first game (step 106 of FIG. 15 and step 129 of FIG. 16), the second game (step 18 of FIG. 14), and the like, and provides the user with a virtual world on which human relationships and the like in the real world are reflected. Accordingly, in such a virtual world on which the real world is reflected, the user can enjoy executing, for example, the game as shown in FIGS. 20A through 26. In the present embodiment, a description is given of an example of another game processing performed in such a virtual world on which the real world is reflected. Similarly to the first embodiment, the game according to the present embodiment is also provided to the user by the information processing section 31 of the game apparatus 10 executing the image processing program expanded in the main memory 32. Further, similarly to the game according to the first embodiment, the game according to the present embodiment may be, for example, executed as the first game (step 106 of FIG. 15 and step 129 of FIG. 16) for the face image acquisition process 1 in step 10 and the face image acquisition process 2 in step 12, the processes shown in FIG. 14. Furthermore, for example, similarly to the second game (step 18 of FIG. 14), the game according to the present embodiment may be executed on the assumption that face images are collected and accumulated in the saved data storage area Do of the game.
  • FIG. 41 is an example of a screen displayed on the upper LCD 22 of the game apparatus 10 according to the present embodiment. The procedure of the creation of this screen is similar to that of the first embodiment. That is, the case is assumed where, for example, the user holds the lower housing 11 with both hands as shown in FIG. 4, such that the lower housing 11 and the upper housing 21 of the game apparatus 10 are in the open state. At this time, the user can view the display screen of the upper LCD 22. Further, in this state, the outer capturing section 23 can, for example, capture space ahead in the line of sight of the user. During the execution of the game according to the present embodiment, the game apparatus 10 displays in the background of the screen an image captured by the outer capturing section 23. More specifically, the information processing section 31 texture-maps, on a frame-by-frame basis, an image captured by the outer capturing section 23 onto the background portion of the screen of the game. When the user has changed the orientation of the game apparatus 10 while holding it in their hand, an image acquired through the outer capturing section 23 in the direction of the line of sight of the outer capturing section 23 after the change is displayed in the background of the game. That is, in the background of the screen shown in FIG. 41, an image acquired from the direction in which the user has directed the outer capturing section 23 of the game apparatus 10 is embedded.
  • In addition, on the background of the screen, for example, an enemy object EO1 is displayed, the enemy object EO1 created in accordance with the procedure described with reference to the example of the screen shown in FIG. 10 and the cast determination process in FIG. 18. Display is performed such that a face image selected in the cast determination process in FIG. 18 is texture-mapped on the facial surface portion of the enemy object EO1. It should be noted that the facial surface portion of the enemy object EO1 does not necessarily need to be formed by texture mapping. Alternatively, the enemy object EO1 may be displayed by simply combining the peripheral portion H13 of the head shape of the enemy object EO shown in FIG. 9 with the face image. Thus, hereinafter, an expression is used, such as “a face image is attached to the facial surface portion of an enemy object”.
  • In addition, in FIG. 41, around the enemy object EO1, enemy objects EO2 through EO7, which are smaller than the enemy object EO1, are displayed. As described above, on the screen shown in FIG. 41, seven enemy objects EO in total are displayed, namely the enemy objects EO1 through EO7. In the process of the game apparatus 10 according to the present embodiment, however, the number of enemy objects EO is not limited to seven. It should be noted that as has already been described in the first embodiment, when the enemy objects EO1 through EO7 do not need to be distinguished from one another, they are referred to simply as “enemy objects EO”.
  • In addition, to any one or more of the enemy objects EO1 through EO7, e.g., to the enemy object EO6, the same face image as that of the enemy object EO1 is attached. On the other hand, to the other enemy objects EO2 through EO5 and EO7, face images different from that of the enemy object EO1 are attached.
  • In addition, on the screen shown in FIG. 41, an aiming cursor AL for attacking the enemy object EO1 and the like is displayed. The positional relationships and the relative movement relationships between the aiming cursor AL and the background, and between the aiming cursor AL and the enemy objects EO1 and the like, are similar to those described in the first embodiment (FIGS. 23 through 26).
  • That is, the enemy object EO1 freely moves around on the screen shown in FIG. 41. More specifically, the enemy object EO1 freely moves around in a virtual space having an image captured by the outer capturing section 23 as its background. Accordingly, it seems to the user viewing the upper LCD 22 as if the enemy object EO1 freely moves in the space where the user themselves is placed. Further, the enemy objects EO2 through EO7 are placed around the enemy object EO1.
  • Therefore, when the user has changed the orientation of the game apparatus 10 relative to the enemy objects EO1 through EO7 that freely move around in the virtual space, the user can point the aiming cursor AL displayed on the screen at the enemy objects EO1 through EO7. When the user has pressed the operation button 14B (A button) corresponding to a trigger button in the state where the aiming cursor AL is pointed at the enemy objects EO1 through EO7, the user can fire a bullet at the enemy objects EO1 through EO7.
  • In the game according to the present embodiment, however, an attack on any of the enemy objects EO1 through EO7 other than one having the same face image as that of the enemy object EO1 is not a valid attack. For example, when the enemy object EO1 or an enemy object having the same face image as that of the enemy object EO1 has been attacked by the user, the user scores points, or the enemy objects lose points. Further, when the enemy objects EO2 through EO7, each of which is smaller in dimensions than the enemy object EO1, have been attacked by the user, the user scores more points. Alternatively, when the enemy objects EO2 through EO7 have been attacked, the enemy objects lose more points than when the enemy object EO1 has been attacked. An attack on any of the enemy objects EO2 through EO7 having a face image different from that of the enemy object EO1, however, is an invalid attack. That is, the user is obliged to attack an enemy object having the same face image as that of the enemy object EO1. Hereinafter, an enemy object having a face image different from that of the enemy object EO1 is referred to as a “misidentification object”. It should be noted that in FIG. 41, the enemy objects EO2 through EO7 have head shapes of the same type. Among the enemy objects EO2 through EO7, however, any of the enemy objects EO2 through EO5, EO7, and the like, which are misidentification objects, may have head shapes of different types. It should be noted that as has already been described, when the enemy objects EO1 through EO7 and the like do not need to be distinguished from one another, they are referred to simply as “enemy objects EO”.
  • With reference to FIG. 42, a description is given of an example of the operation of the image processing program executed by the information processing section 31 of the game apparatus 10. FIG. 42 is a flow chart showing an example of the operation of the information processing section 31. In this process, the information processing section 31, for example, receives the selection of a face image, and generates enemy objects (step 30). The process of step 30 is, for example, similar to the cast determination process in FIG. 18.
  • Next, the information processing section 31 generates misidentification objects (step 31). The misidentification objects may be generated by, for example, attaching face images other than the face image of the enemy objects EO specified in step 30, to the facial surface portion of the head shape of the enemy objects EO. The specification of the face images of the misidentification objects is not limited. For example, the face images of the misidentification objects may be selected from among face images already acquired by the user, as shown in FIGS. 7 and 8. Alternatively, for example, the face images of the misidentification objects may be stored in advance in the data storage internal memory 35 before the shipment of the game apparatus 10. Yet alternatively, the face images of the misidentification objects may be stored in the data storage internal memory 35 simultaneously at the installation or the upgrading of the image processing program. Further, for example, face images obtained by deforming the face image of the enemy objects EO, for example, face images obtained by switching parts of the face, such as eyes, nose, and mouth, with those of another face image, may be used for the misidentification objects.
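  • As an illustration only, the selection of misidentification face images described above might be sketched in Python as follows; the function and variable names (make_misidentification_objects, candidate_faces, and so on) are hypothetical and are not part of the image processing program itself.

    import random

    def make_misidentification_objects(target_face, candidate_faces, count):
        """Pick face images that differ from the target enemy object's face.

        target_face     -- identifier of the face attached to the enemy object EO
        candidate_faces -- face identifiers already acquired or stored in advance
        count           -- number of misidentification objects to generate
        """
        pool = [f for f in candidate_faces if f != target_face]
        random.shuffle(pool)
        chosen = pool[:count]
        # If there are not enough distinct faces, fall back to a deformed copy
        # of the target face (e.g., eyes/nose/mouth swapped with another face).
        while len(chosen) < count:
            chosen.append(("deformed", target_face))
        return chosen

    # Example: EO1 uses face "A"; misidentification objects get other faces.
    print(make_misidentification_objects("A", ["A", "B", "C", "D"], 5))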
  • Next, the information processing section 31 starts the game of the enemy objects EO and the misidentification objects (step 32). Then, the information processing section 31 determines whether or not the user has made an attack (step 33). The attack of the user is detected by a trigger input, for example, the pressing of the operation button 14B (A button) in the state where the aiming cursor AL shown in FIG. 41 is pointed at the enemy objects EO. When the user has made an attack, the information processing section 31 determines whether or not the attack has been made on an appropriate enemy object EO (step 35). When an appropriate enemy object EO has been attacked, the information processing section 31 destroys the enemy object EO, and adds points to the score of the user (step 36). On the other hand, in the determination in step 35, when an attack on a misidentification object, which is not an appropriate enemy object EO, has been detected, the information processing section 31 performs nothing on the assumption that the attack is invalid. Further, in the determination in step 33, when the user has not made an attack, the information processing section 31 performs another process (step 34). Said another process is, for example, a process specific to each game. Examples of said another process include: a process of propagating the enemy object EO6 and the misidentification objects EO2 through EO5 and EO7 in FIG. 41; and a process of switching the position of the enemy object EO6 and the positions of the misidentification objects EO2 through EO5, EO7, and the like in FIG. 41.
  • Then, the information processing section 31 determines whether or not the game is to be ended (step 37). The game is ended, for example, when the user has destroyed all the propagating enemy objects EO, or when the score of the user has exceeded a reference value. Alternatively, the game is ended, for example, when the enemy objects EO have propagated so as to exceed a predetermined limit, or when the points lost by the user have exceeded a predetermined limit. When the game is not to be ended, the information processing section 31 returns to step 33.
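  • The loop of steps 32 through 37 and the rule that only an attack on an enemy object carrying the same face image as the enemy object EO1 is valid can be pictured with the following minimal Python sketch; the point values and the class and function names are invented for illustration and do not appear in the actual program.

    class Enemy:
        def __init__(self, name, face, small):
            self.name, self.face, self.small = name, face, small

    def resolve_attack(target, key_face, score):
        """Steps 33, 35, and 36 of FIG. 42: score only valid attacks."""
        if target.face != key_face:            # misidentification object: invalid
            return score, False
        score += 200 if target.small else 100  # smaller valid objects score more
        return score, True                     # the attacked object is destroyed

    score = 0
    eo1 = Enemy("EO1", "A", small=False)       # large enemy object with face "A"
    eo6 = Enemy("EO6", "A", small=True)        # small enemy object, same face
    eo3 = Enemy("EO3", "B", small=True)        # misidentification object
    for target in (eo1, eo6, eo3):
        score, destroyed = resolve_attack(target, key_face="A", score=score)
        print(target.name, "destroyed" if destroyed else "invalid attack", score)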
  • As described above, based on the image processing program according to the present embodiment, it is possible to create enemy objects EO using collected face images, and execute the game. This enables the user to execute a game in a virtual reality world, based on face images of people existing in the real world.
  • In addition, based on the game apparatus 10 according to the present embodiment, the game is executed with a combination of appropriate enemy objects EO and misidentification objects that is intended to confuse the user. Accordingly, the user needs to correctly recognize the face images of the enemy objects EO. As a result, the user needs both the ability to distinguish the enemy objects EO and concentration. Thus, the game apparatus 10 according to the present embodiment makes it possible to give the user a sense of tension when the game is executed, or to stimulate the user's brain while the user recognizes the face images.
  • In the second embodiment, for example, as in the cast determination process in step 16 and the process of the execution of the game in step 18 of FIG. 14, the game is executed by creating enemy objects EO based on face images already stored in the saved data storage area Do of the game. As the process of step 30 of FIG. 42, however, for example, the processes of steps 100 through 105 of FIG. 15 may be performed. That is, the game according to the present embodiment may be executed in the state where a face image has been acquired in the face image acquisition process, but has yet to be stored in the saved data storage area Do of the game. Then, as in steps 107 through 110 of FIG. 15, when the game has been successful, the acquired face image may be stored in the saved data storage area Do of the game. With such a configuration, as in the first embodiment, the user tackles the game increasingly enthusiastically in order to save the acquired face image.
  • Third Embodiment
  • With reference to FIGS. 43 and 44, a description is given below of an image processing apparatus that executes an image processing program according to a third embodiment of the present invention. Similarly to the game according to the first embodiment, the game according to the present embodiment may be, for example, executed as the first game (step 106 of FIG. 15 and step 129 of FIG. 16) for the face image acquisition process 1 in step 10 and the face image acquisition process 2 in step 12, the processes shown in FIG. 14. Further, for example, similarly to the second game (step 18 of FIG. 14), the game according to the present embodiment may be executed on the assumption that face images are collected and accumulated in the saved data storage area Do of the game.
  • That is, as in the case described in the second embodiment, the information processing section 31 of the game apparatus 10 executes the game according to the present embodiment as an example of the processing of the cast determination process in the first embodiment (step 16 of FIG. 14) and the second game (the game executed in step 18 of FIG. 14). Further, the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment also as the first game according to the first embodiment (the game executed in step 106 of FIG. 15 and step 129 of FIG. 16).
  • In addition, in the second embodiment, a description is given of an example of the game processing of the game where a face image is acquired, and enemy objects EO including the acquired face image and misidentification objects are used. Then, in the second embodiment, it is determined that an attack on a misidentification object is an invalid attack.
  • In the present embodiment, instead of the game according to the second embodiment, a description is given of a game where, when an attack on a misidentification object has been detected, a part of the face image of an enemy object EO is replaced with a part of another face image. For example, the enemy object EO is formed by combining the peripheral portion of the enemy object EO (see H13 in FIG. 9) with a face image of the user. A face image of a person close to the user may be used instead of the face image of the user. In this case, the face image of the user (or the face image of the close person) may represent the state of being constrained by the enemy object EO. Then, when the user has defeated the enemy object EO in the game, representation may be made such that the face image of the user (or the face image of the close person) is released from the enemy object EO. On the other hand, when the user has continued to fail in the game, the face image of the user (or the face image of the close person) constrained by the enemy object EO gradually deforms. Then, when the deformation has exceeded a certain limit, the game may be ended.
  • FIG. 43 is an example of a screen displayed on the upper LCD 22 according to the present embodiment. In the example of FIG. 43, an enemy object EO1 (an example of a first character object) is displayed. Further, around the enemy object EO1, an enemy object EO11 (an example of a second character object) and misidentification objects EO12 through EO16 (examples of a third character object), which are smaller in dimensions than the enemy object EO1, are displayed. To the enemy object EO11, the same face image as that of the enemy object EO1 is attached. On the other hand, the configuration of the misidentification objects EO12 through EO16 is similar to that in the second embodiment, and face images different from that of the enemy object EO1 are attached to the misidentification objects EO12 through EO16. It should be noted that the configuration of FIG. 43 is illustrative, and the number of enemy objects smaller in dimensions than the enemy object EO1, such as the enemy object EO11, is not limited to one.
  • In the game according to the present embodiment, for example, it is easy to point the aiming cursor AL at the enemy object EO1, which is larger in dimensions, and therefore, even when the user has attacked the enemy object EO1 and a bullet has hit the enemy object EO1, the points scored by the user or the damage inflicted on the enemy object EO1 are small. Further, it is difficult to point the aiming cursor AL at the enemy object EO11, which is smaller in dimensions, and therefore, when the user has attacked the enemy object EO11 and a bullet has hit the enemy object EO11, the points scored by the user or the damage inflicted on the enemy object EO11 are greater than those in the case of the enemy object EO1.
  • In addition, in the present embodiment, when the misidentification objects EO12 through EO16 have been attacked by the user, a part of the face image attached to the enemy object EO1 is replaced with that of another face image. For example, in the case of FIG. 43, in the face image attached to the enemy object EO1, an eyebrow and an eye are replaced with an eyebrow and an eye of the face image of the misidentification object EO13. As described above, when the user is attempting to attack the smaller enemy object EO11, the misidentification objects EO12 through EO16 lead to the deformation of the enemy object EO1 by confusing the user. It should be noted that in FIG. 43, the misidentification objects EO12 through EO16 have head shapes of the same type. Any of the misidentification objects EO12 through EO16, however, may have head shapes of different types.
  • With reference to FIG. 44, a description is given of an example of the operation of the image processing program executed by the information processing section 31 of the game apparatus 10. FIG. 44 is a flow chart showing an example of the operation of the information processing section 31. In this process, the processes of steps 40 through 42 are similar to the processes of steps 30 through 32 of FIG. 42, and therefore are not described.
  • Then, when having detected an attack on the enemy objects EO (step 43), the information processing section 31 reduces the deformation of the face image, and brings the face image of the enemy object EO1 closer to the face image that was originally attached (step 44). In this case, when the enemy object EO11 shown in FIG. 43, which is smaller in dimensions, has been attacked by the user, the user may score more points than for an attack on the enemy object EO1, which is larger in dimensions. When the enemy object EO11, which is smaller in dimensions, has been attacked by the user, the degree of the reduction of the deformation of the face image may be greater than when the enemy object EO1, which is larger in dimensions, has been attacked by the user.
  • On the other hand, when having detected an attack on the misidentification objects EO12 through EO16 and the like (step 45), the information processing section 31 advances the switching of parts of the face image attached to the enemy object EO1. That is, the information processing section 31 additionally deforms the face image (step 46). Further, when having detected a state other than an attack on the enemy objects EO and an attack on the misidentification objects, the information processing section 31 performs another process (step 47). Said another process is similar to that in the case of step 34 of FIG. 42. For example, the information processing section 31 propagates the enemy objects EO.
  • Then, the information processing section 31 determines whether or not the game is to be ended (step 48). It is determined that the game is to be ended, for example, when the deformation of the face image of the enemy object EO has exceeded a reference limit. Alternatively, it is determined that the game is to be ended, for example, when the user has destroyed the enemy objects EO and scored points of a predetermined limit. When the game is not to be ended, the information processing section 31 returns to step 43.
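  • The following Python fragment is a rough, non-authoritative sketch of the loop of steps 43 through 48, in which attacks on the enemy objects EO reduce a deformation level (more strongly for the smaller enemy object EO11) and attacks on the misidentification objects advance it; the step sizes, point values, and end-condition thresholds are assumptions made only for illustration.

    def third_embodiment_step(deformation, score, event):
        """One pass of steps 43 through 48 of FIG. 44, sketched.

        event is one of "hit_large" (EO1), "hit_small" (EO11),
        "hit_misidentification", or "none".
        """
        if event == "hit_large":
            deformation = max(0, deformation - 1)   # step 44: restore slightly
            score += 10
        elif event == "hit_small":
            deformation = max(0, deformation - 3)   # larger reduction, more points
            score += 50
        elif event == "hit_misidentification":
            deformation += 2                        # step 46: swap in more parts
        game_over = deformation > 10 or score >= 500  # step 48: end conditions
        return deformation, score, game_over

    state = (4, 0, False)
    for ev in ("hit_misidentification", "hit_small", "hit_large"):
        state = third_embodiment_step(state[0], state[1], ev)
        print(ev, "->", state)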
  • As described above, based on the game apparatus 10 according to the present embodiment, when the user has succeeded in attacking the enemy objects EO, the deformed face image is restored. Further, when the misidentification objects have been attacked by the user, the deformation of the face image is further advanced. Accordingly, the user needs to tackle the game with concentration, and this increases a sense of tension during the execution of the game, and therefore makes it possible to train the user's concentration. Further, based on the game apparatus 10 according to the present embodiment, a face image of the user or a face image of a person close to the user is deformed. This makes it possible to increase the possibility that the user becomes enthusiastic about a game in a virtual reality world on which the real world is reflected.
  • In the third embodiment, for example, as in the process of step 30 of FIG. 42, the cast determination process in step 16, and the process of the execution of the game in step 18 of FIG. 14, the description is given, assuming the case where the game is executed by creating enemy objects EO based on face images already stored in the saved data storage area Do of the game. As the process of step 40 of FIG. 44, however, for example, the processes of steps 100 through 105 of FIG. 15 may be performed. That is, the game according to the third embodiment may be executed in the state where a face image has been acquired in the face image acquisition process, but has yet to be stored in the saved data storage area Do of the game. Then, as in steps 107 through 110 of FIG. 15, when the game has been successful, the acquired face image may be stored in the saved data storage area Do of the game. With such a configuration, as in the first embodiment, the user tackles the game increasingly enthusiastically.
  • Fourth Embodiment
  • With reference to FIGS. 45 and 46, a description is given below of an image processing apparatus that executes an image processing program according to a fourth embodiment of the present invention. Similarly to the game according to the first embodiment, the game according to the present embodiment may be, for example, executed as the first game (step 106 of FIG. 15 and step 129 of FIG. 16) for the face image acquisition process 1 in step 10 and the face image acquisition process 2 in step 12, the processes shown in FIG. 14. Further, for example, similarly to the second game (step 18 of FIG. 14), the game according to the present embodiment may be executed on the assumption that face images are collected and accumulated in the saved data storage area Do of the game.
  • That is, as in the case described in the second embodiment, the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment as an example of the processing of the cast determination process in the first embodiment (step 16 of FIG. 14) and the second game (the game executed in step 18 of FIG. 14). Further, the information processing section 31 of the game apparatus 10 can execute the game according to the present embodiment also as the first game according to the first embodiment (the game executed in step 106 of FIG. 15 and step 129 of FIG. 16).
  • In addition, in the third embodiment, a description is given of an example of the game processing of the game where a face image is acquired, and enemy objects EO including the acquired face image and misidentification objects are used. Further, in the third embodiment, when an attack on a misidentification object has been detected, a part of the face image of one of the enemy objects EO is replaced with a part of another face image.
  • In the present embodiment, instead of the game according to the second embodiment and the game according to the third embodiment, a description is given of a process where, at the start of the game, a part of the face included in an enemy object EO has already been replaced with a part of another face image, and where, when the user has won the game, the part of the face included in the enemy object EO returns to that of the original face image.
  • FIG. 45 is an example of a screen displayed on the upper LCD 22 according to the present embodiment. On the left of the screen, a list of face images that can be attached to enemy objects EO (a character column) is displayed as characters. The screen of the game apparatus 10, however, does not need to include the list of face images. Further, for example, when the information processing section 31 has detected, through the GUI, that the user has performed an operation of requesting the display of a list, the list of face images may be displayed. Furthermore, the display position of the list of face images is not limited to the left of the screen as shown in FIG. 45. In the character column, face images, such as face images PS1 through PS5, are displayed.
  • In addition, on the screen shown in FIG. 45, an enemy object EO20 and enemy objects EO21 through EO25 are displayed. The enemy object EO20, the enemy objects EO21 through EO25, and the like are, for example, enemy objects EO created on the screen shown in FIGS. 7 through 9 in the first embodiment, or in the selection operation as shown in the cast determination process in FIG. 18. In the present embodiment, however, the enemy object EO20 is drawn larger at the center of the screen, and the enemy objects EO21 through EO25 and the like are drawn around the enemy object EO20. Further, for example, to the enemy object EO20, the face image PS1 has been originally attached. Still further, for example, to the enemy object EO22, the face image PS2 has been originally attached. Furthermore, to the enemy object EO25, the face image PS5 has been originally attached.
  • Before the start of the game, however, parts of the faces are switched between the enemy object EO20 and the enemy objects EO21 through EO25. For example, noses are switched between the enemy object EO20 and the enemy object EO22. Further, for example, left eyebrows and left eyes are switched between the enemy object EO20 and the enemy object EO25. The switching of parts of the faces may be, for example, performed on a polygon-by-polygon basis when the face images are texture-mapped, the polygons forming three-dimensional models onto which the face images are texture-mapped.
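  • The switching of face parts can be pictured as exchanging the texture regions assigned to the polygons of the parts concerned. The following Python sketch illustrates this under the simplifying assumption that each part is represented by a single named texture region; the names swap_parts, eo20, and eo22 are hypothetical and not taken from the actual program.

    def swap_parts(face_a, face_b, parts):
        """Exchange the texture regions of the named parts between two faces.

        face_a, face_b -- dicts mapping a part name (e.g., "nose") to the
                          texture region used for the polygons of that part
        parts          -- iterable of part names to switch
        """
        for part in parts:
            face_a[part], face_b[part] = face_b[part], face_a[part]

    eo20 = {"left_eye": "PS1:left_eye", "nose": "PS1:nose", "mouth": "PS1:mouth"}
    eo22 = {"left_eye": "PS2:left_eye", "nose": "PS2:nose", "mouth": "PS2:mouth"}
    swap_parts(eo20, eo22, ["nose"])          # noses switched, as in FIG. 45
    print(eo20["nose"], eo22["nose"])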
  • Such switching of parts of the faces may be performed by, for example, randomly changing the number of parts to be switched and target parts to be switched. Further, for example, the number of parts to be switched may be determined in accordance with a success or a failure in, and the score of, the game that has already been executed or another game. For example, when the performance, or the degree of achievement, of the user has been excellent in the game that has already been executed, the number of parts to be switched is decreased. When the performance, or the degree of achievement, of the user has been poor, the number of parts to be switched is increased. Alternatively, the game may be divided into levels, and the number of parts to be switched may be changed in accordance with the level of the game. For example, the number of parts to be switched is decreased at an introductory level, whereas the number of parts to be switched is increased at an advanced level.
  • In addition, a face image of the user may be acquired by the inner capturing section 24, face recognition may be performed, and the number of parts to be switched may be determined in accordance with the expression obtained from the recognition. For example, a determination may be made on: the case where the face image is smiling; the case where the face image is surprised; the case where the face image is sad; and the case where the face image is almost expressionless. Then, the number of parts to be switched may be determined in accordance with the determination result. The expression of the face may be determined in accordance with: the dimensions of the eyes; the area of the mouth; the shape of the mouth; the positions of the contours of the cheeks relative to reference points, such as the centers of the eyes, the center of the mouth, and the nose; and the like. For example, an expressionless face image of the user may be registered in advance, and the expression of the user's face may be estimated from the difference values between: values obtained when a face image of the user has been newly acquired, such as the dimensions of the eyes, the area of the mouth, the shape of the mouth, the positions of the contours of the cheeks from the reference points, and the like; and values obtained from the face image registered in advance. It should be noted that such a method of estimating the expression of the face is not limited to the above procedure, and various procedures can be used.
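  • The determination of the number of parts to be switched, based on the level of the game, the performance in a game already executed, and a difference-based estimate of the facial expression, might be sketched as follows; the feature names, thresholds, and adjustment amounts are invented for illustration and are not taken from the actual program.

    def estimate_expression(neutral, current):
        """Very rough expression measure: sum of the differences of features
        (eye size, mouth area, mouth shape, ...) from the registered
        expressionless face image."""
        return sum(abs(current[k] - neutral[k]) for k in neutral)

    def parts_to_switch(level, last_score, neutral, current):
        n = 2 if level == "introductory" else 5          # harder level, more parts
        if last_score >= 1000:                           # good earlier performance
            n -= 1
        if estimate_expression(neutral, current) > 0.5:  # strong expression
            n += 1
        return max(1, n)

    neutral = {"eye_size": 1.0, "mouth_area": 1.0, "mouth_shape": 0.0}
    smiling = {"eye_size": 0.8, "mouth_area": 1.6, "mouth_shape": 0.4}
    print(parts_to_switch("advanced", 800, neutral, smiling))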
  • In the present embodiment, in the game apparatus 10, the user executes a game where the user fights with the enemy objects EO, parts of whose faces are switched as shown in FIG. 45, by attacking the enemy objects EO. The manner of making an attack is similar to those of the procedures described in the first through third embodiments. Then, when the user has won battles with the enemy objects EO, the switched parts may return to the original face images.
  • With reference to FIG. 46, a description is given of an example of the operation of the image processing program executed by the information processing section 31 of the game apparatus 10. In this process, the process of step 90 is similar to those of step 30 of FIG. 42 and step 40 of FIG. 44. In the present embodiment, subsequently, the information processing section 31 of the game apparatus 10 switches parts of the faces (step 91). Then, the information processing section 31 executes the game (step 92). Then, the information processing section 31 determines whether or not the game has been successful (step 93). When the game has been successful, the information processing section 31 restores the faces whose parts have been switched in the process of step 91 to their originally captured states (step 94).
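  • Steps 90 through 94 of FIG. 46 can be summarized with the following minimal Python sketch; the names fourth_embodiment_game and play_game, and the representation of faces as dictionaries of part textures, are assumptions made only for illustration.

    import copy, random

    def fourth_embodiment_game(original_faces, parts, play_game):
        """Sketch of steps 90 through 94 of FIG. 46.

        original_faces -- dict of face-part textures as originally captured
        parts          -- part names eligible for switching (step 91)
        play_game      -- callable returning True when the user succeeds (step 93)
        """
        shuffled = copy.deepcopy(original_faces)
        a, b = random.sample(list(shuffled), 2)       # pick two characters
        part = random.choice(parts)                   # pick a part to switch
        shuffled[a][part], shuffled[b][part] = shuffled[b][part], shuffled[a][part]
        if play_game():                               # steps 92 and 93
            return original_faces                     # step 94: restore originals
        return shuffled                               # failure: faces stay switched

    faces = {"EO20": {"nose": "PS1:nose"}, "EO22": {"nose": "PS2:nose"}}
    print(fourth_embodiment_game(faces, ["nose"], play_game=lambda: True))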
  • As described above, based on the image processing program according to the present embodiment, for example, the game is started in the state where a face image of the user has been acquired by the inner capturing section 24, and parts of the faces have been switched between the acquired face image and another face image. Then, when the user has succeeded in the game, for example, when the user has won battles with the enemy objects EO, the face images whose parts are switched are restored to the original face images.
  • Therefore, for example, when the face image, a part of whose face is switched, is a face image of the user themselves, or is a face image of a person intimate with the user, the user is given a high motivation to succeed in the game.
  • In addition, parts of the faces are switched at the start of the game according to the present embodiment, in accordance with the performance in another game, and therefore, it is possible to give the user a handicap or an advantage based on the result of said another game. Further, parts of the faces are switched in accordance with the level of the game, and therefore, it is possible to represent the difficulty level of the game by the degree of the deformation of the faces.
  • In the fourth embodiment, for example, as in the processes of step 30 of FIG. 42 and step 40 of FIG. 44, the cast determination process in step 16 of FIG. 14, and the process of the execution of the game in step 18, the description is given, assuming the case where the game is executed by creating enemy objects EO based on face images already stored in the saved data storage area Do of the game. As the process of step 90 of FIG. 46, however, for example, the processes of steps 100 through 105 of FIG. 15 may be performed. That is, the game according to the present embodiment may be executed in the state where a face image has been acquired in the face image acquisition process, but has yet to be stored in the saved data storage area Do of the game. Then, as in steps 107 through 110 of FIG. 15, when the game has been successful, the acquired face image may be stored in the saved data storage area Do of the game. With such a configuration, as in the first embodiment, the user tackles the game increasingly enthusiastically in order to save the acquired face image.
  • Fifth Embodiment
  • With reference to FIGS. 47 through 51, a description is given below of an image processing apparatus that executes an image processing program according to a fifth embodiment of the present invention. In the above embodiments, before the start of the game for storing a face image in the saved data storage area Do (e.g., the first game), the face image, which is an acquisition target, is acquired by the face image acquisition process that uses a camera image captured by the inner capturing section 24 or the outer capturing section 23. Then, after the face image has been acquired, the game is executed, and when the user has succeeded in the game, permission is given to store the face image in the saved data storage area Do. In the fifth embodiment, during the execution of a game where the user fights with enemy objects EO, a face image, which is an acquisition target, is acquired from a camera image captured by the inner capturing section 24 or the outer capturing section 23. When conditions for succeeding in acquiring the face image are satisfied, permission is given to store the face image acquired during the game in the saved data storage area Do. That is, in the fifth embodiment, a face image is acquired from a camera image captured during the execution of a predetermined game, and when conditions for succeeding in acquiring the face image have been satisfied in the game, permission is given to store the face image in the saved data storage area Do. Accordingly, this game results in a game where permission is given to store the face image in the saved data storage area Do. Similarly to the above embodiments, the game according to the present embodiment is also provided to the user by the information processing section 31 of the game apparatus 10 executing the image processing program expanded in the main memory 32. In the following descriptions, as an example of the acquisition of a face image during the game, a face image is acquired during the execution of the second game described above (step 18 of FIG. 14).
  • FIG. 47 is an example of a screen displayed on the upper LCD 22 of the game apparatus 10 according to the present embodiment. The procedure of the creation of this screen is similar to that of the first embodiment. That is, the case is assumed where, for example, the user holds the lower housing 11 with both hands as shown in FIG. 4, such that the lower housing 11 and the upper housing 21 of the game apparatus 10 are in the open state. At this time, the user can view the display screen of the upper LCD 22. Further, in this state, the outer capturing section 23 can, for example, capture space ahead in the line of sight of the user. During the execution of the game according to the present embodiment, the game apparatus 10 displays in the background of the screen a camera image CI captured by the outer capturing section 23. More specifically, the information processing section 31 texture-maps, on a frame-by-frame basis, an image captured by the outer capturing section 23 onto the background portion of the screen of the game. That is, on the upper LCD 22, a real-time real world image (moving image) captured by the real camera built into the game apparatus 10 is displayed in the background portion. When the user has changed the orientation of the game apparatus 10 while holding it in their hand, an image acquired through the outer capturing section 23 in the direction of the line of sight of the outer capturing section 23 after the change is displayed in the background of the game. That is, in the background of the screen shown in FIG. 47, an image acquired from the direction in which the user has directed the outer capturing section 23 of the game apparatus 10 is embedded. It should be noted that in the example of the screen shown in FIG. 47, a person facing the outer capturing section 23 in a full-face manner is included as a subject of the camera image CI displayed as the background of the screen.
  • In addition, on the background of the screen, for example, enemy objects EO and an aiming cursor AL are displayed, the enemy objects EO and the aiming cursor AL created in accordance with the procedure described in the above embodiments. Display is performed such that face images selected in the cast determination process and the like are texture-mapped on the facial surface portions of the enemy objects EO. Then, when the user of the game apparatus 10 has pressed the operation button 14B (A button) corresponding to a trigger button in the state where the aiming cursor AL is pointed at the enemy objects EO, the user can fire a bullet at the enemy objects EO.
  • Also during the game, the game apparatus 10 sequentially performs a predetermined face recognition process on the camera image CI captured by the real camera (e.g., the outer capturing section 23), and determines the presence or absence of a person's face in the camera image CI. Then, when the game apparatus 10 has determined in the face recognition process that a person's face is present in the camera image CI, and conditions for the appearance of an acquisition target object AO have been satisfied, an acquisition target object AO appears from the portion recognized as a face in the camera image CI.
  • As shown in FIG. 48, the acquisition target object AO is displayed by texture-mapping a face image extracted from the camera image CI onto a predetermined portion of predetermined polygons (e.g., the facial surface portion of a three-dimensional model representing a human head shape). As an example, the acquisition target object AO is displayed by attaching, as a texture, an image of the portion recognized as a face in the camera image CI to the surface of a three-dimensional model of a head shape formed by combining a plurality of polygons. It should be noted that in the game where the acquisition target object AO appears, the acquisition target object AO is not limited to one obtained by texture-mapping an image of a recognized face onto a three-dimensional model. For example, the acquisition target object AO may be displayed as a plate physical body, to the main surface of which the image of the portion recognized as a face that has been clipped from the camera image CI is attached, or may be displayed as an image simply held in a two-dimensional pixel array.
  • For example, similarly to the enemy objects EO, the acquisition target object AO is placed in the virtual space described above, and an image of the virtual space (virtual world image), in which the acquisition target object AO and/or the enemy objects EO are viewed from the virtual camera, is combined with a real world image obtained from the camera image CI, whereby display is performed on the upper LCD 22 as if the acquisition target object AO and/or the enemy objects EO are placed in real space. In accordance with an attack operation using the game apparatus 10 (e.g., pressing the button 14B (A button)), a bullet object BO is fired in the direction of the aiming cursor AL, and the acquisition target object AO also serves as a target of attack for the user. Then, when the user has won a battle with the acquisition target object AO, the user can store in the saved data storage area Do the face image attached to the acquisition target object AO.
  • It should be noted that not only winning a battle with the acquisition target object AO, but also completing the game where the user attacks the enemy objects EO, that is, completing the game that has already been executed when the face of the face image attached to the acquisition target object AO has been recognized, may be added to conditions for storing the face image of the acquisition target object AO in the saved data storage area Do. For example, possible conditions for completing the game where the user attacks the enemy objects EO may be that a predetermined number or more of enemy objects EO are defeated. In this case, the specification of the game is that, during the execution of the game where the user attacks the enemy objects EO, when the acquisition target object AO has appeared in the middle of the game and has been defeated, the face image of the acquisition target object AO can be additionally acquired.
  • It should be noted that a face image used for the acquisition target object AO may be a face image obtained from a face recognized in the camera image CI (a still image), or may be a face image obtained from a face recognized by repeatedly performing face recognition on the repeatedly captured camera image CI (a moving image). For example, in the second case, when the expression and the like of the person's face repeatedly captured in the camera image CI has changed, the changes are reflected on a texture of the acquisition target object AO. That is, it is possible to reflect in real time the expression of the person captured by the real camera of the game apparatus 10, on the expression of the face image attached to the acquisition target object AO.
  • In addition, the acquisition target object AO that appears from the portion recognized as a face in the camera image CI may be placed so as to always overlap the recognized portion when displayed in combination with the camera image CI. For example, changes in the direction and the position of the game apparatus 10 (i.e., the direction and the position of the outer capturing section 23) in real space also change the imaging range captured by the game apparatus 10, and therefore also change the camera image CI displayed on the upper LCD 22. In this case, the game apparatus 10 changes the position and the direction of the virtual camera in the virtual space in accordance with the motion of the game apparatus 10 in real space. With this, the acquisition target object AO displayed as if placed in real space is displayed as if placed at the same position in real space even when the direction and the position of the game apparatus 10 have changed in real space. Further, on the upper LCD 22, a real-time real world image captured by the real camera built into the game apparatus 10 is displayed, and therefore, a subject may move in real space. In this case, the game apparatus 10 sequentially performs a face recognition process on the repeatedly captured camera image CI, and thereby sequentially places the acquisition target object AO in the virtual space such that the acquisition target object AO is displayed so as to overlap the position of the recognized face when combined with the camera image CI. Thus, even when a change in the imaging direction or the imaging position of the game apparatus 10, or a change in the position of the captured person, has changed the position and the size in the camera image CI of the face image from which the acquisition target object AO has appeared, these processes make it possible to draw the acquisition target object AO so as to overlap the face image.
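  • As a sketch only, the repositioning described above can be thought of as converting the face rectangle reported by the face recognition process into a placement on the boundary surface onto which the camera image is texture-mapped, so that the projected acquisition target object AO keeps covering the recognized face; the coordinate conventions and the function name place_over_face below are assumptions made for illustration.

    def place_over_face(face_rect, image_size, boundary_width, boundary_height):
        """Convert a recognized face rectangle (in camera-image pixels) into a
        position and scale on the boundary surface so that the projected
        acquisition target object AO covers the recognized face.

        face_rect  -- (x, y, w, h) of the recognized face in the camera image
        image_size -- (width, height) of the camera image
        """
        x, y, w, h = face_rect
        img_w, img_h = image_size
        cx = (x + w / 2) / img_w              # normalized center of the face
        cy = (y + h / 2) / img_h
        # Map to boundary-surface coordinates (origin at the surface center).
        u = (cx - 0.5) * boundary_width
        v = (0.5 - cy) * boundary_height
        scale = (w / img_w) * boundary_width  # object width matching the face
        return u, v, scale

    print(place_over_face((120, 60, 80, 80), (320, 240), 16.0, 12.0))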
  • It should be noted that the acquisition target object AO displayed on the upper LCD 22 may be displayed by, for example, enlarging, reducing, or deforming the face image actually captured and displayed in the camera image CI, or may be displayed by changing the display direction of the model to which the face image is attached. Such image processing differentiates the actually captured face image from the acquisition target object AO, and therefore enables the user of the game apparatus 10 to easily determine that the acquisition target object AO has appeared from the camera image CI.
  • Next, with reference to FIGS. 49 through 51, a description is given of specific processing operations performed by executing the image processing program according to the fifth embodiment. It should be noted that FIG. 49 is a subroutine flow chart showing an example of a detailed operation of a during-game face image acquisition process performed by executing the image processing program. FIG. 50 is a subroutine flow chart showing an example of a detailed operation of a yet-to-appear process performed in step 202 of FIG. 49. FIG. 51 is a subroutine flow chart showing an example of a detailed operation of an already-appeared process performed in step 208 of FIG. 49.
  • It should be noted that programs for performing these processes are included in a memory built into the game apparatus 10 (e.g., the data storage internal memory 35), or included in the external memory 45 or the data storage external memory 46. The programs are loaded into the main memory 32 when the game apparatus 10 is turned on, either from the built-in memory, from the external memory 45 through the external memory I/F 33, or from the data storage external memory 46 through the data storage external memory I/F 34, and are executed by the CPU 311.
  • The processing operations performed by executing the image processing program according to the fifth embodiment are performed as follows. In addition to the processing operations performed by executing the image processing program according to the first embodiment, a during-game face image acquisition process described later is performed in each cycle of the game processing described with reference to FIG. 29 (e.g., performed once during steps 52 through 59). Thus, in the following descriptions, only the processing operations added to the first embodiment are described, and other processing operations are not described in detail.
  • In addition, various data stored in the main memory 32 in accordance with the execution of the image processing program according to the fifth embodiment is similar to the various data stored in accordance with the execution of the image processing program according to the first embodiment, except that appearance flag data, face recognition data, and acquisition target object data are further stored. It should be noted that the appearance flag data indicates an appearance flag indicating whether the current state of the appearance of the acquisition target object AO is “yet to appear”, “during appearance”, or “already appeared”, and the appearance flag is set to “yet to appear” in the initialization in step 51 described above (FIG. 29). Further, the face recognition data indicates the most recent face image obtained from faces sequentially recognized in the repeatedly captured camera image CI, and the position of the face image in the camera image CI. Furthermore, the acquisition target object data includes: data of a three-dimensional model corresponding to the acquisition target object AO; texture data for performing mapping on the three-dimensional model; data indicating the placement direction and the placement position of the three-dimensional model; and the like.
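  • The additional data described above might be modeled as follows; the field names and types are illustrative assumptions, and only the three states of the appearance flag and the general contents of the face recognition data and the acquisition target object data are taken from the description above.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class FaceRecognitionData:
        face_image: Optional[bytes] = None                       # most recent recognized face
        face_rect: Optional[Tuple[int, int, int, int]] = None    # its position in the camera image CI

    @dataclass
    class AcquisitionTargetObjectData:
        model_vertices: List[Tuple[float, float, float]] = field(default_factory=list)
        texture: Optional[bytes] = None                          # face image mapped onto the model
        placement_pos: Tuple[float, float, float] = (0.0, 0.0, 0.0)
        placement_dir: Tuple[float, float, float] = (0.0, 0.0, 1.0)

    # The appearance flag takes one of three states, initialized to "yet to appear".
    appearance_flag = "yet to appear"   # or "during appearance" / "already appeared"
    face_data = FaceRecognitionData()
    ao_data = AcquisitionTargetObjectData()
    print(appearance_flag, face_data, ao_data.placement_pos)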
  • Referring to FIG. 49, the information processing section 31 determines whether or not the acquisition target object AO has yet to appear (step 201). For example, with reference to the appearance flag data, the information processing section 31 makes a determination in step 201 described above, based on whether or not the appearance flag is set to “yet to appear”. When the acquisition target object AO has yet to appear, the information processing section 31 proceeds to the subsequent step 202. On the other hand, when the acquisition target object AO is not in the state of having yet to appear, the information processing section 31 proceeds to the subsequent step 203.
  • In step 202, the information processing section 31 performs a yet-to-appear process, and proceeds to the subsequent step 203. With reference to FIG. 50, a description is given below of the yet-to-appear process performed by the information processing section 31 in step 202.
  • Referring to FIG. 50, the information processing section 31 performs a predetermined face recognition process on the camera image indicated by the real camera image data Db, stores the face recognition result in the main memory 32 (step 211), and proceeds to the subsequent step. Here, the face recognition process may be performed sequentially by the information processing section 31, using the camera image, independently of the processing of the flow chart shown in FIG. 50. In this case, when a person's face has been recognized in the camera image, the information processing section 31 acquires the face recognition result in step 211 described above, and stores the face recognition result in the main memory 32.
  • Next, the information processing section 31 determines whether or not conditions for the appearance of the acquisition target object AO in the virtual space have been satisfied (step 212). For example, the conditions for the appearance of the acquisition target object AO, on an essential condition that a person's face has been recognized in the camera image in step 211 described above, may be: that the acquisition target object AO appears only once from the start to the end of the game; that the acquisition target object AO appears at predetermined time intervals; that in accordance with the disappearance of the acquisition target object AO from the virtual world, a new acquisition target object AO appears; or that the acquisition target object AO appears at a random time. When the conditions for the appearance of the acquisition target object AO have been satisfied, the information processing section 31 proceeds to the subsequent step 213. On the other hand, when the conditions for the appearance of the acquisition target object AO have not been satisfied, the information processing section 31 ends the process of this subroutine.
  • In step 213, the information processing section 31 sets an image of the face recognized in the face recognition process in step 211 described above, as a texture of the acquisition target object AO, and proceeds to the subsequent step. For example, in the camera image indicated by the camera image data Db, the information processing section 31 sets an image included in the region of the face indicated by the face recognition result of the face recognition process in step 211 described above, as a texture of the acquisition target object AO, to thereby update the acquisition target object data using the set texture.
  • Next, the information processing section 31 sets the acquisition target object AO, using the face image obtained from the face recognized in the face recognition process in step 211 (step 214), and proceeds to the subsequent step. As an example, in accordance with the region of the image of the face recognized in the face recognition process in step 211, the information processing section 31 sets the size and the shape of a polygon (e.g., a planar polygon) corresponding to the state of the start of the appearance of the acquisition target object AO, and sets the acquisition target object AO corresponding to the state of the start of the appearance by attaching the texture of the face image set in step 213 to the main surface of the polygon, to thereby update the acquisition target object data.
  • Next, the information processing section 31 newly places the acquisition target object AO in the virtual space (step 215), and proceeds to the subsequent step. For example, when the camera image is displayed on the upper LCD 22, the information processing section 31 places the acquisition target object AO at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 211, to thereby update the acquisition target object data.
  • In the present embodiment, an image is generated by rendering with a perspective projection from the virtual camera the virtual space where the acquisition target object AO is newly placed in addition to the enemy objects EO, and a display image including at least the generated image is displayed. Here, to make representation such that the acquisition target object AO appears from the face image in the camera image displayed on the upper LCD 22, the information processing section 31 places the acquisition target object AO in the virtual space such that the acquisition target object AO overlaps the region corresponding to the face image in the boundary surface 3 on which the texture of the camera image is mapped, and performs a perspective projection on the placed acquisition target object AO from the virtual camera. It should be noted that the method of placing the acquisition target object AO in the virtual space is similar to the example of the placement of the enemy object EO described with reference to FIGS. 33 through 39, and therefore is not described in detail.
  • Next, the information processing section 31 sets the appearance flag to “during appearance” to thereby update the appearance flag data (step 216), and ends the process of this subroutine.
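  • The yet-to-appear process of steps 211 through 216 can be summarized with the following Python sketch, which adopts, as an assumption, the first of the appearance conditions listed above (the acquisition target object AO appears only once from the start to the end of the game); the function and dictionary key names are hypothetical.

    def yet_to_appear_process(camera_image, recognize_face, state):
        """Sketch of steps 211 through 216 of FIG. 50.

        recognize_face -- callable returning (face_image, face_rect) or None
                          when no face is found in the camera image
        state          -- dict holding the appearance flag and AO data
        """
        result = recognize_face(camera_image)            # step 211
        if result is None:
            return state
        face_image, face_rect = result
        # Step 212: appearance condition; here, the AO appears only once from
        # the start to the end of the game (one of the options listed above).
        if state.get("ao_has_appeared"):
            return state
        state["ao_texture"] = face_image                 # step 213
        state["ao_shape"] = ("planar", face_rect)        # step 214
        state["ao_position"] = face_rect                 # step 215: overlap the face
        state["appearance_flag"] = "during appearance"   # step 216
        state["ao_has_appeared"] = True
        return state

    state = {"appearance_flag": "yet to appear"}
    state = yet_to_appear_process("CI", lambda ci: ("face-img", (120, 60, 80, 80)), state)
    print(state["appearance_flag"])   # -> during appearance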
  • Referring back to FIG. 49, in step 203, the information processing section 31 determines whether or not the acquisition target object AO is appearing. For example, with reference to the appearance flag data, the information processing section 31 makes a determination in step 203 described above, based on whether or not the appearance flag is set to “during appearance”. When the acquisition target object AO is appearing, the information processing section 31 proceeds to the subsequent step 204. On the other hand, when the acquisition target object AO is not appearing, the information processing section 31 proceeds to the subsequent step 207.
  • In step 204, the information processing section 31 performs a during-appearance process, and proceeds to the subsequent step. For example, in step 204, the information processing section 31 represents the state of the acquisition target object AO appearing, by gradually changing the face image included in the camera image to a three-dimensional object. Specifically, as in step 211 described above, the information processing section 31 sets the face image as a texture of the acquisition target object AO, based on the result of a face recognition performed on the camera image. Then, as an example, the information processing section 31 sets the acquisition target object AO by performing a morphing process for changing a planar polygon to predetermined three-dimensional polygons (e.g., a three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape). Then, as in step 215, the information processing section 31 places the acquisition target object AO subjected to the morphing process at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 204, to thereby update the acquisition target object data. When the acquisition target object AO appears from the image of the face recognized in the real world image, the acquisition target object AO is represented so as to gradually change from planar to three-dimensional in the face image, by performing such a morphing process.
  • It should be noted that the three-dimensional polygons, to which the planar polygon is changed by the morphing process, include polygons of various possible shapes. As a first example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to three-dimensional polygons having the shape of the head of a predetermined character. In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto the facial surface of the head-shaped polygons. As a second example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to plate polygons having a predetermined thickness. In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto the main surface of plate polygons. As a third example, the acquisition target object AO is generated by performing the morphing process to change the planar polygon to three-dimensional polygons having the shape of a predetermined weapon (e.g., missile-shaped polygons). In this case, the image of the face recognized in the camera image in the face recognition process is mapped as a texture onto a part of the weapon-shaped polygons (e.g., mapped onto the missile-shaped polygons at the head of the missile).
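  • The morphing process can be pictured as interpolating each vertex of the planar polygon toward a corresponding vertex of the target three-dimensional model over several frames. The following Python sketch assumes, for illustration only, that a one-to-one vertex correspondence has already been established; a real head-shaped model would of course have far more vertices.

    def morph_step(planar_vertices, head_vertices, t):
        """Linearly interpolate each vertex from the planar polygon toward the
        corresponding vertex of the target model; t runs from 0.0 to 1.0 over
        several frames, and t == 1.0 corresponds to the final stage of the
        morphing (the condition checked in step 205)."""
        return [
            tuple(p + (h - p) * t for p, h in zip(pv, hv))
            for pv, hv in zip(planar_vertices, head_vertices)
        ]

    planar = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
    head   = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5), (1.0, 1.0, 0.5), (0.0, 1.0, 0.5)]
    for t in (0.0, 0.5, 1.0):
        print(t, morph_step(planar, head, t))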
  • Next, the information processing section 31 determines whether or not the during-appearance process on the acquisition target object AO has ended (step 205). For example, when the morphing process on the acquisition target object AO has reached its final stage, the information processing section 31 determines that the during-appearance process has ended. Then, when the during-appearance process on the acquisition target object AO has ended, the information processing section 31 proceeds to the subsequent step 206. On the other hand, when the during-appearance process on the acquisition target object AO has not ended, the information processing section 31 proceeds to the subsequent step 207. For example, when the polygon corresponding to the acquisition target object AO has changed to a three-dimensional model by repeating the morphing process in step 204, the three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape, the information processing section 31 determines that the morphing process on the acquisition target object AO is at the final stage.
  • In step 206, the information processing section 31 sets the appearance flag to “already appeared” to thereby update the appearance flag data, and proceeds to the subsequent step 207.
  • In step 207, the information processing section 31 determines whether or not the acquisition target object AO has already appeared. For example, with reference to the appearance flag data, the information processing section 31 makes a determination in step 207 described above, based on whether or not the appearance flag is set to “already appeared”. When the acquisition target object AO has already appeared, the information processing section 31 proceeds to the subsequent step 208. On the other hand, when the acquisition target object AO has not already appeared, the information processing section 31 ends the process of this subroutine.
  • In step 208, the information processing section 31 performs an already-appeared process, and ends the process of the subroutine. With reference to FIG. 51, a description is given below of the already-appeared process performed by the information processing section 31 in step 208 described above.
  • Referring to FIG. 51, the information processing section 31 performs a predetermined face recognition process on the camera image indicated by the real camera image data Db, stores the face recognition result as face recognition data in the main memory 32 (step 221), and proceeds to the subsequent step. Here, the face recognition process may also be performed sequentially by the information processing section 31, using the camera image, independently of the processing of the flow chart shown in FIG. 51. In this case, when a person's face has been recognized in the camera image, the information processing section 31 acquires the face recognition result in step 221 described above, and stores the face recognition result as face recognition data in the main memory 32.
  • Next, the information processing section 31 sets an image of the face recognized in the face recognition process in step 221 described above (an image included in the face area in the camera image), as a texture of the acquisition target object AO (step 222), and proceeds to the subsequent step. For example, in the camera image indicated by the real camera image data Db, the information processing section 31 sets an image included in the region of the face indicated by the face recognition result of the face recognition process in step 221 described above, as a texture of the acquisition target object AO, to thereby update the acquisition target object data using the set texture.
  • Next, the information processing section 31 sets the acquisition target object AO corresponding to the region of the image of the face recognized in the face recognition process in step 221 described above (step 223), and proceeds to the subsequent step. For example, the information processing section 31 sets the acquisition target object AO by attaching the texture of the face image set in step 222 to the facial surface portion of a three-dimensional model formed by combining a plurality of polygons so as to represent a human head shape, to thereby update the acquisition target object data. It should be noted that in step 223, the polygons to which the face image obtained from the face recognized in the face recognition process in step 221 is attached as a texture may be, for example, enlarged, reduced, or deformed, or the texture of the face image may be deformed.
  • Next, the information processing section 31 places the acquisition target object AO set in step 223 described above in the virtual space (step 224), and proceeds to the subsequent step. For example, as in step 215, when the camera image is displayed on the upper LCD 22, the information processing section 31 places the acquisition target object AO at the position in the virtual space, at which a perspective projection is performed such that the acquisition target object AO overlaps the position of the face image obtained from the face recognized in step 221, to thereby update the acquisition target object data. It should be noted that in step 223, the acquisition target object AO may be placed such that the facial surface portion to which the texture of the face image is attached opposes the virtual camera, or the orientation of the acquisition target object AO may be changed to a given direction in accordance with the progression of the game.
  • Next, the information processing section 31 determines whether or not the acquisition target object AO and the bullet object BO have made contact with each other in the virtual space (step 225). For example, using the position of the acquisition target object AO indicated by the acquisition target object data and the position of the bullet object BO indicated by the bullet object data Dg, the information processing section 31 determines whether or not the acquisition target object AO and the bullet object BO have made contact with each other in the virtual space. When the acquisition target object AO and the bullet object BO have made contact with each other, the information processing section 31 proceeds to the subsequent step 226. On the other hand, when the acquisition target object AO and the bullet object BO have not made contact with each other, the information processing section 31 proceeds to the subsequent step 229.
  • In step 226, the information processing section 31 performs a point addition process, and proceeds to the subsequent step. For example, in the point addition process, the information processing section 31 adds predetermined points to the score of the game indicated by the score data Dh, to thereby update the score data Dh using the score after the addition. Further, in the point addition process, the information processing section 31 performs a process of causing the bullet object BO having made contact based on the determination in step 225 described above, to disappear from the virtual space (e.g., initializing the bullet object data Dg concerning the bullet object BO having made contact with the acquisition target object AO, such that the bullet object BO is not present in the virtual space).
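Steps 225 and 226 reduce to a contact test between the acquisition target object AO and a bullet object BO, followed by a score update and removal of the bullet. A minimal sketch follows; the sphere-vs-sphere test, the point value, and the data layout are assumptions for illustration, since the embodiment does not specify the collision shape.

```python
import math

def spheres_touch(pos_a, radius_a, pos_b, radius_b):
    """Step 225 (assumed sphere-vs-sphere test): contact occurs when the centers are
    closer than the sum of the radii."""
    return math.dist(pos_a, pos_b) <= radius_a + radius_b

def on_contact(score, bullets, bullet_index, points=100):
    """Step 226: add predetermined points to the score and make the contacting bullet
    disappear from the virtual space."""
    score += points
    bullets.pop(bullet_index)       # "initialize" the bullet data so BO no longer exists
    return score

# usage sketch
score = 0
bullets = [{"pos": (1.0, 0.0, 5.0), "radius": 0.2}]
ao = {"pos": (1.1, 0.0, 5.0), "radius": 0.5}
if spheres_touch(ao["pos"], ao["radius"], bullets[0]["pos"], bullets[0]["radius"]):
    score = on_contact(score, bullets, 0)
```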
  • Next, the information processing section 31 determines whether or not the acquisition of the face image attached to the acquisition target object AO having made contact with the bullet object BO has been successful (step 227). As an example of means for determining whether or not the acquisition of the face image has been successful, the information processing section 31 performs the process of step 227. Then, when the acquisition of the face image has been successful, the information processing section 31 proceeds to the subsequent step 228. On the other hand, when the acquisition of the face image has not been successful, the information processing section 31 proceeds to the subsequent step 229.
  • Here, a success in the acquisition of the face image is, for example, the case where the user has won a battle with the acquisition target object AO. As an example, a predetermined life value for existing in the virtual space is set for the acquisition target object AO, and when the acquisition target object AO has made contact with the bullet object BO, a predetermined number is subtracted from the life value. Then, when the life value of the acquisition target object AO has become 0 or below, the acquisition target object AO is caused to disappear from the virtual space, and it is determined that the acquisition of the face image attached to the acquisition target object AO has been successful.
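A minimal sketch of this life-value rule follows; the initial life value, the damage per hit, and the data layout are assumptions for illustration.

```python
def apply_hit(ao, damage=1):
    """On contact with a bullet object BO, subtract from AO's life value (assumed rule).
    Returns True when the life value reaches 0 or below, i.e. the acquisition succeeds."""
    ao["life"] -= damage
    return ao["life"] <= 0

# usage sketch: success is determined when the life value becomes 0 or below
ao = {"life": 3}
acquired = False
for _ in range(3):                  # three hits by bullet objects
    acquired = apply_hit(ao)
print(acquired)                     # True: AO disappears and the face image is acquired
```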
  • In step 228, when the acquisition of the face image has been successful, the information processing section 31 saves the data that indicates the face image obtained from the face recognized in step 221 and is stored in the main memory 32, in addition to the data of the face images that have been saved in the saved data storage area Do up to the current time, and proceeds to the subsequent step 229. As an example of the means for saving, the CPU 311 of the information processing section 31 performs the process of step 228. As described above, the saved data storage area Do is a storage area to and from which the information processing section 31 can write and read, and which is constructed in, for example, the data storage internal memory 35 or the data storage external memory 46. When data of a new face image is stored in the saved data storage area Do, the information processing section 31 can display the new face image on the screen of the upper LCD 22, for example, in addition to the list of face images described with reference to FIGS. 7 and 8.
  • At this time, to manage the face image newly saved in the saved data storage area Do of the game, the information processing section 31 generates and saves the face image management information Dn1 described with reference to FIG. 12. That is, the information processing section 31 newly generates face image identification information, and sets the face image identification information as a record of the face image management information Dn1. Further, the information processing section 31 sets the address and the like of the face image newly saved in the saved data storage area Do, as the address of face image data. Furthermore, the information processing section 31 sets the source of acquiring the face image, the estimation of gender, the estimation of age, pieces of related face image identification information 1 through N, and the like.
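The bookkeeping described here amounts to appending the face image to the saved data storage area Do and recording a management entry corresponding to the face image management information Dn1. The sketch below is a hypothetical illustration; the field names, the use of a list index as the "address", and the identifier counter are assumptions, not the actual format of Dn1.

```python
import itertools

_next_id = itertools.count(1)       # hypothetical source of face image identification info

def save_face_image(saved_area, management_table, face_image, source,
                    gender=None, age=None, related_ids=()):
    """Step 228 sketch: append the face image to the saved data storage area Do and
    record a management entry corresponding to Dn1 (all field names are assumptions)."""
    address = len(saved_area)               # 'address' of the newly saved face image data
    saved_area.append(face_image)
    record = {
        "face_image_id": next(_next_id),
        "address": address,
        "source": source,                   # how the face image was acquired
        "estimated_gender": gender,
        "estimated_age": age,
        "related_face_image_ids": list(related_ids),
    }
    management_table.append(record)
    return record

# usage sketch
saved_area_do, table_dn1 = [], []
save_face_image(saved_area_do, table_dn1, {"rect": (120, 80, 64, 64)},
                source="in-game camera", gender="unknown", age=None)
```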
  • In addition, the information processing section 31 may estimate the attributes of the face image added to the saved data storage area Do, to thereby update the aggregate result of the face image attribute aggregate table Dn2 described with reference to FIG. 13. That is, the information processing section 31 may newly estimate the gender, the age, and the like of the face image added to the saved data storage area Do, and may reflect the estimations on the aggregate result of the face image attribute aggregate table Dn2.
  • In addition, the information processing section 31 may permit the user to, for example, copy or modify the data stored in the saved data storage area Do, or transfer the data through the wireless communication module 36. Then, the information processing section 31 may, for example, save, copy, modify, or transfer the face image stored in the saved data storage area Do in accordance with an operation of the user through the GUI, or with an operation of the user through the operation buttons 14.
  • In addition, in the process of step 228 described above, the information processing section 31 may cause the acquisition target object AO that is the target used to succeed in the acquisition of the face image, to disappear from the virtual space. In this case, the information processing section 31 initializes the acquisition target object data concerning the acquisition target object AO that is the target used to succeed in the acquisition of the face image, such that the acquisition target object AO is not present in the virtual space.
  • In step 229, the information processing section 31 determines whether or not the acquisition of the face image attached to the acquisition target object AO present in the virtual space has failed. Then, when the acquisition of the face image attached to the acquisition target object AO has failed, the information processing section 31 proceeds to the subsequent step 230. It should be noted that in the case where a plurality of acquisition target objects AO are present, when the acquisition of any one of the face images attached to the acquisition target objects AO has failed, the information processing section 31 proceeds to the subsequent step 230. On the other hand, when the acquisition of none of the face images attached to the acquisition target objects AO has failed, the information processing section 31 ends the process of this subroutine.
  • Here, a failure in the acquisition of the face image is, for example, the case where the user has lost a battle with the acquisition target object AO. As an example, when the acquisition target object AO has continued to be present in the virtual space for a predetermined time or longer, it is determined that the acquisition of the face image attached to the acquisition target object AO has failed.
  • In step 230, when the acquisition of the face image has failed, the information processing section 31 discards the data that indicates the face image obtained from the face recognized in step 221 described above and is stored in the main memory 32, and ends the process of the subroutine. It should be noted that in the process of step 230 described above, the information processing section 31 may cause the acquisition target object AO that is the target used to fail in the acquisition of the face image, to disappear from the virtual space. In this case, the information processing section 31 initializes the acquisition target object data concerning the acquisition target object AO that is the target used to fail in the acquisition of the face image, such that the acquisition target object AO is not present in the virtual space.
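Steps 229 and 230 can be sketched as a timeout check followed by discarding the temporarily stored face image and removing the object. The time limit, the data layout, and the function names below are assumptions for illustration.

```python
def acquisition_failed(ao, now, time_limit=30.0):
    """Failure check (assumed rule): AO has stayed in the virtual space longer than the
    permitted time, so the acquisition of its face image fails."""
    return (now - ao["spawn_time"]) >= time_limit

def discard_on_failure(ao, temporary_face_images):
    """Step 230 sketch: discard the temporarily stored face image and remove AO."""
    temporary_face_images.pop(ao["face_image_key"], None)
    ao["present"] = False            # AO no longer exists in the virtual space

# usage sketch
faces = {"candidate": {"rect": (120, 80, 64, 64)}}
ao = {"spawn_time": 0.0, "face_image_key": "candidate", "present": True}
if acquisition_failed(ao, now=31.0):
    discard_on_failure(ao, faces)
```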
  • As described above, based on the processes of FIGS. 49 through 51 according to the fifth embodiment, a face image obtained from a face recognized in a camera image captured during the game where the user attacks the enemy objects EO serves as a target to be newly saved in the saved data storage area Do. Then, to save the face image acquired during the game in the saved data storage area Do, the user needs to achieve a game result sufficient to succeed in the acquisition of the face image in the game being executed. In this game, display is performed such that a virtual world image, showing an acquisition target object represented as if a face image in the real world image slides out, is superimposed on a real world image obtained from a real camera. This makes it possible to display a new image as if the acquisition target object were present in real space. Further, in the game, for example, in addition to character objects using face images that have been saved in the saved data storage area Do up to the current time, the acquisition target object AO to which the face image acquired during the game is attached is caused to appear, whereby the user who executes the game with the game apparatus 10 can collect a new face image and add it to the saved data storage area Do, while reflecting the real world during the execution of the game and human relationships in the real world.
  • It should be noted that in the fifth embodiment described above, as an example, when the user has attacked and defeated the acquisition target object AO that appears during the game, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do. Alternatively, permission may be given to store the face image in the saved data storage area Do, by executing another game where the user fights with the acquisition target object AO. As an example, in a game where the user competes with enemy objects in score, the acquisition target object AO appears, to which a face image included in a camera image captured during the game is attached. Then, when the user has scored more points than the acquisition target object AO that has appeared during the game, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do. As another example, in a game where the user overcomes obstacles set by enemy objects, the acquisition target object AO appears, to which a face image included in a camera image captured during the game is attached. Then, when the user has overcome the obstacles set by the acquisition target object AO that has appeared during the game, and the user has reached a goal, permission is given to store the face image attached to the acquisition target object AO in the saved data storage area Do.
  • In addition, in the first through fifth embodiments described above, as an example, a face image acquired in the image processing based on the flow chart shown in FIG. 14 (a face image acquired in the face image acquisition process before the execution of the game for storing a face image in the saved data storage area Do, in the first through fourth embodiments; and a face image acquired during the execution of the game for storing a face image in the saved data storage area Do, in the fifth embodiment) serves as a target to be stored in the saved data storage area Do. Alternatively, a face image already acquired in an application different from the application of the image processing may serve as a target to be stored in the saved data storage area Do. For example, the game apparatus 10 includes a capturing section (camera), and therefore, based on a camera capturing application different from the application of the image processing based on the flow chart shown in FIG. 14, can capture an image with the capturing section, display the captured image on a screen, and save data of the captured image in a storage medium, such as the data storage internal memory 35 and the data storage external memory 46. Further, the game apparatus 10 can receive data including a captured image from another device, and can also save the received data in a storage medium, such as the data storage internal memory 35 and the data storage external memory 46, by executing a communication application. As described above, a face image obtained from a face recognized in an image obtained in advance by executing the camera capturing application or the communication application may serve as a target to be stored in the saved data storage area Do.
  • When a face image already acquired by an application different from the application of the image processing serves as a target to be stored in the saved data storage area Do, at least one face image is extracted by performing a face recognition process on photographed images saved during the execution of the different application, and the extracted face image serves as a target to be stored. Specifically, to prompt selection of a character (face image) to appear in the first game or the second game, when the character is displayed on the upper LCD 22 and/or the lower LCD 12 (e.g., steps 30, 40, 90, 103, 126, 140, 160, and 162), at least one character including a face image acquired in advance by executing the different application is also displayed as a selection target. In this case, before the character is displayed, at least one face image is extracted by performing a face recognition process on the photographed images saved in advance, and a given face image among the extracted face images is displayed as a selection target in addition to the character. Then, when the user has selected the character including the face image obtained by the extraction to appear in the game, and the user has won a battle with the character, the face image is stored in the saved data storage area Do.
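A minimal sketch of offering such pre-acquired face images as selection targets follows; the recognize_faces placeholder and the list-based data are assumptions, not the actual face recognition routine or storage format.

```python
def build_selection_targets(saved_photos, recognize_faces, saved_face_images):
    """Sketch: extract face images from photographs saved by a different application
    and offer one of them as a selection target alongside the face images already
    saved in the saved data storage area Do (recognize_faces is a placeholder)."""
    extracted = []
    for photo in saved_photos:
        extracted.extend(recognize_faces(photo))     # zero or more face images per photo
    targets = list(saved_face_images)                # characters already saved in Do
    if extracted:
        targets.append(extracted[0])                 # add a given extracted face image
    return targets

# usage sketch with a stub recognizer
photos = [{"faces": ["face_a"]}, {"faces": []}]
targets = build_selection_targets(photos, lambda p: p["faces"],
                                  saved_face_images=["face_do_1"])
```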
  • As described above, a face image already acquired by an application different from the application of the image processing also serves as a target to be stored in the saved data storage area Do. This increases the variety of face images that can be acquired by the user, and therefore makes it easier to collect face images. Additionally, a face image unexpected by the user may suddenly be added as a target to participate in the game, which also helps prevent the user from growing weary of collecting face images.
  • In the above descriptions, as an example, the angular velocities generated in the game apparatus 10 are detected, and the motion of the game apparatus 10 in real space is calculated using the angular velocities. Alternatively, the motion of the game apparatus 10 may be calculated using another method. As a first example, the motion of the game apparatus 10 may be calculated using the accelerations detected by the acceleration sensor 39 built into the game apparatus 10. As an example, when the computer performs processing on the assumption that the game apparatus 10 having the acceleration sensor 39 is in a static state (i.e., performs processing on the assumption that the acceleration detected by the acceleration sensor 39 is the gravitational acceleration only), if the game apparatus 10 is actually in a static state, it is possible to determine, based on the detected acceleration, whether or not the game apparatus 10 is tilted relative to the direction of gravity, and also possible to determine to what degree the game apparatus 10 is tilted. As another example, when it is assumed that the game apparatus 10 having the acceleration sensor 39 is in a dynamic state, the acceleration sensor 39 detects the acceleration corresponding to the motion of the acceleration sensor 39 in addition to a component of the gravitational acceleration. This makes it possible to determine the motion direction and the like of the game apparatus 10 by removing the component of the gravitational acceleration by a predetermined process. Specifically, when the game apparatus 10 having the acceleration sensor 39 is moved by being dynamically accelerated with the user's hand, it is possible to calculate various motions and/or positions of the game apparatus 10 by processing the acceleration signals generated by the acceleration sensor 39. It should be noted that even when it is assumed that the acceleration sensor 39 is in a dynamic state, it is possible to determine the tilt of the game apparatus 10 relative to the direction of gravity by removing the acceleration corresponding to the motion of the acceleration sensor 39 by a predetermined process.
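Under the static-state assumption described above, the tilt relative to the direction of gravity can be estimated from the detected acceleration vector alone. The sketch below illustrates this; the axis convention (z pointing "down" when the apparatus lies flat) and the use of degrees are assumptions for illustration.

```python
import math

def tilt_from_acceleration(ax, ay, az):
    """Static-state assumption: the measured acceleration is gravity only, so the angle
    between the measured vector and the apparatus's 'down' axis gives the tilt."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude == 0.0:
        return None                                   # no usable reading
    cos_tilt = max(-1.0, min(1.0, az / magnitude))    # assumed: z axis points 'down' when flat
    return math.degrees(math.acos(cos_tilt))

# usage sketch: lying flat reads roughly (0, 0, 1 g); tilted 90 degrees reads (1 g, 0, 0)
print(round(tilt_from_acceleration(0.0, 0.0, 9.8)))   # 0
print(round(tilt_from_acceleration(9.8, 0.0, 0.0)))   # 90
```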
  • As a second example, the motion of the game apparatus 10 may be calculated using the amount of movement of a camera image captured in real time by the real camera built into the game apparatus 10 (the outer capturing section 23 or the inner capturing section 24). For example, when the motion of the game apparatus 10 has changed the imaging direction and the imaging position of the real camera, the camera image captured by the real camera also changes. Accordingly, it is possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, using changes in the camera image captured by the real camera built into the game apparatus 10. As an example, a predetermined physical body is recognized in a camera image captured by the real camera built into the game apparatus 10, and the imaging angles and the imaging positions of the physical body are chronologically compared to one another. This makes it possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, from the amounts of changes in the imaging angle and the imaging position. As another example, the entire camera images captured by the real camera built into the game apparatus 10 are chronologically compared to one another. This makes it possible to calculate the angle of change in the imaging direction of the real camera, the amount of movement of the imaging position, and the like, from the amounts of changes in the imaging direction and the imaging range in the entire image.
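One simple way to realize the chronological comparison of entire camera images is a brute-force search for the translation that best aligns two frames. The sketch below illustrates the idea on tiny grayscale frames; the real apparatus may use a different or more sophisticated matching method, and the tie-breaking rule is an assumption.

```python
def estimate_shift(prev, curr, max_shift=2):
    """Find the (dx, dy) translation of curr relative to prev that minimizes the mean
    absolute difference over the overlapping region, preferring smaller shifts on ties."""
    h, w = len(prev), len(prev[0])
    best_key, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad, count = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        sad += abs(curr[sy][sx] - prev[y][x])
                        count += 1
            key = (sad / count, abs(dx) + abs(dy))
            if best_key is None or key < best_key:
                best_key, best_shift = key, (dx, dy)
    return best_shift

# usage sketch: the second frame is the first frame shifted one pixel to the right
prev = [[0, 0, 9, 0], [0, 0, 9, 0], [0, 0, 9, 0], [0, 0, 9, 0]]
curr = [[0, 0, 0, 9], [0, 0, 0, 9], [0, 0, 0, 9], [0, 0, 0, 9]]
print(estimate_shift(prev, curr))   # (1, 0): the camera image moved one pixel horizontally
```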
  • As a third example, the motion of the game apparatus 10 may be calculated by combining at least two of: the angular velocities generated in the game apparatus 10; the accelerations generated in the game apparatus 10; and a camera image captured by the game apparatus 10. This makes it possible, in a state where the motion of the game apparatus 10 is difficult to estimate from one parameter alone, to calculate the motion by combining that parameter with another parameter, thereby compensating for such a state. As an example, to calculate the motion of the game apparatus 10 in the second example described above, if the captured camera image has moved chronologically in a horizontal direction, it may be difficult to accurately determine whether the capturing angle of the game apparatus 10 has rotated about the vertical axis, or the game apparatus 10 has moved horizontally. In this case, it is possible to easily determine, using the angular velocities generated in the game apparatus 10, whether the game apparatus 10 has moved so as to rotate or moved horizontally.
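The disambiguation in this third example can be expressed as a small decision rule: when the camera image shifts horizontally, the gyroscope's yaw rate distinguishes a rotation about the vertical axis from a horizontal translation. The threshold and the function name below are arbitrary assumptions for illustration.

```python
def classify_horizontal_motion(image_dx_pixels, yaw_rate_deg_per_s, yaw_threshold=5.0):
    """Disambiguate a horizontal image shift using the angular velocity: a noticeable
    yaw rate implies rotation; otherwise assume a horizontal translation."""
    if image_dx_pixels == 0:
        return "no horizontal motion"
    if abs(yaw_rate_deg_per_s) >= yaw_threshold:
        return "rotation about the vertical axis"
    return "horizontal translation"

# usage sketch
print(classify_horizontal_motion(12, 30.0))   # rotation about the vertical axis
print(classify_horizontal_motion(12, 0.5))    # horizontal translation
```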
  • In addition, as a fourth example, the motion of the game apparatus 10 may be calculated using so-called AR (augmented reality) technology.
  • In addition, in the above descriptions, as an example, mainly, a planar image (a planar view image, as opposed to the stereoscopically visible image described above) of the real world based on a camera image CI acquired from either one of the outer capturing section 23 and the inner capturing section 24 is displayed on the upper LCD 22. Alternatively, an image stereoscopically visible with the naked eye (a stereoscopic image) may be displayed on the upper LCD 22. For example, as described above, the game apparatus 10 can display on the upper LCD 22 a stereoscopically visible image (stereoscopic image) using camera images acquired from the left outer capturing section 23 a and the right outer capturing section 23 b. In this case, drawing is performed such that the enemy objects EO are present in the stereoscopic image displayed on the upper LCD 22, and the acquisition target object AO appears from the stereoscopic image.
  • For example, to draw the enemy objects EO and the acquisition target object AO in the stereoscopic image, the image processing described above is performed using a left-eye image obtained from the left outer capturing section 23 a and a right-eye image obtained from the right outer capturing section 23 b. Specifically, in the image processing described above, either one of the left-eye image and the right-eye image is used as the camera image from which a face image is extracted by performing a face recognition process, and the enemy objects EO or the acquisition target object AO obtained by mapping a texture of the face image obtained from the one of the images are set in the virtual space. Further, a perspective transformation is performed from two virtual cameras (a stereo camera), on the enemy objects EO, the acquisition target object AO, and the bullet object BO, and the like that are placed in the virtual space, whereby a left-eye virtual world image and a right-eye virtual world image are obtained. Then, a left-eye display image is generated by combining a left-eye real world image with the left-eye virtual world image, and a right-eye display image is generated by combining a right-eye real world image with the right-eye virtual world image. Then, the left-eye display image and the right-eye display image are output to the upper LCD 22.
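The per-eye composition described above reduces to rendering the virtual space once per virtual camera and overlaying each result on the matching real world image. A schematic sketch follows; render and overlay are placeholder callables standing in for the actual perspective transformation and combining steps.

```python
def make_stereo_display_images(left_real, right_real, scene, render, overlay):
    """Generate the left-eye and right-eye display images: each eye's virtual world image
    is rendered from its own virtual camera and combined with that eye's real world image
    (render/overlay are placeholder callables)."""
    left_virtual = render(scene, camera="left")
    right_virtual = render(scene, camera="right")
    left_display = overlay(background=left_real, foreground=left_virtual)
    right_display = overlay(background=right_real, foreground=right_virtual)
    return left_display, right_display

# usage sketch with trivial stand-ins
render = lambda scene, camera: f"{scene}:{camera}"
overlay = lambda background, foreground: (background, foreground)
print(make_stereo_display_images("L-real", "R-real", "EO+AO+BO", render, overlay))
```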
  • In addition, in the above descriptions, a real-time moving image captured by the real camera built into the game apparatus 10 is displayed on the upper LCD 22, and display is performed such that the enemy objects EO and the acquisition target object AO appear in the moving image (camera image) captured by the real camera. In the present invention, however, the images to be displayed on the upper LCD 22 have various possible variations. As a first example, a moving image recorded in advance, or a moving image or the like obtained from television broadcast or another device, is displayed on the upper LCD 22. In this case, the moving image is displayed on the upper LCD 22, and the enemy objects EO and the acquisition target object AO appear in the moving image. As a second example, a still image obtained from the real camera built into the game apparatus 10 or another real camera is displayed on the upper LCD 22. In this case, the still image obtained from the real camera is displayed on the upper LCD 22, and the enemy objects EO and the acquisition target object AO appear in the still image. Here, the still image obtained from the real camera may be a still image of the real world captured in real time by the real camera built into the game apparatus 10, or may be a still image of the real world photographed in advance by the real camera or another real camera, or may be a still image obtained from television broadcast or another device.
  • In addition, in the above embodiments, the upper LCD 22 is a parallax barrier type liquid crystal display device, and therefore is capable of switching between stereoscopic display and planar display by controlling the on/off states of the parallax barrier. In another embodiment, for example, the upper LCD 22 may be a lenticular type liquid crystal display device, and therefore may be capable of displaying a stereoscopic image and a planar image. In the case of the lenticular type, an image is displayed stereoscopically by dividing the two images captured by the outer capturing section 23 each into vertical strips, and alternately arranging the divided vertical strips. In the case of the lenticular type, an image can also be displayed in a planar manner by causing the user's right and left eyes to view one image captured by the inner capturing section 24. That is, even the lenticular type liquid crystal display device is capable of causing the user's left and right eyes to view the same image by dividing one image into vertical strips, and alternately arranging the divided vertical strips. This makes it possible to display an image captured by the inner capturing section 24 as a planar image.
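The vertical-strip arrangement for a lenticular display can be sketched column by column: alternate columns are taken from the left-eye and right-eye images, and passing the same image for both inputs yields the planar display described above. The one-pixel strip width is an assumption for illustration.

```python
def interleave_columns(left_image, right_image):
    """Build a display image whose even columns come from the left-eye image and whose
    odd columns come from the right-eye image (vertical strips one pixel wide)."""
    height, width = len(left_image), len(left_image[0])
    return [
        [left_image[y][x] if x % 2 == 0 else right_image[y][x] for x in range(width)]
        for y in range(height)
    ]

# usage sketch: two 2x4 images; using the same image twice gives the planar display
left = [["L"] * 4, ["L"] * 4]
right = [["R"] * 4, ["R"] * 4]
print(interleave_columns(left, right))   # [['L', 'R', 'L', 'R'], ['L', 'R', 'L', 'R']]
print(interleave_columns(left, left))    # every column from the same image: planar view
```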
  • In addition, in the above embodiments, the descriptions are given using the hand-held game apparatus 10. The present invention, however, may be achieved by causing a stationary game apparatus or an information processing apparatus, such as a general personal computer, to execute the image processing program according to the present invention. Alternatively, in another embodiment, not only a game apparatus but also any hand-held electronic device may be used, such as a personal digital assistant (PDA), a mobile phone, a personal computer, or a camera. For example, a mobile phone may include two display sections and a real camera on the main surface of a housing.
  • In addition, in the above descriptions, the image processing is performed by the game apparatus 10. Alternatively, at least some of the process steps in the image processing may be performed by another device. For example, when the game apparatus 10 is configured to communicate with another device (e.g., a server or another game apparatus), the process steps in the image processing may be performed by the cooperation of the game apparatus 10 and said another device. As an example, the game apparatus 10 may perform a face image acquisition process and game processing that permits face images to be saved in an accumulating manner, while the face images permitted to be saved when the game has been successful are saved in another device. In this case, a plurality of game apparatuses 10 save face images in the other device in an accumulating manner, which further encourages the collection of face images. Additionally, browsing face images saved by other game apparatuses 10 may create a different kind of enjoyment. As another example, another device may perform the processes of steps 52 through 57 of FIG. 29, and the game apparatus 10 may perform the processes of steps 58 and 59 of FIG. 29, by the cooperation of the game apparatus 10 and said another device. Thus, the image processing described above can be performed by a single processor, or by a plurality of processors operating in cooperation, included in an information processing system that includes at least one information processing apparatus. Further, in the above embodiments, the processing of the flow chart described above is performed in accordance with the execution of a predetermined program by the information processing section 31 of the game apparatus 10. Alternatively, some or all of the processing may be performed by a dedicated circuit provided in the game apparatus 10.
  • In addition, the shape of the game apparatus 10, and the shapes, the number, the placement, or the like of the various buttons of the operation button 14, the analog stick 15, and the touch panel 13 that are provided in the game apparatus 10 are merely illustrative, and the present invention can be achieved with other shapes, numbers, placements, and the like. Further, the processing orders, the setting values, the criterion values, and the like that are used in the image processing described above are also merely illustrative, and it is needless to say that the present invention can be achieved with other orders and values.
  • In addition, the image processing program (game program) described above may be supplied to the game apparatus 10 not only from an external storage medium, such as the external memory 45 or the data storage external memory 46, but also via a wireless or wired communication link. Further, the program may be stored in advance in a non-volatile storage device of the game apparatus 10. It should be noted that examples of the information storage medium having stored thereon the program may include a CD-ROM, a DVD, and any other optical disk storage medium similar to these, a flexible disk, a hard disk, a magnetic optical disk, and a magnetic tape, as well as a non-volatile memory. Furthermore, the information storage medium for storing the program may be a volatile memory that temporarily stores the program. Such storage media can be defined as storage media that can be read by a computer or the like. For example, a computer or the like is caused to read and execute the program stored in each of these storage media, and thereby can provide the various functions described above.
  • While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention. It is understood that the scope of the invention should be interpreted only by the appended claims. It is also understood that one skilled in the art can implement the invention in the equivalent range based on the description of the invention and common technical knowledge, from the description of the specific embodiments of the invention. Further, throughout the specification, it should be understood that terms in singular form include the concept of plurality unless otherwise specified. Thus, it should be understood that articles or adjectives indicating the singular form (e.g., “a”, “an”, “the”, and the like in English) include the concept of plurality unless otherwise specified. Furthermore, it should be understood that terms used in the present specification have meanings generally used in the art unless otherwise specified. Therefore, unless otherwise defined, all the technical terms and jargon have the same meanings as those generally understood by one skilled in the art of the invention. In the event of any contradiction, the present specification (including meanings defined herein) has priority.
  • (Appended Notes)
  • The above embodiments can be exemplified by the following forms (referred to as “appended notes”). The components included in each appended note can be combined with the components included in the other appended notes.
  • (Appended Note 1)
  • A computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute as:
  • an image acquisition step of acquiring a face image;
  • a step of creating a first character object based on the acquired face image; and
  • a game processing step of executing a game by displaying the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing step including:
      • a step of contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
      • a step of invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
  • (Appended Note 2)
  • A computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute as:
  • an image acquisition step of acquiring at least one face image;
  • a step of creating a first character object, the first character object including one of the acquired face images; and
  • a game processing step of executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing step including:
      • a step of advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
      • a step of, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
  • (Appended Note 3)
  • A computer-readable storage medium having stored thereon a game program to be executed by a computer that displays an image on a display device, the game program causing the computer to execute as:
  • an image acquisition step of acquiring a face image;
  • a step of creating a character object, the character object including a face image obtained by deforming the acquired face image;
  • a game processing step of receiving an operation of a player, and advancing a game related to the face image;
  • a step of determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
  • a step of, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
  • (Appended Note 4)
  • An image processing apparatus connectable to a display device, the image processing apparatus comprising:
  • image acquisition means for acquiring a face image;
  • means for creating a first character object based on the acquired face image; and
  • game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing means including:
      • means for contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
      • means for invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
  • (Appended Note 5)
  • An image processing apparatus connectable to a display device, the image processing apparatus comprising:
  • image acquisition means for acquiring at least one face image; means for creating a first character object, the first character object including one of the acquired face images; and
  • game processing means for executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing means including:
      • means for advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
      • means for, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
  • (Appended Note 6)
  • An image processing apparatus connectable to a display device, the image processing apparatus comprising:
  • image acquisition means for acquiring a face image;
  • means for creating a character object, the character object including a face image obtained by deforming the acquired face image;
  • game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
  • means for determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
  • means for, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
  • (Appended Note 7)
  • An image processing apparatus comprising:
  • a display device;
  • image acquisition means for acquiring a face image;
  • means for creating a first character object based on the acquired face image; and
  • game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing means including:
      • means for contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
      • means for invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
  • (Appended Note 8)
  • An image processing apparatus comprising:
  • a display device;
  • image acquisition means for acquiring at least one face image;
  • means for creating a first character object, the first character object including one of the acquired face images; and
  • game processing means for executing a game by displaying on the display device the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing means including:
      • means for advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
      • means for, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
  • (Appended Note 9)
  • An image processing apparatus comprising:
  • a display device;
  • image acquisition means for acquiring a face image;
  • means for creating a character object, the character object including a face image obtained by deforming the acquired face image;
  • game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
  • means for determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
  • means for, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
  • (Appended Note 10)
  • An image processing system comprising:
  • a capturing device;
  • a display device that displays information including an image acquired by the capturing device; and
  • an image processing apparatus that cooperates with the capturing device and the display device,
  • the image processing apparatus including:
      • image acquisition means for acquiring a face image;
      • means for creating a first character object based on the acquired face image; and
      • game processing means for executing a game by displaying on the display device the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing means including:
      • means for contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
      • means for invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
  • (Appended Note 11)
  • An image processing system comprising:
  • a capturing device;
  • a display device that displays information including an image acquired by the capturing device; and
  • an image processing apparatus that cooperates with the capturing device and the display device,
  • the image processing apparatus including:
      • image acquisition means for acquiring at least one face image;
      • means for creating a first character object, the first character object including one of the acquired face images; and
      • game processing means for executing a game by displaying on the display device the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing means including:
      • means for advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
      • means for, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
  • (Appended Note 12)
  • An image processing system comprising:
  • a capturing device;
  • a display device that displays information including an image acquired by the capturing device; and
  • an image processing apparatus that cooperates with the capturing device and the display device,
  • the image processing apparatus including:
      • image acquisition means for acquiring a face image;
      • means for creating a character object, the character object including a face image obtained by deforming the acquired face image;
      • game processing means for receiving an operation of a player, and advancing a game related to the face image by displaying the character object on the display device;
      • means for determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
      • means for, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
  • (Appended Note 13)
  • An information processing method performed by a computer that displays an image on a display device, the computer executing:
  • an image acquisition step of acquiring a face image;
  • a step of creating a first character object based on the acquired face image; and
  • a game processing step of executing a game by displaying the first character object together with a second character object, the second character object being different from the first character object,
  • the game processing step including:
      • a step of contributing to a success in the game by an attack on the first character object, the attack made in accordance with an operation of a player; and
      • a step of invalidating an attack on the second character object, the attack made in accordance with an operation of the player.
  • (Appended Note 14)
  • An information processing method performed by a computer that displays an image on a display device, the computer executing:
  • an image acquisition step of acquiring at least one face image;
  • a step of creating a first character object, the first character object including one of the acquired face images; and
  • a game processing step of executing a game by displaying the first character object together with a second character object and a third character object, the second character object being smaller in dimensions than the first character object and including the one of the acquired face images, the third character object being smaller in dimensions than the first character object and including a face image other than the one of the acquired face images,
  • the game processing step including:
      • a step of advancing deformation of the face image included in the first character object by an attack on the third character object, the attack made in accordance with an operation of a player; and
      • a step of, by an attack on the second character object in accordance with an operation of the player, reversing the deformation such that the face image included in the first character object approaches the acquired original face image.
  • (Appended Note 15)
  • An information processing method performed by a computer that displays an image on a display device, the computer executing:
  • an image acquisition step of acquiring a face image;
  • a step of creating a character object, the character object including a face image obtained by deforming the acquired face image;
  • a game processing step of receiving an operation of a player, and advancing a game related to the face image;
  • a step of determining a success or a failure in the game, the success or the failure made in accordance with an operation of the player; and
  • a step of, when a result of the game has been successful, restoring the deformed face image to the acquired original face image.
  • A storage medium having stored thereon a game program, an image processing apparatus, an image processing system, and an image processing method, according to the present invention can generate a new image by combining a real world image with a virtual world image, and therefore are suitable for use as a game program, an image processing apparatus, an image processing system, an image processing method, and the like that perform a process of displaying various images on a display device.

Claims (22)

1. A computer-readable storage medium having stored thereon a game program to be executed by a computer of a game apparatus that displays an image on a display device, the game program causing the computer to execute:
an image acquisition step of acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
a step of creating a first character object, the first character object being a character object including the face image stored in the first storage area;
a first game processing step of, in the predetermined game, advancing a game related to the first character object in accordance with an operation of a player;
a determination step of determining a success in the game related to the first character object; and
a step of, at least when a success in the game has been determined in the determination step, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
2. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
in the image acquisition step, the face image is acquired and temporarily stored in the first storage area before the start of the predetermined game.
3. The computer-readable storage medium having stored thereon the game program according to claim 1, further causing the computer to execute:
a step of creating a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area, wherein
in the first game processing step, in the predetermined game, a game related to the second character object is additionally advanced in accordance with an operation of the player.
4. The computer-readable storage medium having stored thereon the game program according to claim 1, further causing the computer to execute:
a step of creating a second character object, the second character object being a character object including a face image selected automatically or by the player from among the face images saved in the second storage area; and
a second game processing step of advancing a game related to the second character object in accordance with an operation of the player.
5. The computer-readable storage medium having stored thereon the game program according to claim 2, wherein
the game apparatus is capable of acquiring an image from a capturing device, and
in the image acquisition step, the face image is acquired from the capturing device before the start of the predetermined game.
6. The computer-readable storage medium having stored thereon the game program according to claim 5, wherein
the game apparatus is capable of acquiring an image from a first capturing device that captures a front direction of a display surface of the display device, and an image from a second capturing device that captures a direction of a back surface of the display surface of the display device, the first capturing device and the second capturing device serving as the capturing device, and
the image acquisition step includes:
a step of acquiring a face image captured by the first capturing device in preference to acquiring a face image captured by the second capturing device; and
a step of, after the face image from the first capturing device has been saved in the second storage area, permitting the face image captured by the second capturing device to be acquired.
7. The computer-readable storage medium having stored thereon the game program according to claim 1, further causing the computer to execute:
a step of specifying attributes of the face images saved in the second storage area; and
a step of prompting the player to acquire a face image corresponding to an attribute different from the attributes specified from the face images saved in the second storage area.
8. The computer-readable storage medium having stored thereon the game program according to claim 3, wherein
the first game processing step includes:
a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player, and
in the first game processing step, an attack on the first character object is a valid attack for succeeding in the game related to the first character object, and an attack on the second character object is an invalid attack for succeeding in the game related to the first character object.
9. The computer-readable storage medium having stored thereon the game program according to claim 4, further causing the computer to execute:
a step of creating a third character object, the third character object being a character object including a face image different from the face image included in the second character object, wherein
the second game processing step includes:
a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player, and
in the second game processing step, an attack on the second character object is a valid attack for succeeding in the game related to the second character object, and an attack on the third character object is an invalid attack for succeeding in the game related to the second character object.
10. The computer-readable storage medium having stored thereon the game program according to claim 3, further causing the computer to execute:
a step of creating a third character object, the third character object being a character object including the face image stored in the first storage area and being smaller in dimensions than the first character object; and
a step of creating a fourth character object, the fourth character object being a character object including a face image different from the face image stored in the first storage area and being smaller in dimensions than the first character object, wherein
the first game processing step includes:
a step of advancing the game related to the first character object by attacking the character objects in accordance with an operation of the player;
a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the first character object; and
a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the first character object approaches the original face image stored in the first storage area.
11. The computer-readable storage medium having stored thereon the game program according to claim 4, further causing the computer to execute:
a step of creating a third character object, the third character object being a character object including the same face image as the face image included in the second character object and being smaller in dimensions than the second character object; and
a step of creating a fourth character object, the fourth character object being a character object including a face image different from the face image included in the second character object and being smaller in dimensions than the second character object, wherein
the second game processing step includes:
a step of advancing the game related to the second character object by attacking the character objects in accordance with an operation of the player;
a step of, when the fourth character object has been attacked, advancing deformation of the face image included in the second character object; and
a step of, when the third character object has been attacked, reversing the deformation such that the face image included in the second character object approaches the original face image saved in the second storage area.
12. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
in the step of creating the first character object, a character object including a face image obtained by deforming the face image stored in the first storage area is created as the first character object, and
the first game processing step includes:
a step of, when the game related to the first character object has been successful, restoring the deformed face image to the original face image stored in the first storage area.
13. The computer-readable storage medium having stored thereon the game program according to claim 4, wherein
in the step of creating the second character object, a character object including a face image obtained by deforming the face image saved in the second storage area is created as the second character object, and
the second game processing step includes:
a step of, when the game related to the second character object has been successful, restoring the deformed face image to the original face image saved in the second storage area.
14. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
in the image acquisition step, the face image is acquired and temporarily stored in the first storage area during the predetermined game, and
in the first game processing step, in accordance with the creation of the first character object based on the acquisition of the face image during the predetermined game, the first character object is caused to appear in the predetermined game, and the game related to the first character object is advanced.
15. The computer-readable storage medium having stored thereon the game program according to claim 14, further causing the computer to execute:
a captured image acquisition step of acquiring a captured image captured by a real camera;
a display image generation step of generating a display image in which a virtual character object that appears in the predetermined game is placed so as to have, as a background, the captured image acquired in the captured image acquisition step; and
a display control step of displaying on the display device the display image generated in the display image generation step, wherein
in the image acquisition step, during the predetermined game, at least one face image is extracted from the captured image displayed on the display device, and is temporarily stored in the first storage area.
16. The computer-readable storage medium having stored thereon the game program according to claim 15, wherein
in the display image generation step, the display image is generated by placing the first character object such that, when displayed on the display device, the first character object overlaps a position of the face image in the captured image, the face image extracted in the image acquisition step.
17. The computer-readable storage medium having stored thereon the game program according to claim 16, wherein
in the captured image acquisition step, captured images of a real world captured in real time by the real camera are repeatedly acquired,
in the display image generation step, the captured images repeatedly acquired in the captured image acquisition step are sequentially set as the background,
in the image acquisition step, face images corresponding to the already extracted face image are repeatedly acquired from the captured images sequentially set as the background,
in the step of creating the first character object, the first character object is repeatedly created so as to include the face images repeatedly acquired in the image acquisition step, and
in the display image generation step, the display image is generated by placing the repeatedly created first character object such that, when displayed on the display device, the repeatedly created first character object overlaps, in the respective captured images, the positions of the face images repeatedly acquired in the image acquisition step.
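Claim 17 repeats the extraction every frame so the first character object keeps following the same face as the camera and the subject move. Below is a minimal sketch of one way to associate "face images corresponding to the already extracted face image" across frames, using nearest-centre matching; the matching rule and the `max_jump` threshold are illustrative assumptions rather than the claimed tracking method.

```python
def centre(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def track_face(previous_box, candidate_boxes, max_jump=80.0):
    """Return the detection closest to the previously extracted face, if any."""
    if not candidate_boxes:
        return None
    px, py = centre(previous_box)
    best = min(candidate_boxes,
               key=lambda b: (centre(b)[0] - px) ** 2 + (centre(b)[1] - py) ** 2)
    bx, by = centre(best)
    if (bx - px) ** 2 + (by - py) ** 2 > max_jump ** 2:
        return None  # the face corresponding to the extracted one was lost
    return best

def update(previous_box, frame_boxes, recreate_character):
    """Re-create the character with the freshly acquired face each frame."""
    box = track_face(previous_box, frame_boxes)
    if box is not None:
        recreate_character(box)   # repeatedly created first character object
        return box
    return previous_box
```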
18. The computer-readable storage medium having stored thereon the game program according to claim 1, wherein
the game apparatus is capable of using image data stored in storage means for storing data in a non-temporary manner, and
in the image acquisition step, before the start of the predetermined game, at least one face image is extracted from the image data stored in the storage means, and is temporarily stored in the first storage area.
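Claim 18 covers the other acquisition route: before the game starts, faces are pulled from image data already saved on the device rather than from the live camera. A minimal sketch using Pillow, where the photo directory and the `detect_faces` helper are assumptions; the actual apparatus would read from the game system's non-temporary storage.

```python
from pathlib import Path

from PIL import Image  # stand-in for the apparatus's image decoding

def preload_faces(photo_dir, detect_faces, first_storage_area, limit=8):
    """Fill the temporary first storage area before the predetermined game starts."""
    for path in sorted(Path(photo_dir).glob("*.jpg"))[:limit]:
        image = Image.open(path)
        for box in detect_faces(image):           # (left, top, right, bottom)
            first_storage_area.append(image.crop(box))
    return first_storage_area
```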
19. An image processing apparatus connectable to a display device, the image processing apparatus comprising:
image acquisition means for acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
means for creating a character object, the character object including the face image stored in the first storage area;
game processing means for, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player by displaying the game related to the character object on the display device;
determination means for determining a success in the game related to the character object; and
means for, at least when a success in the game has been determined by the determination means, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
20. An image processing apparatus comprising:
a display device;
image acquisition means for acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
means for creating a character object, the character object including the face image stored in the first storage area;
game processing means for, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player by displaying the game related to the character object on the display device;
determination means for determining a success in the game related to the character object; and
means for, at least when a success in the game has been determined by the determination means, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
21. An image processing system that displays information including an image on a display device, the image processing system comprising:
image acquisition means for acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
means for creating a character object, the character object including the face image stored in the first storage area;
game processing means for, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player by displaying the game related to the character object on the display device;
determination means for determining a success in the game related to the character object; and
means for, at least when a success in the game has been determined by the determination means, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
22. An image processing method performed by a computer that displays an image on a display device, the computer executing:
an image acquisition step of acquiring a face image and temporarily storing the acquired face image in a first storage area, during a predetermined game or before a start of the predetermined game;
a step of creating a character object, the character object including the face image stored in the first storage area;
a game processing step of, in the predetermined game, advancing a game related to the character object in accordance with an operation of a player;
a determination step of determining a success in the game related to the character object; and
a step of, at least when a success in the game has been determined in the determination step, saving the face image stored in the first storage area, in a second storage area in an accumulating manner.
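Claims 19 through 22 restate the same overall flow as an apparatus, a system, and a method: acquire a face into temporary storage, build a character object around it, play the game, determine success, and only then save the face into the accumulating second storage area. The sketch below is a minimal end-to-end illustration under those assumptions; every function argument here is a placeholder, not part of the claimed apparatus.

```python
def run_face_game(acquire_face, play_game, second_storage_area):
    first_storage_area = []                  # temporary storage

    face = acquire_face()                    # during or before the predetermined game
    first_storage_area.append(face)

    character = {"face": face}               # character object including the face image
    succeeded = play_game(character)         # advanced in accordance with player operations

    if succeeded:                            # success determination
        # Save in an accumulating manner: faces from earlier successes are kept.
        second_storage_area.append(face)

    first_storage_area.clear()               # the temporary copy is discarded
    return succeeded
```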
US13/080,989 2010-10-15 2011-04-06 Storage medium having stored thereon game program, image processing apparatus, image processing system, and image processing method Abandoned US20120094773A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2010-232869 2010-10-15
JP2010232869 2010-10-15
JP2010-293443 2010-12-28
JP2010293443A JP5827007B2 (en) 2010-10-15 2010-12-28 Game program, image processing apparatus, image processing system, and image processing method

Publications (1)

Publication Number Publication Date
US20120094773A1 true US20120094773A1 (en) 2012-04-19

Family

ID=45934619

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/080,989 Abandoned US20120094773A1 (en) 2010-10-15 2011-04-06 Storage medium having stored thereon game program, image processing apparatus, image processing system, and image processing method

Country Status (2)

Country Link
US (1) US20120094773A1 (en)
JP (1) JP5827007B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015196091A (en) * 2014-04-02 2015-11-09 アップルジャック 199 エル.ピー. Sensor-based gaming system for avatar to represent player in virtual environment
JP5925347B1 (en) * 2015-02-26 2016-05-25 株式会社Cygames Information processing system and program, server, terminal, and medium
JP6727505B2 (en) * 2017-08-31 2020-07-22 株式会社コナミデジタルエンタテインメント Game management device and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08305892A (en) * 1995-05-11 1996-11-22 Sega Enterp Ltd Image processor and game device equipped with the same
JP2000235656A (en) * 1999-02-15 2000-08-29 Sony Corp Image processor, method and program providing medium
JP3615501B2 (en) * 2000-11-09 2005-02-02 株式会社ソニー・コンピュータエンタテインメント Object forming method, object forming program, recording medium on which object forming program is recorded, and gaming apparatus
JP2005006992A (en) * 2003-06-19 2005-01-13 Aruze Corp Game device, game program, and recording medium recording the game program
JP2005196670A (en) * 2004-01-09 2005-07-21 Sony Corp Mobile terminal system and method for generating object
JP4632250B2 (en) * 2005-09-15 2011-02-16 Kddi株式会社 Entertainment device for determining attributes from a face
JP2008183053A (en) * 2007-01-26 2008-08-14 Taito Corp Game system and game program
JP2010142592A (en) * 2008-12-22 2010-07-01 Nintendo Co Ltd Game program and game device
JP5558730B2 (en) * 2009-03-24 2014-07-23 株式会社バンダイナムコゲームス Program and game device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5595389A (en) * 1993-12-30 1997-01-21 Eastman Kodak Company Method and apparatus for producing "personalized" video games using CD discs
US20030083132A1 (en) * 2000-04-17 2003-05-01 Igt System for and method of capturing a player's image for incorporation into a game
US20040147314A1 (en) * 2000-10-11 2004-07-29 Igt Frame capture of actual game play
US6863608B1 (en) * 2000-10-11 2005-03-08 Igt Frame buffer capture of actual game play
US20030100363A1 (en) * 2001-11-28 2003-05-29 Ali Guiseppe C. Method and apparatus for inputting appearance of computer operator into a computer program
US6919892B1 (en) * 2002-08-14 2005-07-19 Avaworks, Incorporated Photo realistic talking head creation system and method
US20050078125A1 (en) * 2003-09-25 2005-04-14 Nintendo Co., Ltd. Image processing apparatus and storage medium storing image processing program
US20080168807A1 (en) * 2007-01-17 2008-07-17 Dominique Dion Coin operated entertainment system
US20090305782A1 (en) * 2008-06-10 2009-12-10 Oberg Gregory Keith Double render processing for handheld video game device
US20120083330A1 (en) * 2010-10-05 2012-04-05 Zynga Game Network, Inc. System and Method for Generating Achievement Objects Encapsulating Captured Event Playback

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9095774B2 (en) * 2010-09-24 2015-08-04 Nintendo Co., Ltd. Computer-readable storage medium having program stored therein, apparatus, system, and method, for performing game processing
US20120077582A1 (en) * 2010-09-24 2012-03-29 Hal Laboratory Inc. Computer-Readable Storage Medium Having Program Stored Therein, Apparatus, System, and Method, for Performing Game Processing
US20120214591A1 (en) * 2011-02-22 2012-08-23 Nintendo Co., Ltd. Game device, storage medium storing game program, game system, and game process method
US20120268493A1 (en) * 2011-04-22 2012-10-25 Nintendo Co., Ltd. Information processing system for augmented reality
US20130222647A1 (en) * 2011-06-27 2013-08-29 Konami Digital Entertainment Co., Ltd. Image processing device, control method for an image processing device, program, and information storage medium
US8866848B2 (en) * 2011-06-27 2014-10-21 Konami Digital Entertainment Co., Ltd. Image processing device, control method for an image processing device, program, and information storage medium
US20130050500A1 (en) * 2011-08-31 2013-02-28 Nintendo Co., Ltd. Information processing program, information processing system, information processing apparatus, and information processing method, utilizing augmented reality technique
US9710967B2 (en) * 2011-08-31 2017-07-18 Nintendo Co., Ltd. Information processing program, information processing system, information processing apparatus, and information processing method, utilizing augmented reality technique
US8922588B2 (en) 2011-08-31 2014-12-30 Nintendo Co., Ltd. Information processing program, information processing system, information processing apparatus, and information processing method, utilizing augmented reality technique
US10441890B2 (en) * 2012-01-18 2019-10-15 Kabushiki Kaisha Square Enix Game apparatus
US9478068B2 (en) * 2012-10-02 2016-10-25 Nintendo Co., Ltd. Computer-readable medium, image processing device, image processing system, and image processing method
US20140092133A1 (en) * 2012-10-02 2014-04-03 Nintendo Co., Ltd. Computer-readable medium, image processing device, image processing system, and image processing method
US9691179B2 (en) * 2012-11-06 2017-06-27 Nintendo Co., Ltd. Computer-readable medium, information processing apparatus, information processing system and information processing method
US20140125701A1 (en) * 2012-11-06 2014-05-08 Nintendo Co., Ltd. Computer-readable medium, information processing apparatus, information processing system and information processing method
US20140232746A1 (en) * 2013-02-21 2014-08-21 Hyundai Motor Company Three dimensional augmented reality display apparatus and method using eye tracking
US20160196662A1 (en) * 2013-08-16 2016-07-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for manufacturing virtual fitting model image
US20150109481A1 (en) * 2013-10-18 2015-04-23 Nintendo Co., Ltd. Computer-readable recording medium recording information processing program, information processing apparatus, information processing system, and information processing method
US9602740B2 (en) * 2013-10-18 2017-03-21 Nintendo Co., Ltd. Computer-readable recording medium recording information processing program, information processing apparatus, information processing system, and information processing method for superimposing a virtual image on a captured image of real space
US20150302635A1 (en) * 2014-03-17 2015-10-22 Meggitt Training Systems Inc. Method and apparatus for rendering a 3-dimensional scene
US9875573B2 (en) * 2014-03-17 2018-01-23 Meggitt Training Systems, Inc. Method and apparatus for rendering a 3-dimensional scene
US10282895B2 (en) 2014-07-24 2019-05-07 Arm Limited Transparency parameter determination when rendering a scene for output in graphics processing systems
WO2016122973A1 (en) * 2015-01-26 2016-08-04 Brian Mullins Real time texture mapping
US9659381B2 (en) 2015-01-26 2017-05-23 Daqri, Llc Real time texture mapping for augmented reality system
US10614619B2 (en) 2015-02-27 2020-04-07 Arm Limited Graphics processing systems
US20180053490A1 (en) * 2015-02-27 2018-02-22 Sharp Kabushiki Kaisha Display device and method of displaying image on display device
US10049605B2 (en) * 2015-03-20 2018-08-14 Ricoh Company, Limited Display apparatus, display control method, and display system
US20180005555A1 (en) * 2015-03-20 2018-01-04 Ricoh Company, Ltd. Display apparatus, display control method, and display system
US10607063B2 (en) * 2015-07-28 2020-03-31 Sony Corporation Information processing system, information processing method, and recording medium for evaluating a target based on observers
US10089776B2 (en) * 2015-09-04 2018-10-02 Arm Limited Graphics processing systems
US10636213B2 (en) 2015-10-26 2020-04-28 Arm Limited Graphics processing systems
US10341478B2 (en) * 2017-07-03 2019-07-02 Essential Products, Inc. Handheld writing implement form factor mobile device
US10462345B2 (en) 2017-08-11 2019-10-29 Essential Products, Inc. Deformable structure that compensates for displacement of a camera module of a camera accessory
US11471770B2 (en) * 2018-07-23 2022-10-18 Cygames, Inc. Game program, game server, game system, and game device
US11294453B2 (en) * 2019-04-23 2022-04-05 Foretell Studios, LLC Simulated reality cross platform system
US20230015224A1 (en) * 2020-01-14 2023-01-19 Hewlett-Packard Development Company, L.P. Face orientation-based cursor positioning on display screens
US20220212104A1 (en) * 2020-03-18 2022-07-07 Tencent Technology (Shenzhen) Company Limited Display method and apparatus for virtual environment picture, and device and storage medium
US11759709B2 (en) * 2020-03-18 2023-09-19 Tencent Technology (Shenzhen) Company Limited Display method and apparatus for virtual environment picture, and device and storage medium
US20220258054A1 (en) * 2021-02-15 2022-08-18 Nintendo Co., Ltd. Non-transitory computer-readable storage medium having stored therein information processing program, information processing apparatus, information processing system, and information processing method
US11738265B2 (en) * 2021-02-15 2023-08-29 Nintendo Co., Ltd. Non-transitory computer-readable storage medium having stored therein information processing program, information processing apparatus, information processing system, and information processing method
US11771991B2 (en) 2021-02-15 2023-10-03 Nintendo Co., Ltd. Non-transitory computer-readable storage medium having stored therein information processing program, information processing apparatus, information processing system, and information processing method

Also Published As

Publication number Publication date
JP5827007B2 (en) 2015-12-02
JP2012101024A (en) 2012-05-31

Similar Documents

Publication Publication Date Title
US20120094773A1 (en) Storage medium having stored thereon game program, image processing apparatus, image processing system, and image processing method
US8956227B2 (en) Storage medium recording image processing program, image processing device, image processing system and image processing method
US9495800B2 (en) Storage medium having stored thereon image processing program, image processing apparatus, image processing system, and image processing method
JP5865357B2 (en) Avatar / gesture display restrictions
TWI469813B (en) Tracking groups of users in motion capture system
US11839811B2 (en) Game processing program, game processing method, and game processing device
JP5627973B2 (en) Program, apparatus, system and method for game processing
JP3904562B2 (en) Image display system, recording medium, and program
CN105073210B User's body angle, curvature and average terminal position extraction using depth images
KR100969873B1 (en) Robot game system and robot game method relating virtual space to real space
US11738270B2 (en) Simulation system, processing method, and information storage medium
JP5675260B2 (en) Image processing program, image processing apparatus, image processing system, and image processing method
US20110305398A1 (en) Image generation system, shape recognition method, and information storage medium
JP5939733B2 (en) Image processing program, image processing apparatus, image processing system, and image processing method
JP2000350860A Mixed reality apparatus and method for generating a mixed reality space image
JP6795322B2 (en) Program and AR experience providing equipment
JP5563613B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
JP3413128B2 (en) Mixed reality presentation method
JP2009213575A (en) Multiplayer type gun game device
JP3904706B2 (en) Playground equipment and method for controlling the same
JP7371199B1 (en) Game program, game system, game device, and game control method
JP2024056210A (en) GAME PROGRAM, GAME SYSTEM, GAME DEVICE, AND GAME CONTROL METHOD
JP2024056670A (en) GAME PROGRAM, GAME SYSTEM, GAME DEVICE, AND GAME CONTROL METHOD
JP2024005495A (en) Terminal device control program, terminal device, and terminal device control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: NINTENDO CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SUZUKI, TOSHIAKI;REEL/FRAME:026083/0722

Effective date: 20110329

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION