WO2015025442A1 - Information processing device and information processing method - Google Patents

Information processing device and information processing method

Info

Publication number
WO2015025442A1
Authority
WO
WIPO (PCT)
Prior art keywords
information processing
marker
image
camera
processing apparatus
Prior art date
Application number
PCT/JP2014/002529
Other languages
French (fr)
Japanese (ja)
Inventor
義勝 金丸
佐藤 文昭
雄一 西澤
Original Assignee
Sony Computer Entertainment Inc. (株式会社ソニー・コンピュータエンタテインメント)
Priority date
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc.
Publication of WO2015025442A1 publication Critical patent/WO2015025442A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/5258 Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Definitions

  • the present invention relates to an information processing apparatus and an information processing method for performing information processing in response to a user operation.
  • Information processing devices such as portable game machines and PDAs (Personal Digital Assistants) are widely used.
  • Many information processing apparatuses are also equipped with a communication function, and multifunctional apparatuses that integrate the functions of a mobile phone, a PDA, and the like into a single device, such as smartphones, have appeared.
  • Such an information processing apparatus includes a large-capacity memory and a high-speed processor, and a user can enjoy various applications by installing an application program in the information processing apparatus.
  • A technique using AR (augmented reality) has also been proposed (see, for example, Patent Document 1).
  • AR technology fuses the real world and the virtual world by an approach different from the construction of a virtual world based on user motion recognition.
  • the present invention has been made in view of such problems, and an object thereof is to provide a technique that can effectively use the AR technique in information processing.
  • An aspect of the present invention relates to an information processing apparatus.
  • The information processing apparatus includes: a captured image acquisition unit that acquires data of a captured image from a camera capturing a real space; an image analysis unit that analyzes the captured image and detects a marker present in the captured space; a space definition unit that identifies the relative positional relationship between the camera and the captured space and defines a three-dimensional coordinate system corresponding to the captured space and a screen corresponding to the field of view of the camera; an information processing unit that executes information processing corresponding to the detected marker and arranges a virtual object corresponding to the marker in the three-dimensional coordinate system; and an output image generation unit that superimposes, on the captured image, an object image formed by projecting the virtual object onto the screen, thereby generating an output image and outputting it to a display device. The information processing unit arranges, as one of the virtual objects, an operation object that serves as a means of user operation for the information processing.
  • This information processing method is an information processing method performed by an information processing device using a captured image, and includes the steps of: acquiring data of the captured image from a camera capturing a real space and storing it in a memory; analyzing the captured image and detecting a marker present in the captured space; identifying the relative positional relationship between the camera and the captured space, and defining a three-dimensional coordinate system corresponding to the captured space and a screen corresponding to the field of view of the camera; executing information processing corresponding to the detected marker and placing a virtual object corresponding to the marker in the three-dimensional coordinate system; and superimposing an object image, formed by projecting the virtual object onto the screen, on the captured image read from the memory to generate an output image and output it to a display device.
  • The step of placing the virtual object places, as one of the virtual objects, an operation object that serves as a means of user operation for the information processing.
  • AR technology can be effectively used for information processing such as games.
  • (a) is a diagram showing the front surface of the electronic device, and
  • (b) is a diagram showing the back surface of the electronic device.
  • (a) is a diagram showing the upper surface of the electronic device,
  • (b) is a diagram showing the lower surface of the electronic device, and
  • (c) is a diagram showing the left side surface of the electronic device.
  • A diagram showing the circuit configuration of the electronic device.
  • A diagram showing the functional blocks of the information processing apparatus in the present embodiment.
  • A diagram showing an example of how the screen changes when the user brings the camera close to the marker in real space from the state in which the screen example of FIG. 11 is displayed. A diagram showing an example of a screen displayed in the battle mode.
  • A flowchart illustrating a processing procedure in which the information processing unit and the output image generation unit update the display image in response to changes in the position and orientation of the camera in the present embodiment. A diagram showing an example of the display screen in an aspect that implements ...
  • FIG. 1A shows the front surface of the information processing apparatus 10.
  • the information processing apparatus 10 is formed by a horizontally long casing, and the left and right areas gripped by the user have an arcuate outline.
  • a rectangular touch panel 50 is provided on the front surface of the information processing apparatus 10.
  • the touch panel 50 includes a display device 20 and a transparent front touch pad 21 that covers the surface of the display device 20.
  • The display device 20 is an organic EL (Electro-Luminescence) panel and displays an image.
  • the display device 20 may be a display unit such as a liquid crystal panel.
  • the front touch pad 21 is a multi-touch pad having a function of detecting a plurality of points touched at the same time, and the touch panel 50 is configured as a multi-touch screen.
  • Operation buttons 22, consisting of a triangle button 22a, a circle button 22b, a × (cross) button 22c, and a square button 22d arranged in a rhombus, are provided on the right side of the touch panel 50.
  • A direction key 23, consisting of an up key 23a, a left key 23b, a down key 23c, and a right key 23d, is provided on the left side of the touch panel 50.
  • the user can input the eight directions of up / down / left / right and diagonal by operating the direction key 23.
  • a left stick 24 a is provided below the direction key 23, and a right stick 24 b is provided below the operation button 22.
  • The user tilts the left stick 24a or the right stick 24b (hereinafter collectively referred to as the "analog sticks 24") to input a direction and an amount of tilt.
  • An L button 26a and an R button 26b are provided on the left and right tops of the housing.
  • the operation button 22, the direction key 23, the analog stick 24, the L button 26a, and the R button 26b constitute operation means operated by the user.
  • a front camera 30 is provided in the vicinity of the operation button 22.
  • A left speaker 25a and a right speaker 25b (hereinafter collectively referred to as the "speakers 25") that output sound are also provided on the front surface.
  • a HOME button 27 is provided below the left stick 24a, and a START button 28 and a SELECT button 29 are provided below the right stick 24b.
  • FIG. 1B shows the back surface of the information processing apparatus 10.
  • a rear camera 31 and a rear touch pad 32 are provided on the rear surface of the information processing apparatus 10.
  • the rear touch pad 32 is configured as a multi-touch pad, like the front touch pad 21.
  • the information processing apparatus 10 is equipped with two cameras and a touch pad on the front surface and the back surface.
  • FIG. 2A shows the upper surface of the information processing apparatus 10.
  • the L button 26a and the R button 26b are provided on the left and right ends of the upper surface of the information processing apparatus 10, respectively.
  • a power button 33 is provided on the right side of the L button 26 a, and the user turns the power on or off by pressing the power button 33.
  • the information processing apparatus 10 has a power control function of transitioning to a suspended state when a time during which the operating means is not operated (no operation time) continues for a predetermined time. When the information processing apparatus 10 enters the suspended state, the user can return the information processing apparatus 10 from the suspended state to the awake state by pressing the power button 33.
  • the game card slot 34 is an insertion slot for inserting a game card, and this figure shows a state where the game card slot 34 is covered with a slot cover.
  • An LED lamp that blinks when the game card is being accessed may be provided in the vicinity of the game card slot 34.
  • The accessory terminal 35 is a terminal for connecting a peripheral device (accessory), and this figure shows a state in which the accessory terminal 35 is covered with a terminal cover. Between the accessory terminal 35 and the R button 26b, a − button 36a and a + button 36b for adjusting the volume are provided.
  • FIG. 2B shows the lower surface of the information processing apparatus 10.
  • the memory card slot 37 is an insertion slot for inserting a memory card, and this figure shows a state in which the memory card slot 37 is covered with a slot cover.
  • an audio input / output terminal 38, a microphone 39, and a multi-use terminal 40 are provided on the lower surface of the information processing apparatus 10.
  • the multi-use terminal 40 corresponds to USB (Universal Serial Bus) and can be connected to other devices via a USB cable.
  • FIG. 2C shows the left side surface of the information processing apparatus 10.
  • a SIM card slot 41 which is a SIM card insertion slot, is provided.
  • FIG. 3 shows a circuit configuration of the information processing apparatus 10.
  • the wireless communication module 71 is configured by a wireless LAN module compliant with a communication standard such as IEEE 802.11b / g, and is connected to an external network such as the Internet via a wireless access point.
  • the wireless communication module 71 may have a Bluetooth (registered trademark) protocol communication function.
  • The mobile phone module 72 is compatible with the third-generation (3rd Generation) digital mobile phone system conforming to the IMT-2000 (International Mobile Telecommunication 2000) standard defined by the ITU (International Telecommunication Union), and connects to the mobile phone network 4.
  • A SIM card 74, in which a unique ID number for specifying the telephone number of the mobile phone is recorded, is inserted into the SIM card slot 41. With the SIM card 74 inserted into the SIM card slot 41, the mobile phone module 72 can communicate with the mobile phone network 4.
  • the CPU (Central Processing Unit) 60 executes a program loaded in the main memory 64.
  • a GPU (Graphics Processing Unit) 62 performs calculations necessary for image processing.
  • the main memory 64 is composed of a RAM (Random Access Memory) or the like, and stores programs, data, and the like used by the CPU 60.
  • the storage 66 is configured by a NAND flash memory (NAND-type flash memory) or the like, and is used as a built-in auxiliary storage device.
  • the motion sensor 67 detects the movement of the information processing apparatus 10, and the geomagnetic sensor 68 detects the geomagnetism in the triaxial direction.
  • the GPS control unit 69 receives a signal from a GPS satellite and calculates a current position.
  • the front camera 30 and the rear camera 31 capture an image and input image data.
  • the front camera 30 and the rear camera 31 are constituted by CMOS image sensors (Complementary Metal Oxide Semiconductor Image Sensor).
  • the display device 20 is an organic EL display device and has a light emitting element that emits light by applying a voltage to the cathode and the anode. In the power saving mode, the voltage applied between the electrodes is made lower than usual, so that the display device 20 can be dimmed and power consumption can be suppressed.
  • the display device 20 may be a liquid crystal panel display device provided with a backlight. In the power saving mode, by reducing the amount of light from the backlight, the liquid crystal panel display device can be in a dimmed state and power consumption can be suppressed.
  • The operation unit 70 includes the various operation means of the information processing apparatus 10: specifically, the operation buttons 22, the direction key 23, the analog sticks 24, the L button 26a, the R button 26b, the HOME button 27, the START button 28, the SELECT button 29, the power button 33, the − button 36a, and the + button 36b.
  • the front touchpad 21 and the rear touchpad 32 are multi-touchpads, and the front touchpad 21 is disposed on the surface of the display device 20.
  • the speaker 25 outputs sound generated by each function of the information processing apparatus 10, and the microphone 39 inputs sound around the information processing apparatus 10.
  • the audio input / output terminal 38 inputs stereo sound from an external microphone and outputs stereo sound to external headphones or the like.
  • A game card 76 in which a game file is recorded is inserted into the game card slot 34.
  • the game card 76 has a recording area in which data can be written.
  • data is written / read by the media drive.
  • a memory card 78 is inserted into the memory card slot 37.
  • the multi-use terminal 40 can be used as a USB terminal, and is connected to a USB cable 80 to transmit / receive data to / from another USB device.
  • a peripheral device is connected to the accessory terminal 35.
  • The information processing apparatus 10 executes information processing such as games and the creation of electronic data, output of various contents such as electronic books, web pages, videos, and music, and communication, in accordance with user operations.
  • A necessary program may be loaded from the various internal storage devices into the main memory 64 and all processing may be performed within the information processing apparatus 10 under the control of the CPU 60, or part of the processing may be requested from a server connected via a network and carried out while receiving the results.
  • various types and types of processing executed by the information processing apparatus 10 are conceivable and not particularly limited. Hereinafter, a case where a game is executed will be described as an example.
  • For example, a card with a pattern associated with a game is placed in an arbitrary location, such as on a table in a room, and is photographed with the rear camera 31 of the information processing apparatus 10.
  • When the information processing apparatus 10 detects the presence of the card in the captured image, it starts the game associated with that card.
  • The user continues to capture the real space, and the information processing apparatus 10 superimposes images of tools and characters used in the game on the captured image of the real space and displays the result on the display device 20.
  • the user progresses the game by operating the information processing apparatus 10 while photographing the real space. Thereby, for example, it is possible to enjoy a battle game with a virtual character on the stage of one's own room.
  • The card used as the trigger to start a game need only be something that can be detected in a captured image and associated with a game in advance.
  • It may be a card or a three-dimensional object bearing at least one of a predetermined picture, character, or figure, or a card or solid object having a predetermined shape.
  • A display on which at least one of a predetermined shape, picture, character, or figure is shown, or an electronic device having such a display, may also be used. Hereinafter, these are collectively referred to as "markers".
  • A marker may be used alone, or one game may be selected by combining a plurality of markers.
  • Furthermore, the target associated with a marker does not have to be a game as a unit: it may be a unit larger than a game, such as an application including the game, or a smaller unit, such as a command to be input in the game, various parameters that determine the game environment, or a character or object to be drawn.
  • FIG. 4 shows functional blocks of the information processing apparatus 10.
  • In terms of hardware, each functional block included in the control unit 100 can be configured by the CPU 60, the GPU 62, the main memory 64, and the like described above; in terms of software, the blocks are realized by programs loaded into the main memory 64 from recording media loaded into the various storage devices of the information processing apparatus 10. Therefore, it is understood by those skilled in the art that these functional blocks can be realized in various forms by hardware only, software only, or a combination thereof, and are not limited to any one of them.
  • The control unit 100 includes an input information acquisition unit 102 that acquires information related to user operations, a captured image acquisition unit 104 that acquires captured image data, a captured image storage unit 106 that stores the captured image data, an image analysis unit 108 that analyzes the captured image to perform marker detection and space definition, a marker correspondence information storage unit 109 that stores information relating to the markers to be detected, an information processing unit 114 that performs information processing corresponding to the marker, an object data storage unit 116 that stores information relating to the objects to be drawn, and an output image generation unit 118 that generates an output image by superimposing object images on the captured image.
  • The input information acquisition unit 102 receives information related to user operations performed on the information processing apparatus 10 as input signals from the operation unit 70, the front touchpad 21, and the motion sensor 67, converts each input signal into operation content information in accordance with predetermined rules, and supplies it to the captured image acquisition unit 104, the image analysis unit 108, and the information processing unit 114. As will be described later, operation in the present embodiment is basically based on interaction with the image world displayed on the display device 20, so operation via the operation unit 70 of the information processing apparatus 10 is preferably kept to a minimum, such as for initial operations.
  • the captured image acquisition unit 104 acquires captured image data by causing the rear camera 31 to start capturing in accordance with a process start operation by the user. By using the rear camera 31, the user can naturally perform operations for moving the information processing apparatus 10 while viewing the display device 20 to change the field of view or progress the game.
  • the captured image acquisition unit 104 acquires captured image data at a predetermined rate in real time, appropriately assigns an identification number, supplies it to the image analysis unit 108 and stores it in the captured image storage unit 106.
  • the identification number is transmitted between the functional blocks in order to identify the captured image to be processed in a series of subsequent processes.
  • The image analysis unit 108 includes a marker detection unit 110 that detects a marker present in the field of view of the camera, and a space definition unit 112 that tracks the objects existing in real space and the position and orientation of the camera and defines the coordinate system used for graphics drawing.
  • the marker detection unit 110 detects a marker in the captured image based on the marker information registered in the marker correspondence information storage unit 109.
  • As the marker detection technique, any of the various object recognition methods that have been proposed in the field of computer vision or the like may be employed.
  • For example, the Random Forest method or the Random Ferns method can be used as a feature-based recognition method. Specifically, a group of decision trees is trained using local patches randomly sampled from a prepared marker image, local patches containing feature points detected from the captured image are input to the decision trees, and the presence probability distribution of the marker is obtained from the outputs at the leaf nodes (a minimal code sketch of this approach is given below).
  • the marker correspondence information storage unit 109 stores data representing marker characteristics in accordance with the technique used for marker detection.
  • The above technique has the notable advantage that there is no need to embed a dedicated detection code in the marker, so that an arbitrary image or three-dimensional object can be used as the marker; however, this is not intended to limit the form of the marker.
  • Alternatively, the marker detection unit 110 may detect the marker by using a marker in which a dedicated code is written and reading that code.
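  • The following is a minimal, self-contained sketch (in Python, not taken from the patent) of the feature-based recognition described above, using Random Ferns: each fern is a fixed set of random pixel-pair comparisons, leaf histograms are filled from patches sampled from registered marker images, and a query patch yields a presence probability distribution over marker classes. The patch size, fern count, and random training data are illustrative assumptions.

```python
import numpy as np

PATCH = 16      # local patch size in pixels (assumed)
N_FERNS = 20    # number of ferns (assumed)
N_TESTS = 8     # binary pixel comparisons per fern -> 2**N_TESTS leaf bins

rng = np.random.default_rng(0)
# Each fern is a fixed set of random pixel-pair comparisons inside a patch.
tests = rng.integers(0, PATCH, size=(N_FERNS, N_TESTS, 4))   # rows of (y1, x1, y2, x2)

def fern_indices(patch):
    """Map one grayscale patch to a leaf index per fern."""
    idx = np.zeros(N_FERNS, dtype=int)
    for f in range(N_FERNS):
        for t, (y1, x1, y2, x2) in enumerate(tests[f]):
            if patch[y1, x1] > patch[y2, x2]:
                idx[f] |= 1 << t
    return idx

class FernClassifier:
    def __init__(self, n_classes):
        # Per-fern leaf histograms with Laplace smoothing.
        self.hist = np.ones((N_FERNS, 2 ** N_TESTS, n_classes))

    def train(self, patch, label):
        # Patches are randomly sampled from the registered marker images.
        for f, leaf in enumerate(fern_indices(patch)):
            self.hist[f, leaf, label] += 1

    def predict(self, patch):
        # Summing log posteriors over ferns, read off at the leaf nodes,
        # gives the presence probability distribution over marker classes.
        p = self.hist / self.hist.sum(axis=2, keepdims=True)
        logp = np.zeros(self.hist.shape[2])
        for f, leaf in enumerate(fern_indices(patch)):
            logp += np.log(p[f, leaf])
        q = np.exp(logp - logp.max())
        return q / q.sum()

# Toy usage with random data standing in for marker / background patches.
clf = FernClassifier(n_classes=2)
for label in (0, 1):
    for _ in range(100):
        clf.train(rng.integers(0, 256, (PATCH, PATCH)), label)
print(clf.predict(rng.integers(0, 256, (PATCH, PATCH))))
```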
  • The space definition unit 112 analyzes the captured image to track the environmental shape formed by the objects existing in the real space (captured space) being photographed, and the position and orientation of the camera photographing them. It then defines a three-dimensional world coordinate system in the captured space, and sequentially defines the camera coordinate system in accordance with the movement of the camera. As a technique for tracking the environmental shape and the position and orientation of the camera, for example, the SLAM (Simultaneous Localization And Mapping) method is used.
  • the SLAM method is a method of tracking the movement of a feature point for each local patch including the feature point detected from a captured image and updating a predetermined state variable at each time step based on the tracking.
  • By using, as the state variables, the position and orientation (rotation angle) of the camera, its moving speed and angular velocity, the positions of one or more feature points of objects existing in the captured space, and the like, the positional relationship (distance and angle) between the captured space and the sensor surface of the camera, and hence the positional relationship between the world coordinate system and the camera coordinate system, can be acquired for each captured image.
  • the camera may be a stereo camera, or an infrared irradiation unit and an infrared sensor may be separately provided, and the distance from the camera to the subject may be acquired by a known method to acquire the environmental shape and the like.
  • the position and orientation of the camera may be calculated based on output signals from the motion sensor 67 and the geomagnetic sensor 68 provided in the information processing apparatus 10.
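  • As a minimal illustration (Python, not from the patent) of the kind of state tracking described above, the sketch below keeps the camera position, orientation, velocity, and angular velocity as state variables and propagates them each frame with a constant-velocity motion model; in a full SLAM or sensor-fusion implementation, the tracked feature points or the outputs of the motion sensor 67 and the geomagnetic sensor 68 would then be used to correct this prediction.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

class CameraState:
    """State variables tracked per frame: camera pose plus linear/angular velocity."""
    def __init__(self):
        self.position = np.zeros(3)                         # camera position in world coordinates
        self.orientation = np.array([1.0, 0.0, 0.0, 0.0])   # rotation as a quaternion
        self.velocity = np.zeros(3)
        self.angular_velocity = np.zeros(3)

    def predict(self, dt):
        """Constant-velocity motion model for one time step (the prediction step)."""
        self.position = self.position + self.velocity * dt
        w = self.angular_velocity * dt                      # small rotation over dt
        dq = np.array([1.0, 0.5 * w[0], 0.5 * w[1], 0.5 * w[2]])
        self.orientation = quat_mul(self.orientation, dq)
        self.orientation /= np.linalg.norm(self.orientation)

state = CameraState()
state.velocity = np.array([0.0, 0.0, 0.1])                  # e.g. moving 10 cm/s toward the table
state.predict(dt=1 / 60)
print(state.position)
```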
  • Various means for image analysis performed by the marker detection unit 110 and the space definition unit 112 are conceivable as described above, and are described in, for example, Patent Document 1. Therefore, detailed description thereof is omitted here.
  • the information processing unit 114 executes information processing such as a game associated with the marker detected by the marker detection unit 110. Therefore, the marker correspondence information storage unit 109 stores a marker and a game that starts when the marker is detected in association with each other. The marker correspondence information storage unit 109 further stores an object to be drawn in each game in association with each other. The information processing unit 114 determines the placement of each object in the world coordinate system defined by the space definition unit 112 and requests the output image generation unit 118 to generate an output image including drawing of the object.
  • the output image generation unit 118 reads the captured image data from the captured image storage unit 106, and draws an object on the captured image to generate an output image.
  • the object model data is stored in the object data storage unit 116 together with the identification information.
  • The drawing process performed here converts an object placed in the world coordinate system into the camera coordinate system defined from the position and orientation of the camera and projects it onto the screen; the basic processing procedure may be the same as in general computer graphics.
  • the generated output image data is output to the display device 20 via the frame memory and displayed immediately. Note that during a period in which no marker is detected in the captured image, the output image generation unit 118 may use the captured image as it is as an output image.
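  • The sketch below (Python, with assumed pinhole intrinsics for an illustrative 960×544 screen) shows this conversion and projection for a single point: a world-coordinate position is transformed into the camera coordinate system from the camera pose and then projected onto the screen, giving the pixel at which the object image is composited over the captured image.

```python
import numpy as np

def world_to_screen(p_world, R, t, fx=800.0, fy=800.0, cx=480.0, cy=272.0):
    """Project a world-coordinate point onto the screen.

    R, t   : camera orientation (3x3 rotation) and position in world coordinates.
    fx..cy : assumed pinhole intrinsics for an illustrative 960x544 display.
    """
    p_cam = R.T @ (np.asarray(p_world, dtype=float) - t)   # world -> camera coordinates
    if p_cam[2] <= 0:
        return None                                        # behind the camera, not drawn
    u = fx * p_cam[0] / p_cam[2] + cx                      # perspective projection
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v                                            # pixel over the captured image

# Example: a virtual object 30 cm in front of a camera at the world origin looking along +Z.
print(world_to_screen([0.0, 0.0, 0.3], np.eye(3), np.zeros(3)))   # -> (480.0, 272.0)
```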
  • The object data storage unit 116 further stores information that defines the actions of the characters included as objects. Although a specific example will be described later, in the present embodiment the distance and relative angle between the camera and a character can be specified sequentially, so when those parameters satisfy a predetermined condition, the character is made to perform a predetermined motion (hereinafter referred to as a "specific motion"). Thereby, interaction between the camera moved by the user and the virtual character is realized.
  • FIG. 5 is a diagram for explaining an example of the usage environment of the information processing apparatus 10 in the present embodiment.
  • a real space 120 is provided with a table 122, on which a clock 124 and a pencil stand 126 are placed.
  • The user places the marker 128a on the table 122 and images it with the rear camera 31 of the information processing apparatus 10.
  • the marker 128a is a card on which a picture of a cat is drawn.
  • the field of view of the rear camera 31 is an area indicated by a dotted rectangle 130, for example.
  • FIG. 6 illustrates the relationship between the real space and the coordinate system for drawing the object, and shows a state where the real space 120 in FIG. 5 is viewed from the upper right.
  • A clock 124, a pencil stand 126, and a marker 128a are placed on the table 122, and the sensor surface of the camera coincides with the screen 140.
  • The space definition unit 112 reads the actual size of the marker from the marker correspondence information storage unit 109 and compares it with the image of the marker in the captured image. Thereby, the initial values of the distance and rotation angle from the marker 128a to the screen 140, and hence of the position and orientation of the camera, can be derived (a minimal sketch of the distance estimate is given below).
  • As a result, the relationship between the world coordinate system and the camera coordinate system is determined, making it possible to project an object placed in the world coordinate system onto the screen 140. Further, the space definition unit 112 tracks feature points in the captured image, such as the marker 128a, the table 122, the clock 124, and the pencil stand 126, by the SLAM method, so that even when the camera moves or changes its posture, the relationship between the two coordinate systems can be obtained sequentially. Thereby, changes in the object image according to the movement of the camera can be expressed accurately.
  • In many cases, the plane on which the marker 128a lies is a table top or a floor surface, and by defining that plane as a reference plane, objects can be expressed as if placed on that surface.
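  • A minimal sketch (Python, with an assumed focal length in pixels) of the size-comparison idea mentioned above: under a pinhole-camera model, the registered real width of the marker and its apparent width in the captured image give an initial estimate of the camera-to-marker distance; a full implementation would also recover the rotation, for example from the marker's four corners.

```python
def marker_distance(real_width_mm, pixel_width, focal_length_px=800.0):
    """Pinhole-camera estimate of the camera-to-marker distance in millimetres."""
    return focal_length_px * real_width_mm / pixel_width

# A 55 mm-wide card that appears 110 px wide is roughly 400 mm from the camera.
print(marker_distance(55.0, 110.0))   # -> 400.0
```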
  • FIG. 7 is a flowchart illustrating a processing procedure in which the information processing apparatus 10 executes a game with object drawing triggered by marker detection.
  • The information processing apparatus does not start the game itself immediately in response to the marker, but starts processing the game and other functions after receiving various mode selections related to the game. Therefore, as a concept encompassing these, what the information processing apparatus 10 starts is referred to as an "application".
  • the captured image acquisition unit 104 starts acquisition of captured image data by causing the rear camera 31 to start capturing (S10). Thereafter, the captured image acquisition unit 104 sequentially acquires captured images (image frames) at a predetermined rate.
  • the captured image data is supplied to the image analysis unit 108 and stored in the captured image storage unit 106.
  • the output image generation unit 118 reads the photographed image data from the photographed image storage unit 106 and outputs it to the display device 20 to display the photographed image as it is like the electronic viewfinder (S12). Thereby, the user can move the information processing apparatus 10 while confirming the image photographed by the rear camera 31 with the display device 20, and can obtain a desired visual field.
  • the marker detection unit 110 of the image analysis unit 108 performs marker detection processing by analyzing the captured image (S14). Therefore, the marker detection unit 110 refers to data representing the feature of each marker registered in the marker correspondence information storage unit 109.
  • While no marker is detected, the output image generation unit 118 receives a notification to that effect from the image analysis unit 108 and continues to output the captured image as it is to the display device 20 (N in S16, S12).
  • When a marker is detected, the space definition unit 112 determines the world coordinate system as shown in FIG. 6 and sets the screen 140 with respect to that world coordinate system according to the position and orientation of the camera (S18). Since the position and orientation of the camera may change constantly, their movement is tracked by the SLAM method described above.
  • the marker detection unit 110 identifies an application associated with the detected marker by referring to the marker correspondence information storage unit 109, and notifies the information processing unit 114 of identification information of the application (S20). In response to this, the information processing unit 114 executes the notified application in cooperation with the output image generation unit 118 (S22).
  • That is, an object such as a character or a game tool associated with the application is placed in the world coordinate system and drawn by projecting it onto the screen 140, and the resulting image is superimposed on the captured image to generate an output image.
  • the character is caused to perform a specific action according to the movement of the camera, or the game is advanced according to a user operation.
  • the generated output image is sequentially displayed on the display device 20, so that an image that changes according to the progress of processing such as camera movement or game can be displayed.
  • Until the user performs an operation to stop the overall processing, such as ending the shooting, the shooting, the marker detection, and the execution of the corresponding application are continued (N in S26, S12 to S22).
  • When an operation to end the shooting is performed, the series of processing ends (Y in S26). A structural sketch of this per-frame flow is given below.
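  • The following is a minimal structural sketch (Python; the unit objects and their method names are illustrative placeholders, not an API defined by the patent) of the flow of S10 to S26 described above.

```python
def run(camera, marker_detector, space_definer, info_processor, display, exit_requested):
    """One possible shape of the FIG. 7 loop; every argument is a placeholder object."""
    camera.start()                                            # S10: start shooting
    while not exit_requested():                               # N in S26
        frame = camera.capture()                              # acquire an image frame
        display.show(frame)                                   # S12: viewfinder-style display
        marker = marker_detector.detect(frame)                # S14: marker detection
        if marker is None:                                    # N in S16
            continue                                          # keep showing the raw frame
        space_definer.update(frame)                           # S18: world/camera coordinate systems
        app = marker_detector.lookup_application(marker)      # S20: identify the application
        output = info_processor.run_application(app, frame)   # S22: execute it and draw objects
        display.show(output)
```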
  • FIG. 8 shows an example of a screen displayed on the display device 20 at the application execution stage of S22 in FIG. 7.
  • This figure shows, for example, one screen of an application that has started execution as a result of detecting the marker 128a after photographing the real space 120 shown in FIG.
  • In the screen example 148, a cat character 150a, which is an object associated with the application or with the marker 128a, and icons 152a, 152b, and 152c are drawn so as to surround the marker 128b in the captured image.
  • Here, anything for which designating the corresponding area starts an associated process or selects a corresponding item is called an "icon", regardless of its purpose of use, the process to be started, the type of selection target, or its shape. Therefore, button graphics such as those shown in the figure are also called "icons".
  • the icons 152a, 152b, and 152c are for selecting a function to be executed by the application.
  • Since the character 150a and the icons 152a, 152b, and 152c are first placed in the world coordinate system and then projected and drawn onto the screen that matches the sensor surface of the camera, they can be expressed as if actually placed on the table in the real space.
  • the information processing unit 114 associates an area in the front touchpad 21 corresponding to the position of the icons 152a, 152b, and 152c on the display screen with the function represented by each icon. Thereby, when the user touches the front touchpad 21 at the position of a desired icon, the information processing unit 114 starts processing of the corresponding function. Since the icons 152a, 152b, and 152c are drawn as if they are placed on a table in the real world, the position of the icon on the screen changes when the field of view of the camera changes. Therefore, the information processing unit 114 constantly acquires information related to the drawing area of each icon from the output image generation unit 118, and updates the detection area of each icon on the front touchpad 21 according to the movement of the camera.
  • the icons 152a, 152b, and 152c may also have a three-dimensional shape.
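  • A minimal sketch (Python; the class and rectangle values are illustrative, not part of the patent) of the icon hit-testing just described: because the icons are drawn as if lying on the table, their on-screen rectangles change with the camera's field of view, so the touch-detection areas on the front touchpad are refreshed from the latest drawing result before each touch is resolved.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

class IconTouchMap:
    def __init__(self):
        self.areas = {}                         # icon name -> current screen-space Rect

    def update_from_drawing(self, drawn_rects):
        """Called every frame with each icon's projected bounding box."""
        self.areas = dict(drawn_rects)

    def resolve(self, touch_x, touch_y):
        """Return the icon (and hence the function) under a front-touchpad contact, if any."""
        for name, rect in self.areas.items():
            if rect.contains(touch_x, touch_y):
                return name
        return None

touch_map = IconTouchMap()
touch_map.update_from_drawing({"how to play": Rect(100, 300, 120, 40),
                               "battle": Rect(240, 300, 120, 40)})
print(touch_map.resolve(150, 320))              # -> "how to play"
```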
  • FIG. 9 shows an example of the structure of data stored in the marker correspondence information storage unit 109, which is referred to when the marker detection unit 110 performs marker detection and identifies the corresponding application in S16 and S20 of FIG. 7.
  • the marker correspondence information 300 includes an identification information column 302, a feature amount information column 304, a size column 306, and a corresponding application column 308.
  • In the identification information column 302, an identification number assigned to each registered marker is stored.
  • In the feature amount information column 304, identification information of a template image of the marker or of data representing its feature amounts is stored.
  • In the figure, the names of images such as "cat image" and "fox image" are shown.
  • The data bodies, such as the images and feature amounts, are stored separately in association with the image names.
  • The feature amount information column 304 may store identification numbers of the image data or feature amount data, the data itself, or a pointer indicating the storage address of the data.
  • The size column 306 stores the size of each marker. In the case of the figure, since a rectangular card is assumed as the marker, the size is described in the format "vertical length × horizontal length (mm)", but the format can vary depending on the shape of the marker. A combination of a plurality of parameters, such as size and shape, may also be used.
  • the corresponding application column 308 stores the identification information of the application associated with each marker. Although the names of games such as “air hockey game” and “card game” are shown in the same figure, an application identification number, a software main body, or a pointer indicating a storage address of the software may be stored.
  • the marker detection unit 110 reads out each registered marker image or its feature amount based on the identification information in the feature amount information column 304, and uses it to detect a marker in the captured image. By using the above method for detection, it is possible to detect a marker with high robustness against a change in magnification.
  • the marker detection unit 110 identifies an application associated with the detected marker with reference to the corresponding application column 308.
  • The space definition unit 112 defines the world coordinate system in accordance with the unit length of the real space based on the marker size described in the size column 306, and acquires the positional relationship between the subject, including the marker, and the camera. A minimal sketch of this correspondence table is given below.
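  • The sketch below (Python; the first two rows mirror the examples named in FIG. 9, while the card dimensions and the third row are illustrative assumptions) shows one way the marker correspondence information might be held and used to look up the application to start.

```python
# Rows "001" and "002" follow the examples named in FIG. 9; the third row and
# the 55 x 86 mm card size are purely illustrative.
MARKER_CORRESPONDENCE = [
    {"id": "001", "feature_data": "cat image", "size_mm": (55, 86), "application": "air hockey game"},
    {"id": "002", "feature_data": "fox image", "size_mm": (55, 86), "application": "air hockey game"},
    {"id": "003", "feature_data": "bear image", "size_mm": (55, 86), "application": "card game"},
]

def application_for(marker_id):
    """Look up the application to start when the given marker is detected."""
    for entry in MARKER_CORRESPONDENCE:
        if entry["id"] == marker_id:
            return entry["application"]
    return None

print(application_for("002"))   # -> "air hockey game", shared with marker "001"
```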
  • the marker and application do not have to correspond one-to-one.
  • the marker with the identification information “001” and the marker with the identification information “002” both correspond to the “air hockey game”.
  • a combination of multiple markers may correspond to one application.
  • a predetermined application or game can be executed only when a plurality of markers are collected, and enjoyment of marker collection can also be provided.
  • A combination of a plurality of stickers collected on a single card, as in a stamp rally, may also be used.
  • Depending on the sizes of the markers, such as the marker with identification information "001" and the marker with identification information "002" in FIG. 9, the size of the object representing the game environment, such as the air hockey table in the air hockey game, may be changed.
  • Since the unit length in the world coordinate system can be made to correspond to the unit length in the real space based on the marker size as described above, each object can be drawn assuming an arbitrary size in the real space.
  • The size of an object may also be changed depending on the spacing between placed markers.
  • FIG. 10 is a flowchart illustrating an example of a processing procedure in which the information processing unit 114 executes the application corresponding to the marker in S22 of FIG. 7. Note that it is understood by those skilled in the art that the processing procedure and processing content can be variously changed depending on the content of the application, and the present embodiment is not limited to this. Further, the flowchart of FIG. 10 shows the processing procedure for the application itself; the processing performed in parallel for changes in the position and orientation of the camera will be described later.
  • First, the information processing unit 114 identifies the characters, icons, and other objects associated with the application, and the output image generation unit 118 draws those objects on the captured image (S30). Modeling data for each object is read from the object data storage unit 116 in accordance with a request from the information processing unit 114.
  • the initial screen displayed thereby is, for example, the screen example 148 shown in FIG.
  • When an operation to display the explanation is performed, the output image generation unit 118 reads the explanatory note image from the object data storage unit 116 and superimposes it on the captured image (S34). At this time, it is desirable not to present the explanatory image as a flat overlay but to display it without destroying the real-space world shown in the captured image. For example, by texture-mapping the explanatory image onto the rectangular area of the captured image where the marker appears, an expression is obtained as if an instruction card were placed at the marker position.
  • the display is continued until the user performs an operation to end the display of the description (N in S36, S34).
  • If an operation to end the display of the explanation is performed and the battle mode is selected, the process shifts to the battle mode of S42 to S48 (Y in S36, Y in S40).
  • If the watching mode is selected, the process proceeds to the watching mode of S52 to S58 (Y in S36, N in S40, Y in S50). If neither mode is selected when the display of the explanatory note is ended, the display is returned to the initial screen displayed in S30, unless an operation for terminating the application itself is performed (Y in S36, N in S40, N in S50, N in S38, S30). An icon for returning the display to the initial screen may be displayed together with the explanation.
  • In the battle mode, the initial positions and sizes of the objects to be drawn during the battle are determined in the world coordinate system, and the objects are projected onto the screen and superimposed on the captured image (S42). Then, the game is advanced while appropriately moving the objects of the game tools and the opponent character in accordance with user operations on the information processing apparatus 10 (S44).
  • the information processing unit 114 calculates the movement of the object, and the output image generation unit 118 updates the display image by drawing the object on the captured image at a predetermined frame rate (S46).
  • In the present embodiment, shooting and displaying the real space with the camera provided in the information processing apparatus 10 constitute part of the user interface. Therefore, by using the movement of the information processing apparatus 10 itself, detected by the motion sensor 67, as a means of user operation in the game, a sense of unity is created in the series of operations including camera shooting, which makes the operation easy for the user to understand.
  • other operation means may be used as appropriate.
  • the processes of S44 and S46 are continued until the battle is ended or the user performs an end operation (N in S48, S44, S46).
  • When the battle ends or the user performs an end operation, the display is returned to the initial screen displayed in S30, unless an operation for ending the application itself is performed (Y in S48, N in S38, S30).
  • an initial position and a size of an object to be drawn at the time of watching are determined in the world coordinate system, and are projected onto the screen to be superimposed on the captured image (S52).
  • a situation in which a plurality of characters start a battle is displayed.
  • the game is automatically advanced so that the characters battle each other according to the set level (S54).
  • the information processing unit 114 calculates the movement of the object, and the output image generation unit 118 updates the display image by drawing the object on the captured image at a predetermined frame rate (S56).
  • the processes of S54 and S56 are continued until the battle between the characters ends or the user performs an operation to end the watching (N of S58, S54, S56).
  • the display is returned to the initial screen displayed in S30 (Y in S58, N in S38, S30) unless an operation for ending the application itself is performed. If an operation for terminating the application is performed at any stage, the processing of the application is terminated (Y in S38).
  • the application termination operation may be accepted at any time during the execution of the application.
  • FIG. 11 shows an example of the explanatory note display screen displayed in S34 of FIG. 10. This screen is displayed after the "how to play" icon 152a is touched on the initial screen shown in FIG. 8, for example. Further, in this example, it is assumed that, based on the captured image of the real space 120 shown in FIG. 5, the marker with identification information "001" in the marker correspondence information 300 of FIG. 9 is detected and the "air hockey game" application is being executed.
  • a description image 164a, a cat character 150b, an icon 162a for returning the description to the previous page, and an icon 162b for proceeding to the next page are drawn on the photographed image.
  • Image data such as the image 164b is stored in the object data storage unit 116.
  • the description image may be composed of a plurality of pages, in which case the icons 162a and 162b are drawn.
  • As shown in the figure, the description image can be expressed as if placed on the table in the real space by texture-mapping it onto the region where the marker originally appears.
  • the image to be prepared as the description image may be a still image as shown in the figure or a moving image.
  • In this example, since an application including a game is assumed, an image explaining how to play is displayed.
  • However, the content to be displayed is not limited to this; the displayed content may vary depending on the content of the application, such as an electronic book, a web page, or a movie.
  • the icon 162a, the icon 162b, and the character 150b are drawn by the same procedure as described in FIG.
  • When the page is switched, the description image 164a is replaced by crossfading to the previous or next page image, which gives a natural expression.
  • a page turning animation may be inserted.
  • the character 150b stands up and moves from the state on the initial screen shown in FIG. 8 so that the explanatory note can be seen. Furthermore, it may be expressed as if the character 150b is changing explanations or turning pages.
  • FIG. 12 shows a screen change example when the user brings the camera closer to the marker in the real space from the state where the screen example 160 shown in FIG. 11 is displayed.
  • the explanatory note image 164c, the icons 162c and 162d, and the character 150c are the same objects as the explanatory note image 164a, the icons 162a and 162b, and the character 150b in the screen example 160 of FIG.
  • Since the position of each object in the world coordinate system is fixed, when the camera is brought close to it, the object is shown in close-up or part of it extends beyond the field of view, just as in real space. This gives the impression that the explanatory note and the icons are actually placed on the table.
  • If the explanatory note is small and difficult to read, the user can view it close up by bringing the camera closer.
  • FIG. 13 shows a screen example in the battle mode displayed in S46 of FIG. 10. This screen is displayed, for example, immediately after the "battle" icon 152b is touched on the initial screen shown in FIG. 8, or during the battle. In this example as well, it is assumed that the "air hockey game" application is being executed, as in FIG. 11.
  • An air hockey table 182, a puck 186, mallets 184a and 184b, and a cat character 150c are drawn on the captured image.
  • The mallet 184b on the user's side moves on the air hockey table 182 in the same direction as the information processing apparatus 10 is moved, thereby returning the puck 186.
  • That is, the movement of the information processing apparatus 10 is detected by the motion sensor 67, and its movement amount and speed are converted into the movement amount and speed of the mallet 184b drawn on the screen.
  • The conversion rule is also stored in the object data storage unit 116, and the information processing unit 114 refers to it to determine the movement of the mallet at each time step and the resulting movement of the rebounding puck. This makes it possible to vary the movement of the puck in the same way as in actual air hockey, for example by moving the mallet left or right at the moment of impact to change the angle at which the puck is hit, or by moving the mallet forward to smash.
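  • A minimal sketch (Python; the scale factor, clamping range, and function name are illustrative stand-ins for the conversion rule held in the object data storage unit 116) of the conversion just described, turning the device translation reported by the motion sensor into the lateral movement and speed of the user's mallet at each time step.

```python
def update_mallet(mallet_x, device_dx, dt, scale=1.5, table_half_width=0.30):
    """Convert device translation into mallet movement for one time step.

    mallet_x and device_dx are in metres; scale and table_half_width are assumed
    values standing in for the stored conversion rule.
    """
    new_x = mallet_x + device_dx * scale
    new_x = max(-table_half_width, min(table_half_width, new_x))   # keep the mallet on the table
    speed = (new_x - mallet_x) / dt if dt > 0 else 0.0             # used when resolving the puck rebound
    return new_x, speed

# The device moved 4 cm to the right during one 60 fps frame.
print(update_mallet(0.0, 0.04, 1 / 60))
```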
  • The opponent cat character 150c is the same character as the one drawn on the initial screen of FIG. 8. In the battle mode, the character 150c is drawn so as to face the camera, that is, the user, across the air hockey table.
  • The cat character 150c is also depicted as holding a virtual information processing apparatus and moving its own mallet 184a by moving that apparatus. As a result, it is possible to produce a sense of reality, as if the user were actually playing against the virtual character.
  • The air hockey table 182 is drawn so as to have a predetermined direction and position with respect to the marker 128c in the captured image.
  • The air hockey table 182 in the figure is shown with only its top plate floating in the air, but a portion below the top plate may be added to express it as a three-dimensional structure, or a scoreboard or the like may be further represented.
  • By making the top plate portion of the air hockey table 182 transparent or translucent, the concealed portion of the captured image is reduced, giving a stronger impression that the game is being played in the real space.
  • The air hockey table 182 may be placed directly above the marker 128c, but when the top plate portion is transparent or translucent, it is desirable to shift it so that the puck 186 and the like do not become hard to see because of the design of the marker.
  • In the present embodiment, the relative relationship between the camera and the world coordinate system can be specified based on the objects around the marker, such as the clock and the pencil stand. Therefore, once the marker has been detected and the air hockey table 182 has been arranged to correspond to it, even if the marker leaves the field of view due to a game operation or the like, or the user removes the marker, the air hockey table 182 can be drawn at the same position in the world coordinate system based on its relative position with respect to the surrounding objects. The same applies to other objects such as characters and icons.
  • FIG. 14 shows a screen example in the watching mode displayed in S56 of FIG. 10. This screen is displayed, for example, immediately after the "watching" icon 152c is touched on the initial screen shown in FIG. 8, or during watching. This example also assumes that the "air hockey game" application is being executed, as in FIGS. 11 and 13.
  • An air hockey table 194, a puck 198, mallets 196a and 196b, a cat character 150d, and a fox character 192 are drawn on the captured image.
  • drawing is performed so that the cat character 150d and the fox character 192 face each other with the air hockey table 194 interposed therebetween.
  • Each of them moves their mallet 196a, 196b by moving a virtual information processing apparatus.
  • the user can watch this pattern from various distances and angles by changing the position and posture of the camera.
  • Since the air hockey table is arranged directly above the marker 128d in the real space, the user actually moves around the marker 128d to change the distance and angle of the camera.
  • the cat character 150d is the same as the character drawn in the initial screen of FIG. 8 or the battle mode of FIG.
  • The level set for the cat character 150d is raised; that is, the user creates a situation in which the cat character 150d is trained as a player. Then, by having it battle other characters in the watching mode, it is possible to provide enjoyment such as cheering with the feeling that the cat character 150d one has raised is participating in an external match, or motivation for further strengthening it in the battle mode.
  • FIG. 15 shows an example of the structure of data stored in the marker correspondence information storage unit 109, which is referred to when the information processing unit 114 specifies the objects to be drawn corresponding to the marker in S30, S34, S42, and S52 of FIG. 10.
  • As in the marker correspondence information 300 of FIG. 9, a mode is assumed here in which the character to be drawn is changed by the marker even within the same application.
  • elements other than the character may be changed by the marker, or all elements may be uniquely determined by the application without being changed.
  • the processing branch by the marker is completed when the application is identified with reference to the marker correspondence information 300 in FIG. 9, and the information processing unit 114 does not need to refer to the marker correspondence information storage unit 109.
  • the object information 400 includes an application field 402, a marker field 404, a first character field 406, a second character field 408, a display icon field 410, an explanation image field 412, and a tool field 414.
  • the application column 402 stores application identification information.
  • The identification information corresponds to the identification information stored in the corresponding-application column 308 of the marker correspondence information 300 in FIG. 9. Although game names such as “air hockey game” and “card game” are shown in the figure, an application identification number, the software itself, a pointer indicating the storage address of the software, or the like may be stored instead, as in the marker correspondence information 300 of FIG. 9.
  • the marker column 404 stores marker identification numbers.
  • The identification number corresponds to the identification number stored in the identification information column 302 of the marker correspondence information 300 in FIG. 9.
  • the information processing unit 114 identifies the object to be drawn from the object information 400 based on them.
  • The first character column 406 stores identification information of the first character, which appears in all modes in the examples described above.
  • The second character column 408 stores identification information of the second character, which is the opponent in the watching mode in the example of FIG. 14.
  • the first character column 406 and the second character column 408 show the names of character models such as “cat” and “fox”, but they may be identification numbers.
  • the modeling data for each character is stored in the object data storage unit 116 in association with the character identification information.
  • The same notation, and the same relationship to the actual data, apply to the display icon column 410, the explanation image column 412, and the tool column 414.
  • The display icon column 410 stores identification information of the icons drawn in S30 of FIG. 10. In the figure, icon names such as “how to play”, “match”, and “watch” are shown. These correspond to the “how to play” icon 152a, the “match” icon 152b, and the “watch” icon 152c in the screen example 148 of FIG. 8.
  • The explanation image column 412 stores identification information of the explanatory text images drawn in S34 of FIG. 10.
  • In the figure, the names of a group of explanatory images covering a plurality of pages, such as “hockey operation (1), (2), ...”, are shown.
  • One of these pages corresponds to the explanatory images 164a and 164b shown earlier.
  • the tool column 414 stores identification information of game tool objects drawn in the battle mode or the watching mode.
  • In the figure, the names of tool object models such as “air hockey table”, “mallet”, and “puck” are shown. These objects correspond to the air hockey tables 182 and 194, the pucks 186 and 198, and the mallets 184a, 184b, 196a, and 196b in the screen examples of FIGS. 13 and 14.
  • the information processing unit 114 identifies an object to be drawn in each mode of the application with reference to the object information 400, and requests the output image generation unit 118 to draw.
  • the output image generation unit 118 reads the modeling data stored in the object data storage unit 116 based on the identification information, and draws each object.
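As a rough illustration of how such a lookup might be organized, the sketch below models a table in the spirit of the object information 400 and returns the object identifiers to draw for a given application, marker, and mode. All concrete keys and entries are hypothetical examples, not values taken from the disclosure.

```python
# Hypothetical table mirroring the columns of the object information 400
# (application, marker, first character, second character, icons, explanation
# images, tools). The concrete entries are illustrative only.
OBJECT_INFO = {
    ("air_hockey_game", 1): {
        "first_character": "cat",
        "second_character": "fox",
        "display_icons": ["how_to_play", "match", "watch"],
        "explanation_images": ["hockey_operation_1", "hockey_operation_2"],
        "tools": ["air_hockey_table", "mallet", "puck"],
    },
}

def objects_for(app_id: str, marker_id: int, mode: str) -> list:
    """Returns the identifiers of the objects to draw for the given mode."""
    entry = OBJECT_INFO[(app_id, marker_id)]
    if mode == "initial":
        return [entry["first_character"]] + entry["display_icons"]
    if mode == "how_to_play":
        return [entry["first_character"]] + entry["explanation_images"]
    if mode == "battle":
        return [entry["first_character"]] + entry["tools"]
    if mode == "watching":
        return [entry["first_character"], entry["second_character"]] + entry["tools"]
    raise ValueError(f"unknown mode: {mode}")

# The information processing unit would pass each returned identifier to the
# output image generation unit, which loads the modeling data and draws it.
```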
  • the information processing unit 114 arranges an object to be drawn based on the position and size of the marker in the world coordinate system defined by the space definition unit 112. For example, as described above, by preparing markers of a plurality of sizes even in the same application, the size of an object such as a character or an air hockey table may be changed according to the size of the marker.
  • Alternatively, the object size may be changed according to the game status or a user request, regardless of the marker size. For example, when the character the user has raised beats the opponent character in the watching mode, a special battle mode may be provided in which the user can play against the raised character at human size in real space, with the tool objects also rendered at actual size.
  • FIG. 16 shows an example of a screen displayed in the special battle mode.
  • The screen example 200 basically has the same configuration as the screen example 180 of the normal battle mode shown in FIG. 13. The difference is that the air hockey table 202 is actual size and the cat character 150e is the same size as a human being. Therefore, compared with the case of FIG. 13, the user photographs the real space from a position pulled back from the table on which the marker 128d is placed. The arrangement of the air hockey table 202 is also adjusted as appropriate in accordance with its change in size; in the figure, the air hockey table 202 is disposed so as to overlap the marker 128d.
  • FIG. 17 is a flowchart showing a processing procedure for drawing an object in the battle mode of FIG. 10 when it is allowed to change the size of the object in this way.
  • the information processing unit 114 first determines the size and arrangement of objects in the world coordinate system (S70). As described above, the size of the object is uniquely derived from the marker size according to a predetermined rule, or is determined by referring to a setting value for a special battle mode.
  • Next, the information processing unit 114 adjusts the sensitivity of the movement of the tool object with respect to the movement of the information processing apparatus 10 (S72).
  • The range over which the mallet can move differs greatly between a desktop-sized air hockey table as shown in FIG. 13 and a full-size air hockey table as shown in FIG. 16. For this reason, if the rule for converting the movement amount of the information processing apparatus 10 into the movement amount of the mallet were fixed regardless of size, the information processing apparatus 10 would have to be moved unnaturally far when the table is large, and the mallet would move too much when the table is small.
  • the “sensitivity” may be a parameter indicating the magnitude of the response of the tool object to the movement of the information processing apparatus, and variables to be used are not particularly limited, such as a movement amount ratio, a speed ratio, and an acceleration ratio.
  • For example, the ratio Vm/Va of the mallet speed Vm to the speed Va of the information processing apparatus 10 is set to be inversely proportional to the magnification S of the object size. That is, the mallet speed Vm is obtained from the object size magnification S and the speed Va of the information processing apparatus 10 as follows.
  • Vm = kVa / S (where k is a constant of proportionality)
  • the above equation is only an example, and sensitivity adjustment may be performed by various methods.
  • the ratio of the mallet movement amount to the air hockey table width may be calculated from the movement amount of the information processing apparatus 10.
  • The processing of S70 and S72 determines the size and position of the objects, and how the objects move relative to the movement of the information processing apparatus 10. Accordingly, at the start of the game the objects are arranged and drawn on that basis, and the drawing process is then repeated while the objects are moved accordingly (S74).
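The sensitivity rule and the subsequent drawing loop might look like the following sketch, which assumes the Vm = kVa / S relation given above; the device and renderer interfaces and the constant k are placeholders, not an actual API.

```python
def mallet_velocity(device_velocity: float, size_magnification: float,
                    k: float = 1.0) -> float:
    """Sensitivity rule Vm = k * Va / S: the mallet responds less per unit of
    device movement as the object magnification S grows (full-size table),
    and more when S is small (desktop-size table). k is an assumed constant."""
    return k * device_velocity / size_magnification

def game_loop(device, renderer, size_magnification):
    # S70: size and placement already decided; S72: sensitivity fixed by S.
    while device.is_running():
        va = device.velocity_along_table_edge()        # from the motion sensor / SLAM
        vm = mallet_velocity(va, size_magnification)   # S72 conversion
        device.mallet.move_by(vm * device.frame_time)  # S74: move the mallet
        renderer.draw_frame()                          # S74: redraw the scene
```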
  • FIG. 18 shows an example of the structure of data stored in the object data storage unit 116 and referred to by the information processing unit 114 for causing the character to perform a specific action when the application is executed.
  • the specific action setting information 500 includes a character field 502, a target part field 504, a relative angle threshold value field 506, a distance threshold value field 508, and an action field 510.
  • the character column 502 stores identification information of the character that is the subject of action. In the case of the figure, the name of the character model such as “cat” is shown.
  • The identification information corresponds to the identification information stored in the first character column 406 of the object information 400 in FIG. 15.
  • In the target part column 504, the relative angle threshold column 506, and the distance threshold column 508, combinations of conditions for causing each character to perform a specific action are stored. Since these conditions can be set freely with respect to the distance and relative angle between the character and the camera, the content and expression format of the conditions may take various other forms. In the figure, for each part of the character, a condition for determining a situation in which the camera approaches that part within a predetermined angle is shown as an example.
  • FIG. 19 is a diagram for explaining how the conditions set in the specific action setting information 500 of FIG. 18 are expressed.
  • a part 210 represents a part where conditions are set, such as a character's head or hand.
  • An angle θ (0° ≤ θ ≤ 180°) is defined between the normal vector n1 at a reference point 212 of the part (the vertex in the figure) and the normal vector n2 of the screen 214 corresponding to the sensor surface of the camera, and thresholds are set for this angle and for the distance A between the reference point 212 and the center of the screen 214.
  • These conditions represent a situation in which the camera is brought close so as to look into the reference point 212: when the angle θ is equal to or greater than its threshold and the distance A is equal to or less than its threshold, the specific action is generated.
  • In FIG. 18, the target part column 504 stores the target part for which a condition is set and its reference point, the relative angle threshold column 506 stores the threshold for the angle θ, and the distance threshold column 508 stores the threshold for the distance A.
  • the action column 510 stores identification information of a specific action performed by the character when a condition set for the target part is satisfied. In the setting example shown in the figure, when the camera moves so as to look into the top of the cat character, the cat character jumps in the direction of the camera (second line of the specific action setting information 500).
  • In the third line of the specific action setting information 500, the character performs an action of holding out the item it is gripping toward the camera, and the user thereby acquires the item.
  • In the fourth line of the specific action setting information 500, the character speaks a hint or the like related to the game strategy.
  • A program for controlling the actual movement of the character is stored separately in the object data storage unit 116 in correspondence with the identification information of each specific action described in the action column 510.
  • In this way, conditions can be set on the distance and relative angle between the camera and individual parts of an object, so that the object's reactions can be prepared with rich variations. Because the position and orientation of the information processing apparatus can be changed with a high degree of freedom relative to the position and orientation of objects that change from moment to moment, using these combinations as input values lets the user enjoy serendipitous output results.
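The condition described above, an angle θ between the part normal and the screen normal combined with the distance A to the screen center, can be sketched as follows. The threshold values, part names, and the structure of the table are illustrative stand-ins for the specific action setting information 500, not values from the disclosure.

```python
import numpy as np

# Illustrative counterpart of the specific action setting information 500:
# (character, target part) -> (angle threshold [deg], distance threshold, action id)
SPECIFIC_ACTIONS = {
    ("cat", "head_top"): (150.0, 0.3, "jump_toward_camera"),
    ("cat", "hand"):     (150.0, 0.2, "hold_out_item"),
}

def angle_between_deg(n1, n2):
    """Angle between two normal vectors, in degrees (0..180)."""
    cos = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def check_specific_action(character, part, part_point, part_normal,
                          screen_center, screen_normal):
    """Returns the action id if the camera 'looks into' the part, else None."""
    key = (character, part)
    if key not in SPECIFIC_ACTIONS:
        return None
    angle_min, dist_max, action = SPECIFIC_ACTIONS[key]
    theta = angle_between_deg(part_normal, screen_normal)   # relative angle theta
    dist = np.linalg.norm(part_point - screen_center)       # distance A
    if theta >= angle_min and dist <= dist_max:
        return action
    return None
```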
  • FIG. 20 is a flowchart illustrating a processing procedure by which the information processing unit 114 and the output image generation unit 118 update the display image in response to changes in the position and orientation of the camera. This processing is performed in parallel with the application execution processing shown in FIG. 10. First, the information processing unit 114 constantly acquires information on the position and orientation of the camera from the space definition unit 112 and monitors whether they have changed (S80). If there is no change, only the monitoring is continued (N in S80).
  • When a change occurs (Y in S80), the output image generation unit 118 acquires a new camera coordinate system from the space definition unit 112 (S82). This means that the screen 140 in FIG. 6 is moved in response to the change in the position and orientation of the camera.
  • Next, the specific action setting information 500 of FIG. 18 is referred to, and it is determined whether the distance and relative angle between each object being drawn at that time and the camera satisfy a set condition (S84). If not (N in S84), the output image generation unit 118 projects and redraws each object onto the moved screen and updates the display image by superimposing the result on the captured image (S88). In this case, of course, the movement of the objects corresponding to the application processing is also expressed.
  • If a condition is satisfied (Y in S84), the information processing unit 114 acquires the identification information of the specific action associated with the condition from the specific action setting information 500. Then, based on the action generation program corresponding to that identification information, the character is made to perform the specific action (S86). The output image generation unit 118 then projects and redraws each object, including the character performing the specific action, onto the moved screen, and updates the display image by superimposing the result on the captured image (S88).
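Putting S80 through S88 together, one pass of the update might be organized roughly as below; the unit interfaces are assumed names, and check_specific_action refers to the sketch shown earlier.

```python
def update_display(space_definition, info_processing, image_generator):
    """One pass of the S80-S88 procedure sketched above (names assumed)."""
    if not space_definition.camera_pose_changed():           # S80
        return
    camera = space_definition.current_camera_coordinates()   # S82
    for obj in info_processing.drawn_objects():               # S84
        action = check_specific_action(obj.character, obj.part,
                                       obj.part_point, obj.part_normal,
                                       camera.screen_center, camera.screen_normal)
        if action is not None:
            info_processing.run_action_program(obj, action)   # S86
    frame = image_generator.render(info_processing.drawn_objects(), camera)
    image_generator.superimpose_and_present(frame)            # S88
```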
  • In the examples so far, the corresponding processing is started by touching the front touch pad 21 at the position of an icon on the display screen.
  • On the other hand, the icons are rendered as if they too were placed on a plane in real space, such as the table on which the marker is placed. Therefore, as another operation method, a touch on an icon may be recognized when the user actually reaches out to the position in real space where the icon appears to be placed on the screen and touches the plane, such as the table, at that position.
  • FIG. 21 shows an example of a display screen in a mode in which icon operation is realized by finger movement in real space.
  • In this example, the user has put out the right hand 222 within the field of view of the camera while the screen example 148 shown in FIG. 8 is displayed.
  • Meanwhile, the user's left hand holds the information processing apparatus 10 and photographs the real space from a position closer to the table than in the case of FIG. 8.
  • As a result, the display is a mixture of the captured image, which includes the marker 128a and the user's right hand 222, and objects drawn with computer graphics, such as the cat character 150a and the icon 152c.
  • In this mode, the user's right hand 222 in the captured image is recognized by existing hand recognition technology.
  • For example, when the finger stays within the area of the icon 152c in the captured image for longer than a predetermined threshold time, it is determined that a touch has been made.
  • Alternatively, it may be determined that the finger has touched the area on the table in real space corresponding to the icon 152c by tracking feature points of the finger using the SLAM method or the like. If it is determined that a touch has been made, the information processing unit 114 starts the processing corresponding to the icon.
  • Hand recognition and feature point tracking are performed continually by the image analysis unit 108. Therefore, when the touch determination is made based on finger movement in real space, the information processing unit 114 sequentially notifies the image analysis unit 108 of the position of the currently displayed icon in the world coordinate system, and the image analysis unit 108 performs the touch determination in the three-dimensional space and notifies the information processing unit 114 of the result. When the touch determination is made within the captured image, the information processing unit 114 acquires the position of the finger in the captured image from the image analysis unit 108 as needed and performs the touch determination itself.
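A dwell-based touch determination on the captured image, one of the two determination methods described above, could be sketched as follows; the dwell time and the way the fingertip position and the icon rectangle are obtained are assumptions.

```python
import time

class DwellTouchDetector:
    """Declares a touch when the fingertip stays inside an icon's image-space
    rectangle for longer than dwell_time seconds (threshold value assumed)."""
    def __init__(self, dwell_time: float = 0.8):
        self.dwell_time = dwell_time
        self.enter_time = None

    def update(self, fingertip_xy, icon_rect) -> bool:
        x, y = fingertip_xy
        left, top, right, bottom = icon_rect
        inside = left <= x <= right and top <= y <= bottom
        if not inside:
            self.enter_time = None
            return False
        if self.enter_time is None:
            self.enter_time = time.monotonic()
        return time.monotonic() - self.enter_time >= self.dwell_time

# Per frame: fingertip_xy comes from hand recognition on the captured image,
# icon_rect from projecting the icon's world-space area onto the image.
```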
  • In this state, the virtually drawn icon 152c appears to be placed on the table in real space while the user's actual right hand 222 is above the icon 152c, so the drawn object is sandwiched between elements of the photographed image. Therefore, the output image generation unit 118 erases the portion of the icon 152c that should be hidden by the right hand 222 using general hidden surface removal processing.
  • At this time, the contour of the right hand 222 can be specified by the hand recognition technique described above or by an existing contour tracking technique.
  • Furthermore, the output image generation unit 118 creates a sense of reality by rendering a shadow 224 of the finger as the finger approaches the icon 152c. This processing can also be realized by a general shading technique; a global illumination model may be used, or a light source may be estimated from the captured image and a shadow cast by ray tracing or the like.
  • According to the present embodiment described above, when a marker placed in real space is photographed with an information processing apparatus equipped with a camera, the information processing apparatus starts information processing corresponding to the marker.
  • The object image is superimposed on the photographed image so that objects drawn by computer graphics appear to exist in the real space being photographed, and objects such as the tools used in the game are moved according to the movement of the information processing apparatus.
  • In addition, the movement of a character is changed depending on the distance and relative angle between the camera and the character.
  • Operation and display objects, including icons and explanatory notes, are also drawn as if they were placed in real space.
  • The icons can be operated either by touching the touch pad on the display screen of the information processing apparatus or by touching the surface in real space on which the icon is virtually placed. In this way, all operations and information display can be performed within the augmented reality space shown on the display screen, realizing a user interface that preserves the world view. In addition, operation is more natural and intuitive than with a general input device or a cursor.
  • Since the objects to be drawn are generated in correspondence with a marker whose size is known, they can be drawn assuming their size in real space. It is therefore also possible to make a character the same size as an actual human or to make a tool full size, and the resulting on-screen world can produce the sensation of actually playing with the character in one's own room.
  • When the object size is variable, operability can be maintained even if the size changes, by adjusting the sensitivity of the movement of objects such as tools with respect to the movement of the information processing apparatus according to that size.
  • 10 information processing apparatus, 20 display device, 21 front touch pad, 31 rear camera, 60 CPU, 62 GPU, 64 main memory, 66 storage, 67 motion sensor, 70 operation unit, 100 control unit, 102 input information acquisition unit, 104 captured image acquisition unit, 106 captured image storage unit, 108 image analysis unit, 109 marker correspondence information storage unit, 110 marker detection unit, 112 space definition unit, 114 information processing unit, 116 object data storage unit, 118 output image generation unit.
  • the present invention can be used for information processing apparatuses such as computers, game machines, and information terminals.

Abstract

When a marker (128b) which is included in a photographic image is detected, information processing corresponding to the marker (128b) is commenced, and corresponding objects such as a character (150a) and icons (152a, 152b, 152c) are arranged in a three-dimensional coordinate system corresponding to a subject space for rendering on the photographic image and instantly displayed. The icons (152a, 152b, 152c) appear as if placed in the plane on which the marker (128b) is placed and are set so as to be manipulable according to contact on a touch pad on a display screen or pointing by a finger to the corresponding location in the subject space.

Description

Information processing apparatus and information processing method
 The present invention relates to an information processing apparatus and an information processing method that perform information processing in response to user operations.
 Information processing devices such as portable game machines and PDAs (Personal Digital Assistants) have become widespread. In recent years, many information processing apparatuses have been equipped with communication functions, and multifunctional information processing apparatuses that integrate the functions of a mobile phone, a PDA, and the like into one device, such as smartphones, have also appeared. Such information processing apparatuses include a large-capacity memory and a high-speed processor, and users can enjoy various applications by installing application programs on them.
 For example, augmented reality (AR) technology, in which the real world and a virtual world are merged by superimposing an object drawn with three-dimensional graphics on an image captured by a camera of the information processing apparatus to form a display image, has also been proposed (see, for example, Patent Document 1).
JP 2013-92964 A
 The use of AR technology can be said to merge the real world and the virtual world through an approach different from constructing a virtual world based on recognition of the user's movements. These techniques not only provide visual enjoyment, but are also considered useful for operating a device and understanding its output intuitively and easily. However, the current range of application of AR is limited, and application to a wider range is desired.
 The present invention has been made in view of such problems, and an object thereof is to provide a technique by which AR technology can be used effectively in information processing.
 One aspect of the present invention relates to an information processing apparatus. The information processing apparatus includes: a captured image acquisition unit that acquires data of a captured image from a camera capturing a real space; an image analysis unit that analyzes the captured image and detects a marker present in the captured space; a space definition unit that specifies the relative positional relationship between the camera and the captured space and defines a three-dimensional coordinate system corresponding to the captured space and a screen corresponding to the field of view of the camera; an information processing unit that executes information processing corresponding to the detected marker and arranges a virtual object corresponding to the marker in the three-dimensional coordinate system; and an output image generation unit that generates an output image by superimposing, on the captured image, an object image obtained by projecting the virtual object onto the screen, and outputs the output image to a display device. The information processing unit arranges, as a virtual object, an operation object that serves as a means for user operation of the information processing.
 Another aspect of the present invention relates to an information processing method. This information processing method is performed by an information processing apparatus using a captured image, and includes the steps of: acquiring data of the captured image from a camera capturing a real space and storing it in a memory; analyzing the captured image and detecting a marker present in the captured space; specifying the relative positional relationship between the camera and the captured space and defining a three-dimensional coordinate system corresponding to the captured space and a screen corresponding to the field of view of the camera; executing information processing corresponding to the detected marker and arranging a virtual object corresponding to the marker in the three-dimensional coordinate system; and generating an output image by superimposing, on the captured image read from the memory, an object image obtained by projecting the virtual object onto the screen, and outputting the output image to a display device. In the arranging step, an operation object that serves as a means for user operation of the information processing is arranged as a virtual object.
 It should be noted that any combination of the above constituent elements, and any conversion of the expression of the present invention between a method, an apparatus, a system, a recording medium, a computer program, and the like, are also effective as aspects of the present invention.
 According to the present invention, AR technology can be used effectively for information processing such as games.
FIG. 1: (a) is a diagram showing the front surface of the electronic device, and (b) is a diagram showing the back surface of the electronic device.
FIG. 2: (a) is a diagram showing the top surface of the electronic device, (b) is a diagram showing the bottom surface of the electronic device, and (c) is a diagram showing the left side surface of the electronic device.
FIG. 3: A diagram showing the circuit configuration of the electronic device.
FIG. 4: A diagram showing the functional blocks of the information processing apparatus in the present embodiment.
FIG. 5: A diagram for explaining an example use environment of the information processing apparatus in the present embodiment.
FIG. 6: A diagram illustrating the relationship between real space and the coordinate systems used for drawing objects in the present embodiment.
FIG. 7: A flowchart showing a processing procedure by which the information processing apparatus in the present embodiment executes a game involving object drawing.
FIG. 8: A diagram showing an example screen displayed on the display device at the application execution stage in the present embodiment.
FIG. 9: A diagram showing an example structure of data referred to when the marker detection unit performs marker detection and identifies the corresponding application in the present embodiment.
FIG. 10: A flowchart showing an example processing procedure by which the information processing unit executes an application corresponding to a marker in the present embodiment.
FIG. 11: A diagram showing an example display screen of explanatory notes in the present embodiment.
FIG. 12: A diagram showing an example of how the screen changes when the user brings the camera close to the marker in real space from the state in which the screen example of FIG. 11 is displayed.
FIG. 13: A diagram showing an example screen displayed in the battle mode in the present embodiment.
FIG. 14: A diagram showing an example screen displayed in the watching mode in the present embodiment.
FIG. 15: A diagram showing an example structure of data referred to when the information processing unit specifies the objects to be drawn corresponding to a marker in the present embodiment.
FIG. 16: A diagram showing an example of a screen displayed in the special battle mode in the present embodiment.
FIG. 17: A flowchart showing a processing procedure for drawing objects in the battle mode when changing the size of objects is permitted in the present embodiment.
FIG. 18: A diagram showing an example structure of data referred to by the information processing unit to cause a character to perform a specific action during application execution in the present embodiment.
FIG. 19: A diagram for explaining how the conditions set in the specific action setting information of FIG. 18 are expressed.
FIG. 20: A flowchart showing a processing procedure by which the information processing unit and the output image generation unit update the display image in response to changes in the position and orientation of the camera in the present embodiment.
FIG. 21: A diagram showing an example display screen in a mode in which icon operation is realized by finger movements in real space.
 First, an example external configuration and an example circuit configuration of the information processing apparatus according to the present embodiment will be described. The information processing apparatus shown here is only an example, however, and other types of electronic devices or terminal apparatuses may be used.
 FIG. 1(a) shows the front surface of the information processing apparatus 10. The information processing apparatus 10 is formed by a horizontally long casing, and the left and right areas gripped by the user have arc-shaped outlines. A rectangular touch panel 50 is provided on the front surface of the information processing apparatus 10. The touch panel 50 includes a display device 20 and a transparent front touch pad 21 that covers the surface of the display device 20. The display device 20 is an organic EL (electroluminescence) panel and displays images. The display device 20 may instead be display means such as a liquid crystal panel. The front touch pad 21 is a multi-touch pad capable of detecting a plurality of points touched at the same time, and the touch panel 50 is configured as a multi-touch screen.
 On the right side of the touch panel 50, a △ button 22a, a ○ button 22b, a × button 22c, and a □ button 22d (hereinafter collectively referred to as the “operation buttons 22”) are provided at the vertices of a rhombus. On the left side of the touch panel 50, an up key 23a, a left key 23b, a down key 23c, and a right key 23d (hereinafter collectively referred to as the “direction keys 23”) are provided. By operating the direction keys 23, the user can input the eight directions of up, down, left, right, and the diagonals.
 A left stick 24a is provided below the direction keys 23, and a right stick 24b is provided below the operation buttons 22. The user tilts the left stick 24a or the right stick 24b (hereinafter collectively referred to as the “analog sticks 24”) to input a direction and an amount of tilt. An L button 26a and an R button 26b are provided at the left and right tops of the casing. The operation buttons 22, the direction keys 23, the analog sticks 24, the L button 26a, and the R button 26b constitute operation means operated by the user.
 A front camera 30 is provided in the vicinity of the operation buttons 22. On the left side of the left stick 24a and the right side of the right stick 24b, a left speaker 25a and a right speaker 25b (hereinafter collectively referred to as the “speakers 25”) that output sound are provided, respectively. A HOME button 27 is provided below the left stick 24a, and a START button 28 and a SELECT button 29 are provided below the right stick 24b.
 FIG. 1(b) shows the back surface of the information processing apparatus 10. A rear camera 31 and a rear touch pad 32 are provided on the back surface of the information processing apparatus 10. Like the front touch pad 21, the rear touch pad 32 is configured as a multi-touch pad. The information processing apparatus 10 is thus equipped with two cameras and two touch pads, on its front and back surfaces.
 FIG. 2(a) shows the top surface of the information processing apparatus 10. As described above, the L button 26a and the R button 26b are provided at the left and right ends of the top surface of the information processing apparatus 10. A power button 33 is provided to the right of the L button 26a, and the user turns the power on or off by pressing the power button 33. The information processing apparatus 10 has a power control function of transitioning to a suspended state when the period during which the operation means is not operated (no-operation time) continues for a predetermined time. Once the information processing apparatus 10 has entered the suspended state, the user can return it from the suspended state to the awake state by pressing the power button 33.
 The game card slot 34 is an insertion slot for inserting a game card; the figure shows a state in which the game card slot 34 is covered with a slot cover. An LED lamp that blinks while the game card is being accessed may be provided in the vicinity of the game card slot 34. The accessory terminal 35 is a terminal for connecting a peripheral device (accessory); the figure shows a state in which the accessory terminal 35 is covered with a terminal cover. A - button 36a and a + button 36b for adjusting the volume are provided between the accessory terminal 35 and the R button 26b.
 FIG. 2(b) shows the bottom surface of the information processing apparatus 10. The memory card slot 37 is an insertion slot for inserting a memory card; the figure shows a state in which the memory card slot 37 is covered with a slot cover. An audio input/output terminal 38, a microphone 39, and a multi-use terminal 40 are provided on the bottom surface of the information processing apparatus 10. The multi-use terminal 40 supports USB (Universal Serial Bus) and can be connected to other devices via a USB cable.
 FIG. 2(c) shows the left side surface of the information processing apparatus 10. A SIM card slot 41, which is an insertion slot for a SIM card, is provided on the left side surface of the information processing apparatus 10.
 FIG. 3 shows the circuit configuration of the information processing apparatus 10. The components are connected to one another by a bus 92. The wireless communication module 71 is configured by a wireless LAN module compliant with a communication standard such as IEEE 802.11b/g, and connects to an external network such as the Internet via a wireless access point or the like. The wireless communication module 71 may also have a Bluetooth (registered trademark) communication function. The mobile phone module 72 supports the third-generation (3rd Generation) digital mobile phone system compliant with the IMT-2000 (International Mobile Telecommunication 2000) standard defined by the ITU (International Telecommunication Union), and connects to the mobile phone network 4. Into the SIM card slot 41, a SIM card 74 on which a unique ID number for identifying the telephone number of the mobile phone is recorded is inserted. With the SIM card 74 inserted into the SIM card slot 41, the mobile phone module 72 can communicate with the mobile phone network 4.
 The CPU (Central Processing Unit) 60 executes programs and the like loaded into the main memory 64. The GPU (Graphics Processing Unit) 62 performs calculations necessary for image processing. The main memory 64 is composed of RAM (Random Access Memory) or the like and stores programs, data, and the like used by the CPU 60. The storage 66 is composed of NAND-type flash memory or the like and is used as a built-in auxiliary storage device.
 The motion sensor 67 detects the movement of the information processing apparatus 10, and the geomagnetic sensor 68 detects geomagnetism along three axes. The GPS control unit 69 receives signals from GPS satellites and calculates the current position. The front camera 30 and the rear camera 31 capture images and input image data. The front camera 30 and the rear camera 31 are composed of CMOS image sensors (Complementary Metal Oxide Semiconductor image sensors).
 The display device 20 is an organic EL display device and has light emitting elements that emit light when a voltage is applied between a cathode and an anode. In a power saving mode, the voltage applied between the electrodes is made lower than usual, so that the display device 20 can be dimmed and power consumption can be reduced. The display device 20 may instead be a liquid crystal panel display device provided with a backlight. In the power saving mode, reducing the amount of backlight puts the liquid crystal panel display device into a dimmed state and suppresses power consumption.
 In the interface 90, the operation unit 70 includes the various operation means of the information processing apparatus 10; specifically, it includes the operation buttons 22, the direction keys 23, the analog sticks 24, the L button 26a, the R button 26b, the HOME button 27, the START button 28, the SELECT button 29, the power button 33, the - button 36a, and the + button 36b. The front touch pad 21 and the rear touch pad 32 are multi-touch pads, and the front touch pad 21 is disposed over the surface of the display device 20. The speakers 25 output sound generated by the functions of the information processing apparatus 10, and the microphone 39 inputs sound from around the information processing apparatus 10. The audio input/output terminal 38 inputs stereo sound from an external microphone and outputs stereo sound to external headphones or the like.
 A game card 76 on which a game file is recorded is inserted into the game card slot 34. The game card 76 has a recording area to which data can be written; when it is inserted into the game card slot 34, data is written to and read from it by a media drive. A memory card 78 is inserted into the memory card slot 37. When the memory card 78 is inserted into the memory card slot 37, it is used as an external auxiliary storage device. The multi-use terminal 40 can be used as a USB terminal and, when connected with a USB cable 80, transmits and receives data to and from other USB devices. Peripheral devices are connected to the accessory terminal 35.
 The information processing apparatus 10 executes, in response to user operations, information processing such as games and electronic data creation, output of various contents such as electronic books, web pages, video, and music, communication, and so on. At this time, the necessary programs may be loaded from the various internal storage devices into the main memory 64 and all processing may be performed within the information processing apparatus 10 under the control of the CPU 60, or processing may be performed while requesting part of it from a server connected via a network and receiving the results. The types and forms of processing executed by the information processing apparatus 10 are thus varied and not particularly limited, but the case of executing a game will be described below as an example.
 In one embodiment described below, when the user wants to play a certain game, the user places a card bearing a design associated with that game in an arbitrary location, such as on a table in a room, and photographs it in real time with the rear camera 31 of the information processing apparatus 10 or the like. When the information processing apparatus 10 detects the presence of the card in the photographed image, it starts the game associated with the card. The user continues to photograph the real space, and the information processing apparatus 10 superimposes, on the photographed image of the real space, an image in which objects such as tools and characters used in the game are drawn by computer graphics, and displays the result on the display device 20. The user advances the game by operating the information processing apparatus 10 while photographing the real space. This makes it possible, for example, to enjoy a battle game against a virtual character with one's own room as the stage.
 The card that triggers the start of a game only needs to be detectable from the photographed image and associable with a game in advance. Besides a card bearing a predetermined design, it may be a card or three-dimensional object on which at least one of a predetermined picture, character, or figure is drawn, or a card or three-dimensional object having a predetermined shape. Alternatively, it may be a display on which at least one of a predetermined shape, picture, character, or figure is shown, or an electronic device having such a display. Hereinafter, these are collectively referred to as “markers”. A marker may be used alone, or a single game may be selected by a combination of a plurality of markers. Furthermore, the target associated with a marker need not be a game unit; it may be a unit larger than a game, such as an application including the game, or a unit smaller than a game, such as a command to be input in the game, various parameters that determine the game environment, or a character or object to be drawn.
 FIG. 4 shows the functional blocks of the information processing apparatus 10. In this figure, the functions realized by the CPU 60, the GPU 62, the main memory 64, and the storage 66 of FIG. 3 are shown in particular as the control unit 100. In terms of hardware, each functional block included in the control unit 100 can be configured by the CPU 60, the GPU 62, the main memory 64, and the like as described above; in terms of software, the blocks are realized by programs loaded into the main memory 64 from the various storage devices in the information processing apparatus 10 or from an attached recording medium. Therefore, it will be understood by those skilled in the art that these functional blocks can be realized in various forms by hardware only, by software only, or by a combination thereof, and they are not limited to any one of these.
 The control unit 100 includes an input information acquisition unit 102 that acquires information related to user operations, a captured image acquisition unit 104 that acquires captured image data, a captured image storage unit 106 that stores captured image data, an image analysis unit 108 that analyzes captured images to perform marker detection and space definition, a marker correspondence information storage unit 109 that stores information related to the markers to be detected, an information processing unit 114 that performs information processing corresponding to the markers, an object data storage unit 116 that stores information related to the objects to be drawn, and an output image generation unit 118 that generates an output image by, for example, superimposing object images on the captured image.
 The input information acquisition unit 102 receives information related to user operations on the information processing apparatus 10 as input signals from the operation unit 70, the front touch pad 21, and the motion sensor 67. It then converts each input signal into operation content information according to predetermined rules and supplies the information to the captured image acquisition unit 104, the image analysis unit 108, and the information processing unit 114. As will be described later, the present embodiment is fundamentally based on operation through interaction with the image world displayed on the display device 20, so operation via the operation unit 70 of the information processing apparatus 10 is preferably limited to a minimum of situations, such as initial operations.
 The captured image acquisition unit 104 acquires captured image data by causing the rear camera 31 to start capturing in response to the user's operation to start processing. By using the rear camera 31, the user can naturally perform operations for moving the information processing apparatus 10 while viewing the display device 20, in order to change the field of view or to advance the game. The captured image acquisition unit 104 acquires captured image data in real time at a predetermined rate, assigns identification numbers as appropriate, supplies the data to the image analysis unit 108, and stores it in the captured image storage unit 106. The identification numbers are passed between the functional blocks in order to identify the captured image to be processed in the subsequent series of processes.
 The image analysis unit 108 includes a marker detection unit 110 that detects markers present in the field of view of the camera, and a space definition unit 112 that tracks objects existing in real space and the position and orientation of the camera and defines the coordinate systems used for drawing graphics. The marker detection unit 110 detects markers in the captured image based on the marker information registered in the marker correspondence information storage unit 109. As the marker detection technique, any of the various object recognition methods proposed so far in fields such as computer vision may be adopted.
 For example, the captured image is scored for each partial region by block matching between a marker image prepared in advance and the captured image, or by comparing feature amounts, and the presence of a marker is estimated in regions with high scores. As recognition methods based on feature amounts, the Random Forest method and the Random Ferns method can be used. Specifically, a group of decision trees learned using local patches randomly sampled from the prepared marker image is created in advance, and local patches including feature points detected from the captured image are input to the group of decision trees. The existence probability distribution of the marker is then obtained from the output results at the leaf nodes.
 Alternatively, the SURF (Speeded Up Robust Features) method, which compares the luminance gradients around feature points, may be used. The marker correspondence information storage unit 109 stores data representing the features of the markers in accordance with the technique used for marker detection. The above techniques have the notable advantage that there is no need to print a dedicated detection code on the marker, and that any image or three-dimensional object can potentially be used as a marker; however, the present embodiment is not limited to them. That is, a marker bearing a dedicated code may be used, and the marker detection unit 110 may detect the marker by reading that code.
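As one concrete, simplified stand-in for the region-scoring approach described above, the following sketch scores the captured image against a prepared marker image by normalized template matching using OpenCV; the score threshold and single-scale matching are simplifying assumptions, and this is not the Random Forest / Random Ferns procedure itself.

```python
import cv2

def find_marker(captured_bgr, marker_bgr, score_threshold: float = 0.7):
    """Scores image regions against a prepared marker image by normalized
    template matching and returns the best region if it exceeds the threshold."""
    captured = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    marker = cv2.cvtColor(marker_bgr, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(captured, marker, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_top_left = cv2.minMaxLoc(scores)
    if best_score < score_threshold:
        return None
    h, w = marker.shape
    x, y = best_top_left
    return (x, y, w, h), best_score  # bounding box and confidence score
```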
 By analyzing the captured image, the space definition unit 112 tracks the environmental shape formed by the objects existing in the real space being photographed (the captured space) and the position and orientation of the camera photographing it. It then defines a three-dimensional world coordinate system in the captured space and successively defines a camera coordinate system according to the movement of the camera. As a technique for tracking the environmental shape and the position and orientation of the camera, for example, the SLAM (Simultaneous Localization And Mapping) method is used.
 The SLAM method tracks the movement of feature points for each local patch including a feature point detected from the captured image, and updates predetermined state variables at each time step based on that movement. By using as the state variables the position and orientation (rotation angle) of the camera, its moving speed, its angular velocity, the position of at least one feature point of an object existing in the captured space, and so on, the positional relationship (distance and angle) between the captured space and the sensor surface of the camera, and hence the positional relationship between the world coordinate system and the camera coordinate system, can be obtained for each captured image.
 In addition, the camera may be a stereo camera, or infrared irradiation means and an infrared sensor may be separately provided, so that the distance from the camera to the subject is obtained by a known method and the environmental shape and the like are acquired. Alternatively, the position and orientation of the camera may be calculated based on the output signals of the motion sensor 67 and the geomagnetic sensor 68 provided in the information processing apparatus 10. Various means are thus conceivable for the image analysis performed by the marker detection unit 110 and the space definition unit 112, and since they are also described in, for example, Patent Document 1, detailed description is omitted here.
 The information processing unit 114 executes information processing, such as a game, associated with the marker detected by the marker detection unit 110. For this purpose, the marker correspondence information storage unit 109 stores each marker in association with the game to be started when that marker is detected. The marker correspondence information storage unit 109 further stores, in association with each game, the objects to be drawn in that game. The information processing unit 114 determines the placement of each object in the world coordinate system defined by the space definition unit 112 and requests the output image generation unit 118 to generate an output image that includes the drawing of those objects.
 In response to a request from the information processing unit 114, the output image generation unit 118 reads the captured image data from the captured image storage unit 106 and draws the objects on the captured image to generate an output image. The model data of each object is stored in the object data storage unit 116 together with its identification information. The drawing process performed here converts an object placed in the world coordinate system into the camera coordinate system defined from the camera position and orientation and projects it onto the screen; the basic procedure may be the same as in ordinary computer graphics. The generated output image data is output to the display device 20 via a frame memory and displayed immediately. During periods in which no marker is detected in the captured image, the output image generation unit 118 may output the captured image as the output image without modification.
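 The projection performed here can be pictured with a sketch like the one below, in which a point in world coordinates is transformed into camera coordinates by a rotation R and translation t and then projected with a pinhole model; the variable names and the intrinsic parameters fx, fy, cx, cy are illustrative assumptions.

# Illustrative sketch: project a world-space point onto the screen (pinhole model).
import numpy as np

def project_point(p_world, R, t, fx, fy, cx, cy):
    # R, t: camera pose such that p_cam = R @ (p_world - t)
    p_cam = R @ (np.asarray(p_world) - np.asarray(t))
    if p_cam[2] <= 0:
        return None                        # behind the camera
    u = fx * p_cam[0] / p_cam[2] + cx      # screen x in pixels
    v = fy * p_cam[1] / p_cam[2] + cy      # screen y in pixels
    return u, v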
 The object data storage unit 116 further stores information defining the actions of characters included among the objects. Specific examples are given later, but since in the present embodiment the distance and relative angle between the camera and a character can be determined continuously, the character is made to perform a predetermined action (hereinafter called a "specific action") when these parameters satisfy a predetermined condition. This realizes an interaction between the camera moved by the user and the virtual character.
 FIG. 5 is a diagram for explaining an example of the environment in which the information processing apparatus 10 of the present embodiment is used. In the figure, a table 122 is installed in a real space 120, and a clock 124 and a pencil stand 126 are placed on it. In such a real space 120, the user places a marker 128a on the table 122 and images it with the rear camera 31 of the information processing apparatus 10. In this example the marker 128a is a card on which a picture of a cat is drawn. The field of view of the rear camera 31 is, for example, the area indicated by the dotted rectangle 130.
 FIG. 6 illustrates the relationship between the real space and the coordinate system used for drawing objects, showing the real space 120 of FIG. 5 viewed from the upper right. As shown in FIG. 5, the clock 124, the pencil stand 126, and the marker 128a are placed on the table 122, and the camera's sensor plane is assumed to coincide with the screen 140. As described above, when the marker detection unit 110 detects the marker 128a in the captured image, the space definition unit 112 reads the actual size of the marker from the marker correspondence information storage unit 109 and compares it with the image of the marker in the captured image. From this comparison, the distance and rotation angle from the marker 128a to the screen 140, and hence the initial values of the camera position and orientation, can be derived.
 If a world coordinate system is defined, like the illustrated xyz coordinates, with the center of the marker 128a as the origin, the relationship between the world coordinate system and the camera coordinate system is determined, and as a result an object placed in the world coordinate system can be projected onto the screen 140. Furthermore, by having the space definition unit 112 track feature points of the images of the marker 128a, the table 122, the clock 124, the pencil stand 126, and so on in the captured image using the SLAM method or the like, the relationship between the two coordinate systems can be obtained continuously no matter how the camera moves or changes its orientation. As a result, changes in the object image caused by camera movement can also be represented accurately. The world coordinate system is not limited to the one illustrated, but by assuming that the image plane of the marker 128a is a table top or floor surface and defining that plane as a reference plane, objects can be represented as if placed on it.
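 As one simple way to picture the initial estimate, the sketch below derives the camera-to-marker distance from the known physical size of the marker and its apparent size in the image under a pinhole model; treating the marker as facing the camera head-on and the example focal length are simplifying assumptions (the embodiment also recovers the rotation angle).

# Illustrative sketch: rough distance estimate from the marker's apparent size.
def estimate_distance(real_width_mm, pixel_width, focal_length_px):
    # Pinhole relation: pixel_width / focal_length = real_width / distance
    return real_width_mm * focal_length_px / pixel_width

# e.g. a 60 mm wide card spanning 120 px with a 700 px focal length
# sits roughly 350 mm from the camera.
print(estimate_distance(60.0, 120.0, 700.0))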
 Next, the operation of the information processing apparatus 10 configured as described above will be described. FIG. 7 is a flowchart showing a processing procedure in which the information processing apparatus 10, triggered by marker detection, executes a game involving object drawing. In the example described below, the information processing apparatus does not start the game itself in response to the marker, but starts processing the game and other functions after accepting selections among various modes related to the game. Therefore, as a concept encompassing all of these, the target that the information processing apparatus 10 starts is called an "application".
 The flowchart of FIG. 7 is started, for example, when the user turns on the power of the information processing apparatus 10 or performs a processing start operation via the operation unit 70 or the like. First, the captured image acquisition unit 104 starts acquiring captured image data by causing the rear camera 31 to start imaging (S10). Thereafter, the captured image acquisition unit 104 sequentially acquires captured images (image frames) at a predetermined rate.
 The captured image data is supplied to the image analysis unit 108 and also stored in the captured image storage unit 106. The output image generation unit 118 first reads the captured image data from the captured image storage unit 106 and outputs it to the display device 20, thereby displaying the captured image as it is, in the manner of an electronic viewfinder (S12). This allows the user to move the information processing apparatus 10 while checking the image captured by the rear camera 31 on the display device 20, and to obtain the desired field of view.
 In parallel, the marker detection unit 110 of the image analysis unit 108 performs marker detection processing by analyzing the captured image (S14). For this purpose, the marker detection unit 110 refers to the data registered in the marker correspondence information storage unit 109 that represents the features of each marker. While no marker is detected, the output image generation unit 118, on being notified to that effect by the image analysis unit 108, continues to output the captured image to the display device 20 as it is (N in S16, S12). When a marker is detected (Y in S16), the space definition unit 112 determines the world coordinate system as shown in FIG. 6 and sets the screen 140 with respect to the world coordinate system according to the camera position and orientation (S18). Since the camera position and orientation may change continually, their movement is thereafter tracked by the SLAM method described above or the like.
 Meanwhile, the marker detection unit 110 identifies the application associated with the detected marker by referring to the marker correspondence information storage unit 109, and notifies the information processing unit 114 of the identification information of that application (S20). In response, the information processing unit 114 executes the notified application in cooperation with the output image generation unit 118 (S22).
 Here, as described above, objects such as characters and game implements associated with the application are placed in the world coordinate system, drawn by projecting them onto the screen 140, and superimposed on the captured image to generate the output image. In addition, the character is made to perform specific actions in response to camera movement, and the game is advanced in response to user operations. By sequentially displaying the generated output images on the display device 20, images that change with the movement of the camera and the progress of processing such as the game can be displayed.
 When the processing of the application ends, imaging continues until the user performs an operation that stops the overall processing, such as ending imaging; marker detection is performed and the corresponding application is executed (N in S26, S12 to S22). When an operation to end imaging is performed, the series of processing ends (Y in S26).
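 The overall loop of S10 to S26 can be summarized in pseudocode along the following lines; the callable names (capture_frame, detect_marker, and so on) are placeholders for the units described above, not actual APIs of the apparatus.

# Illustrative sketch of the main loop of FIG. 7 (S10-S26); all callables are placeholders.
def main_loop(camera, display, analyzer, app_runner, user):
    camera.start()                                     # S10
    while not user.requested_stop():                   # S26
        frame = camera.capture_frame()
        display.show(frame)                            # S12: electronic-viewfinder display
        marker = analyzer.detect_marker(frame)         # S14
        if marker is None:                             # S16: N
            continue
        analyzer.define_world_coordinates(marker)      # S18
        app = analyzer.application_for(marker)         # S20
        app_runner.run(app)                            # S22
    camera.stop()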
 FIG. 8 shows an example of the screen displayed on the display device 20 at the application execution stage of S22 in FIG. 7. The figure shows, for example, one screen of an application that has started executing as a result of imaging the real space 120 shown in FIG. 5 and detecting the marker 128a. In the screen example 148, a cat character 150a, an object associated with the application or with the marker 128a, is drawn on top of the marker 128b included in the captured image. Icons 152a, 152b, and 152c are further drawn so as to surround the marker 128b. Here, an "icon" may be anything for which pointing at the corresponding area starts an associated process or selects a corresponding item; its purpose, the process to be started, the type of selection target, and its shape are not limited. Button-like graphics such as those illustrated are therefore also called "icons".
 The icons 152a, 152b, and 152c are for selecting a function to be executed by the application, and consist of a "how to play" icon 152a that displays instructions on how to play, a "match" icon 152b for playing a game against a predetermined character, and a "watch" icon 152c for watching a match between multiple characters. As described above, the character 150a and the icons 152a, 152b, and 152c are first placed in the world coordinate system and then projected and drawn on the screen coinciding with the camera's sensor plane, so they can be represented as if placed on the table in the same plane as the marker 128b.
 The information processing unit 114 associates the areas of the front touchpad 21 corresponding to the positions of the icons 152a, 152b, and 152c on the display screen with the functions represented by the respective icons. As a result, when the user touches the front touchpad 21 at the position of a desired icon, the information processing unit 114 starts processing of the corresponding function. Since the icons 152a, 152b, and 152c are drawn as if placed on a real-world table, their positions on the screen change when the camera's field of view changes. The information processing unit 114 therefore continually obtains information on the drawing area of each icon from the output image generation unit 118 and updates the detection area of each icon on the front touchpad 21 in accordance with the movement of the camera.
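 A minimal sketch of how such touch regions might be kept in sync with the drawn icons is shown below; the rectangle-based hit test and the names used are assumptions for illustration.

# Illustrative sketch: refresh touch-detection regions from each icon's drawn bounding box
# and dispatch a touch on the front touchpad to the matching function.
def update_touch_regions(icon_bboxes):
    # icon_bboxes: dict mapping function name -> (x0, y0, x1, y1) in screen pixels,
    # recomputed every frame as the camera view changes.
    return dict(icon_bboxes)

def handle_touch(touch_xy, regions, handlers):
    x, y = touch_xy
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            handlers[name]()   # e.g. start the "match" mode
            return True
    return False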
 The character 150a is desirably represented three-dimensionally using three-dimensional computer graphics techniques. Even during periods in which the user performs no operation, the character moves its body or blinks, producing the sense of presence of a living creature. The icons 152a, 152b, and 152c may also be given three-dimensional shapes.
 FIG. 9 shows an example of the structure of the data stored in the marker correspondence information storage unit 109, which the marker detection unit 110 refers to when performing marker detection and identifying the corresponding application in S16 and S20 of FIG. 7. The marker correspondence information 300 includes an identification information column 302, a feature information column 304, a size column 306, and a corresponding application column 308. The identification information column 302 stores the identification number assigned to each registered marker.
 The feature information column 304 stores identification information for the marker's template image or for the data representing its feature values. In the figure, image names such as "cat image" and "fox image" are shown. The data bodies, such as the images and feature values themselves, are stored separately in association with the image names. Instead of image names, the feature information column 304 may store identification numbers of the image data or feature data, the data themselves, or pointers indicating the storage addresses of the data.
 The size column 306 stores the size of each marker. In the figure, since a rectangular card is assumed to be used as the marker, the size is written in the format "height × width (mm)", but the format may vary depending on the marker's shape. A combination of multiple kinds of parameters, such as size and shape, may also be used. The corresponding application column 308 stores the identification information of the application associated with each marker. In the figure, game names such as "air hockey game" and "card game" are shown, but an application identification number, the software itself, or a pointer indicating the storage address of the software may be stored instead.
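 The table of FIG. 9 might be held in memory along the lines of the following sketch; the field names simply mirror the columns described above, and the example sizes and identifiers are invented for illustration only.

# Illustrative sketch of the marker correspondence information 300 (FIG. 9).
from dataclasses import dataclass

@dataclass
class MarkerEntry:
    marker_id: str          # identification information column 302
    feature_data: str       # feature information column 304 (name, id, or pointer)
    size_mm: tuple          # size column 306: (height, width) in mm
    application: str        # corresponding application column 308

marker_table = [
    MarkerEntry("001", "cat image", (90, 55), "air hockey game"),
    MarkerEntry("002", "fox image", (180, 110), "air hockey game"),
]

def application_for(marker_id):
    return next((e.application for e in marker_table if e.marker_id == marker_id), None)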
 The marker detection unit 110 reads each registered marker image or its feature values based on the identification information in the feature information column 304 and uses it to detect markers in the captured image. By using the techniques described above for detection, markers can be detected with high robustness against changes in magnification. The marker detection unit 110 also identifies the application associated with a detected marker by referring to the corresponding application column 308. The space definition unit 112, for its part, defines the world coordinate system in accordance with the unit length of the real space based on the marker size written in the size column 306, and obtains the positional relationship between the camera and the subjects including the marker.
 As shown in the figure, markers and applications need not correspond one to one. Specifically, in the marker correspondence information 300, the marker with identification information "001" and the marker with identification information "002" both correspond to the "air hockey game". In this way, even for the same application, the characters or costumes that appear can be varied according to differences in the marker design, and the game version, difficulty level, background music, and so on can also be varied.
 Conversely, a combination of multiple markers may be associated with a single application. This makes it possible to create situations in which a given application or game can be executed only when several markers have been collected, thereby also offering the enjoyment of collecting markers. In this case, besides setting a combination of several individual items such as cards or three-dimensional objects as the markers, a combination of several stamp impressions collected on a single card, as in a stamp rally, may also be used.
 Also, by making the marker sizes differ, as with the marker with identification information "001" and the marker with identification information "002" in the figure, the size of the object representing the game environment, that is, the place where the game is played, such as the air hockey table in the air hockey game, may be varied. In the present embodiment, as described above, the unit length in the world coordinate system can be made to correspond to the unit length in the real space based on the marker size. This allows each object to be drawn assuming an arbitrary size in the real space. When a combination of multiple markers is associated with a single application as described above, the size of the objects may be made changeable according to the spacing between the placed markers.
 FIG. 10 is a flowchart showing an example of the processing procedure by which the information processing unit 114 executes the application corresponding to the marker in S22 of FIG. 7. Those skilled in the art will understand that the processing procedure and its contents can be modified in various ways depending on the contents of the application, and the present embodiment is not limited to this. The flowchart of FIG. 10 shows in particular the processing procedure for the application itself; processing performed in parallel in response to changes in the camera position and orientation is described later.
 First, when the information relating to the application corresponding to the detected marker is notified from the marker detection unit 110, the information processing unit 114 identifies the character, icons, and other objects associated with that application, and the output image generation unit 118 draws those objects on the captured image (S30). The modeling data of each object is read from the object data storage unit 116 in accordance with a request from the information processing unit 114. The initial screen displayed in this way is, for example, the screen example 148 shown in FIG. 8.
 If, on such a screen, the user requests display of the instructions by touching the "how to play" icon 152a (Y in S32), the output image generation unit 118 reads the instruction image from the object data storage unit 116 and superimposes it on the captured image (S34). At this point, rather than presenting the instruction image flat on the screen, it is desirable to display it without breaking the real-space world in the captured image. For example, by texture-mapping the instruction image onto the quadrilateral region of the captured image where the marker image appears, it can be represented as if a card bearing the instructions had been placed at the marker's position.
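 One way to picture this texture mapping is a perspective warp of the instruction page onto the marker's quadrilateral in the frame, as in the sketch below; the OpenCV-based approach and the corner ordering are illustrative assumptions rather than the embodiment's actual renderer.

# Illustrative sketch: warp an instruction page onto the marker's quadrilateral region.
import cv2
import numpy as np

def paste_on_marker(frame, page, marker_corners):
    # marker_corners: 4 (x, y) points of the marker region in the frame, in page-corner order.
    h, w = page.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(marker_corners)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(page, H, (frame.shape[1], frame.shape[0]))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]   # composite the warped page over the marker area
    return out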
 The display is continued until the user performs an operation to end the display of the instructions (N in S36, S34). If, while the instructions are displayed, a predetermined operation for starting a match against the character is performed, processing moves to the match mode of S42 to S48 (Y in S36, Y in S40). If a predetermined operation for watching a match between characters is performed, processing moves to the watching mode of S52 to S58 (Y in S36, N in S40, Y in S50). If some other operation to end the display of the instructions is performed, the display returns to the initial screen displayed in S30, as long as no operation to end the application itself has been performed (Y in S36, N in S40, N in S50, N in S38, S30). An icon for returning the display to the initial screen may be displayed together with the instructions.
 Likewise, if the user touches the "match" icon 152b on the initial screen displayed in S30, processing moves to the match mode of S42 to S48 (N in S32, Y in S40). If the user touches the "watch" icon 152c on that initial screen, processing moves to the watching mode of S52 to S58 (N in S32, N in S40, Y in S50).
 In the match mode, first, the initial positions and sizes of the objects to be drawn during the match are determined in the world coordinate system, and they are superimposed on the captured image by projecting them onto the screen (S42). The game is then advanced while moving the implements and the opponent character's objects appropriately in response to the user's operations on the information processing apparatus 10 (S44). The information processing unit 114 calculates the movement of the objects, and the output image generation unit 118 updates the display image by drawing the objects on the captured image at a predetermined frame rate (S46).
 In the present embodiment, imaging and display of the real space by the camera provided in the information processing apparatus 10 form part of the user interface. Therefore, by also using the movement of the information processing apparatus 10 itself, detected by the motion sensor 67, as a means of user operation in the game, a sense of unity is created in the series of actions including camera imaging, and the operation becomes easy for the user to understand. Other operation means may of course be used as appropriate. The processing of S44 and S46 is continued until the match ends or the user performs an end operation (N in S48, S44, S46). When the match ends or the like, the display is returned to the initial screen displayed in S30, as long as no operation to end the application itself has been performed (Y in S48, N in S38, S30).
 In the watching mode, first, the initial positions and sizes of the objects to be drawn during watching are determined in the world coordinate system, and they are superimposed on the captured image by projecting them onto the screen (S52). In this case, a situation in which multiple characters are about to start a match is displayed. The game is then advanced automatically so that the characters compete according to the levels set for each of them (S54).
 The information processing unit 114 calculates the movement of the objects, and the output image generation unit 118 updates the display image by drawing the objects on the captured image at a predetermined frame rate (S56). The processing of S54 and S56 is continued until the match between the characters ends or the user performs an operation to end watching (N in S58, S54, S56). When the watching ends or the like, the display is returned to the initial screen displayed in S30, as long as no operation to end the application itself has been performed (Y in S58, N in S38, S30). If an operation to end the application is performed at any stage, the processing of the application ends (Y in S38). The operation to end the application may be accepted at any time while the application is running.
 FIG. 11 shows an example of the instruction display screen displayed in S34 of FIG. 10. This screen is displayed, for example, after the "how to play" icon 152a is touched on the initial screen shown in FIG. 8. This example also assumes that, based on the captured image of the real space 120 shown in FIG. 5, the marker with identification information "001" in the marker correspondence information 300 shown in FIG. 9 has been detected and the "air hockey game" application is being executed.
 In the screen example 160, an instruction image 164a, a cat character 150b, an icon 162a for returning the instructions to the previous page, and an icon 162b for advancing to the next page are drawn on the captured image. To display the instruction image 164a, image data such as the image 164b is stored in the object data storage unit 116. The instruction image may consist of multiple pages, in which case the icons 162a and 162b are drawn. As described above, by texture-mapping the instruction image onto the region where the marker originally appears, it can be represented as if placed on the table in the real space, as illustrated.
 The image prepared as the instruction image may be a still image as illustrated, or a moving image. Also, since an application including a game is assumed in the present embodiment, an image explaining how to play it is displayed, but the content to be displayed is not limited to this. That is, the displayed content may differ depending on the content of the application, such as an electronic book, a web page, or a movie.
 The icon 162a, the icon 162b, and the character 150b are drawn by the same procedure as described with reference to FIG. 8. When an operation to turn the page back or forward is performed via the icons 162a and 162b, the instruction image 164a is replaced with the image of the previous or next page using a natural transition, for example by cross-fading. Alternatively, a page-turning animation may be inserted. The character 150b is shown as having stood up and moved from its state on the initial screen shown in FIG. 8 so that the instructions can be seen. The character 150b may further be represented as if it were swapping the instructions or turning the pages.
 FIG. 12 shows an example of how the screen changes when, starting from the state in which the screen example 160 shown in FIG. 11 is displayed, the user brings the camera closer to the marker in the real space. In the screen example 170, the instruction image 164c, the icons 162c and 162d, and the character 150c are the same objects as the instruction image 164a, the icons 162a and 162b, and the character 150b in the screen example 160 of FIG. 11.
 In the present embodiment, since the positions of the objects in the world coordinate system are fixed, bringing the camera closer causes them to be shown in close-up or partly outside the field of view, just like physical objects in the real space. This gives the impression that the instructions, icons, and so on are actually placed on the table. For example, when the instructions are too small to read easily in the state of the screen example 160, bringing the camera closer puts the instructions into a state where they are viewed from nearby.
 FIG. 13 shows an example of the screen in the match mode displayed in S46 of FIG. 10. This screen is displayed, for example, immediately after the "match" icon 152b is touched on the initial screen shown in FIG. 8, or during a match. As in FIG. 11, this example also assumes that the "air hockey game" application is being executed.
 In the screen example 180, an air hockey table 182, a puck 186, mallets 184a and 184b, and a cat character 150c are drawn on the captured image. In this match mode, when the user moves the information processing apparatus 10, the mallet 184b on the near side moves on the air hockey table 182 in the same direction as the information processing apparatus 10, which allows the user to hit the puck 186 back.
 The movement of the information processing apparatus 10 is detected by the motion sensor 67, and its amount and speed of movement are converted into the amount and speed of movement of the mallet 184b drawn on the screen. The conversion rule is also stored in the object data storage unit 116, and the information processing unit 114 refers to it to determine the movement of the mallet at each time step and the movement of the puck rebounding off it. In this way, as in real air hockey, variation can be given to the puck's movement, for example by moving the mallet left or right at the moment of impact to change the return angle of the puck, or by moving the mallet forward to smash.
 In this example, the opponent cat character 150c is the same as the character drawn on the initial screen of FIG. 8 and elsewhere. In the match mode, the character 150c is drawn so as to face the camera, that is, the user, across the air hockey table. The cat character 150c is also shown holding a virtual information processing apparatus and moving its own mallet 184a by moving that apparatus. This produces the sense of presence of actually playing against the virtual character.
 The air hockey table 182 is drawn so as to be in a predetermined orientation and position with respect to the marker 128c appearing in the captured image. The air hockey table 182 in the figure shows a state in which only the top plate is floating in the air, but it may also be represented as a three-dimensional structure including the portion below the top plate, or a scoreboard or the like may additionally be represented. By making the top plate of the air hockey table 182 transparent or translucent, less of the captured image is hidden, giving a stronger impression that the game is taking place in the real space. The air hockey table 182 may be placed directly above the marker 128c, but if the top plate is made transparent or translucent, it is desirable to offset its position so that the design of the marker does not make the puck 186 and other objects hard to see.
 When feature points in the captured image are tracked by the SLAM method or the like described above, the relative relationship between the camera and the world coordinate system can also be determined based on objects around the marker, such as the clock and the pencil stand. Therefore, once the marker has been detected and the air hockey table 182 has been placed to correspond to it, the air hockey table 182 can continue to be drawn at the same position in the world coordinate system based on its position relative to the surrounding objects, even if the marker leaves the field of view during game operations or the user removes the marker. The same applies to other objects such as characters and icons.
 FIG. 14 shows an example of the screen in the watching mode displayed in S56 of FIG. 10. This screen is displayed, for example, immediately after the "watch" icon 152c is touched on the initial screen shown in FIG. 8, or while watching. As in FIG. 11 and FIG. 13, this example also assumes that the "air hockey game" application is being executed.
 In the screen example 190, an air hockey table 194, a puck 198, mallets 196a and 196b, a cat character 150d, and a fox character 192 are drawn on the captured image. In this watching mode, the cat character 150d and the fox character 192 are drawn so as to face each other across the air hockey table 194. Each of them is shown moving its own mallet 196a or 196b by moving a virtual information processing apparatus. By changing the position and orientation of the camera, the user can watch this scene from various distances and angles. In the case of the figure, since the air hockey table is placed directly above the marker 128d in the real space, the user's actual movement consists of changing the distance and angle of the camera around the marker 128d.
 In the example of the figure, the cat character 150d is the same as the character drawn on the initial screen of FIG. 8 and in the match mode of FIG. 13. For example, the level set for the cat character 150d is raised as the user repeatedly plays against it in the match mode. In other words, a situation is created in which the user trains the cat character 150d into a player. Then, by having it fight other characters in the watching mode, the user can be offered the enjoyment of cheering for the cat character 150d that the user raised, as if sending it out to compete against others, and of making it stronger again in the match mode.
 FIG. 15 shows an example of the structure of data stored in the marker correspondence information storage unit 109, which the information processing unit 114 refers to in S30, S34, S42, and S52 of FIG. 10 when identifying the objects to be drawn in correspondence with the marker. This example assumes a mode in which, as indicated by the marker correspondence information 300 of FIG. 9, the character to be drawn is varied by the marker even for the same application. As described above, elements other than the character may be varied by the marker, or all elements may be determined uniquely by the application without variation. In the latter case, the branching of processing by marker is completed at the point when the application is identified by referring to the marker correspondence information 300 of FIG. 9, so the information processing unit 114 need not refer to the marker correspondence information storage unit 109.
 The object information 400 includes an application column 402, a marker column 404, a first character column 406, a second character column 408, a display icon column 410, an instruction image column 412, and an implement column 414. The application column 402 stores application identification information. This identification information corresponds to the identification information stored in the corresponding application column 308 of the marker correspondence information 300 in FIG. 9. In the figure, game names such as "air hockey game" and "card game" are shown, but, as with the marker correspondence information 300 of FIG. 9, an application identification number, the software itself, or a pointer indicating the storage address of the software may be stored.
 The marker column 404 stores marker identification numbers. These identification numbers correspond to the identification numbers stored in the identification information column 302 of the marker correspondence information 300 in FIG. 9. When notified by the marker detection unit 110 of the identification information of the application corresponding to the detected marker and the identification number of that marker, the information processing unit 114 identifies, based on them, the objects to be drawn from the object information 400.
 The first character column 406 stores the identification information of the first character, which in the example of FIG. 10 appears in all modes. The second character column 408 stores the identification information of the second character, which in the example of FIG. 10 serves as the opponent in the watching mode. In the figure, the first character column 406 and the second character column 408 show the names of character models such as "cat" and "fox", but identification numbers or the like may also be used. The modeling data of each character is stored in the object data storage unit 116 in association with the character's identification information. The display icon column 410, the instruction image column 412, and the implement column 414 follow the same notation and have the same relationship with the data bodies.
 Only one of the first character and the second character may be varied by the marker, or the first character and the second character may each be specified independently by two markers. The display icon column 410 stores the identification information of the icons drawn in S30 of FIG. 10. In the figure, icon names such as "how to play", "match", and "watch" are shown. These correspond respectively to the "how to play" icon 152a, the "match" icon 152b, and the "watch" icon 152c in the screen example 148 of FIG. 8.
 The instruction image column 412 stores the identification information of the instruction images drawn in S34 of FIG. 10. In the figure, the names of groups of instruction images spanning multiple pages, such as "hockey operation (1), (2), ...", are shown. One of those pages corresponds to the instruction images 164a and 164b of FIG. 11. The implement column 414 stores the identification information of the game implement objects drawn in the match mode and the watching mode in the example of FIG. 10. In the figure, the names of implement object models such as "air hockey table", "mallet", and "puck" are shown. These objects correspond to the air hockey tables 182 and 194, the pucks 186 and 198, and the mallets 184a, 184b, 196a, and 196b in the screen examples of FIG. 13 and FIG. 14.
 The information processing unit 114 identifies the objects to be drawn in each mode of the application by referring to the object information 400, and requests the output image generation unit 118 to draw them. The output image generation unit 118 reads the modeling data stored in the object data storage unit 116 based on the identification information and draws each object. At this time, the information processing unit 114 places the objects to be drawn based on the position and size of the marker in the world coordinate system defined by the space definition unit 112. For example, as described above, by preparing markers of multiple sizes even for the same application, the sizes of objects such as the character and the air hockey table may be varied according to the marker size.
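 The lookup described here might be organized along the lines of the following sketch; the dictionary layout and the example entries simply mirror the columns of FIG. 15 and are not a prescribed format.

# Illustrative sketch: selecting the objects to draw from the object information 400 (FIG. 15).
object_info = {
    # (application, marker id) -> objects to draw; the values mirror FIG. 15 and are illustrative.
    ("air hockey game", "001"): {
        "first_character": "cat",
        "second_character": "fox",
        "icons": ["how to play", "match", "watch"],
        "instruction_images": ["hockey operation (1)", "hockey operation (2)"],
        "implements": ["air hockey table", "mallet", "puck"],
    },
}

def objects_for(application, marker_id):
    return object_info.get((application, marker_id))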
 Alternatively, the object sizes may be varied according to the game's progress or the user's request, regardless of the marker size. For example, a special match mode may be provided in which, when the character the user has raised defeats the opponent character in the watching mode, the user can play against the raised character made as large as a human in the real space, with the implement objects also at full size.
 FIG. 16 shows an example of the screen displayed in the special match mode described above. The screen example 200 basically has the same configuration as the screen example 180 of the normal match mode shown in FIG. 13. It differs, however, in that the air hockey table 202 is full size and the cat character 150e is about the same size as a human. For this reason, compared with the case of FIG. 13, the user is imaging the real space from a position withdrawn from the table on which the marker 128d is placed. The placement of the air hockey table 202 is also adjusted appropriately according to its change in size. In the figure, the air hockey table 202 is placed so as to overlap the marker 128d.
 FIG. 17 is a flowchart showing the processing procedure for drawing objects in the match mode of FIG. 10 when such changes in object size are permitted. The information processing unit 114 first determines the sizes and placement of the objects in the world coordinate system (S70). As described above, the object size is determined either by deriving it uniquely from the marker size according to a predetermined rule, or by referring to a setting value for the special match mode.
 Next, when there is an implement object, such as the mallet, that moves in response to the movement of the information processing apparatus 10, the information processing unit 114 adjusts the sensitivity of that implement object's movement to the movement of the information processing apparatus 10 (S72). Taking the air hockey game as an example, the range over which the mallet can move differs greatly between a tabletop-sized air hockey table as shown in FIG. 13 and a full-size air hockey table as shown in FIG. 16. If the rule for converting the movement of the information processing apparatus 10 into the movement of the mallet were fixed regardless of size, the user might need to move the information processing apparatus 10 unnaturally far when the size is large, or the mallet might move too much against the user's intention when the size is small.
 Therefore, when such implement objects exist, it is desirable to adjust the sensitivity according to the object size. Here, "sensitivity" may be any parameter representing the magnitude of the implement object's response to the movement of the information processing apparatus; the variable used is not particularly limited and may be a ratio of movement amounts, a ratio of velocities, a ratio of accelerations, and so on.
 Qualitatively, the larger the object size, the smaller the mallet's movement relative to the movement of the information processing apparatus 10. For example, the ratio Vm/Va of the mallet velocity Vm to the velocity Va of the information processing apparatus 10 is made inversely proportional to the magnification of the object size. That is, the mallet velocity Vm is obtained from the object size magnification S and the velocity Va of the information processing apparatus 10 as follows.

 Vm = kVa/S

 Here, k is the velocity ratio at the reference size S = 1, and is calculated in advance based on experiments or the like.
 The above equation is merely an example, however, and sensitivity adjustment may be performed by various methods. For example, the ratio of the mallet's amount of movement to the width of the air hockey table may be calculated from the amount of movement of the information processing apparatus 10. Since the processing of S70 and S72 determines the sizes and positions of the objects and the objects' movement relative to the movement of the information processing apparatus 10, the objects are placed and drawn accordingly at the start of the game, and thereafter the drawing process is repeated while moving the objects in response to user operations and the progress of the game (S74).
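 A minimal sketch of the sensitivity adjustment of S72, using the inverse-proportional rule above, is shown below; the clamping of the mallet to the table's half-width is an added assumption for illustration.

# Illustrative sketch: scale the mallet's velocity by the object size magnification S
# (Vm = k * Va / S) and keep the mallet within the table's half-width.
def mallet_velocity(device_velocity_va, size_magnification_s, k):
    return k * device_velocity_va / size_magnification_s

def step_mallet(x, device_velocity_va, size_magnification_s, k, dt, table_half_width):
    x += mallet_velocity(device_velocity_va, size_magnification_s, k) * dt
    return max(-table_half_width, min(table_half_width, x))   # assumed clamping to the table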
 Next, a mode in which the character is made to perform specific actions according to the distance and relative angle between the camera and the character will be described. FIG. 18 shows an example of the structure of data stored in the object data storage unit 116 and referred to by the information processing unit 114 to make a character perform specific actions during application execution. The specific action setting information 500 includes a character column 502, a target part column 504, a relative angle threshold column 506, a distance threshold column 508, and an action column 510. The character column 502 stores the identification information of the character that performs the action. In the figure, character model names such as "cat" are shown. This identification information corresponds to the identification information stored in the first character column 406 and the like of the object information 400 in FIG. 15.
 The target part column 504, the relative angle threshold column 506, and the distance threshold column 508 store combinations of conditions for making each character perform a specific action. Since these conditions can be set freely with respect to the distance and relative angle between the character and the camera, many other contents and forms of expression are conceivable for them. In the figure, conditions for determining, for each part of the character, a situation such as the camera having approached that part within a predetermined angle are shown as examples.
 FIG. 19 is a diagram for explaining how the conditions set in the specific action setting information 500 of FIG. 18 are expressed. A part 210 represents a part for which a condition is set, such as the character's head or hand. A threshold is set for the angle θ (0° ≤ θ ≤ 180°) between the normal vector n1 at a reference point 212 of the part (its apex in the figure) and the normal vector n2 of the screen 214 corresponding to the camera's sensor plane, and for the distance A between the reference point 212 and the center of the screen 214. Since the camera peers at the reference point 212 more directly as the angle θ approaches 180°, the specific action is typically triggered when the angle θ becomes greater than or equal to its threshold and the distance A becomes less than or equal to its threshold.
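 The geometric test of FIG. 19 could be written roughly as follows; the vector representation and the function name are illustrative assumptions.

# Illustrative sketch: test whether the camera is peering at a part's reference point (FIG. 19).
import numpy as np

def specific_action_triggered(n1, n2, ref_point, screen_center,
                              angle_threshold_deg, distance_threshold):
    n1 = n1 / np.linalg.norm(n1)                   # normal at the part's reference point 212
    n2 = n2 / np.linalg.norm(n2)                   # normal of the screen 214 (sensor plane)
    cos_theta = float(np.clip(np.dot(n1, n2), -1.0, 1.0))
    theta_deg = np.degrees(np.arccos(cos_theta))   # angle theta in 0..180 degrees
    dist = np.linalg.norm(np.asarray(ref_point) - np.asarray(screen_center))   # distance A
    return theta_deg >= angle_threshold_deg and dist <= distance_threshold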
 図18に戻り、対象部位欄504には条件を設定する対象部位とその基準点、相対角しきい値欄506には角度θのしきい値、距離しきい値欄508には距離Aのしきい値を格納する。そして動作欄510には、対象部位に対し設定した条件を満たしたときキャラクタが行う特定動作の識別情報を格納する。同図の設定例では、ネコのキャラクタの頭頂部を覗くようにカメラが動いたら、当該ネコのキャラクタはカメラの方向へジャンプする(特定動作設定情報500の2行目)。 Returning to FIG. 18, the target part column 504 stores the target part for which a condition is set and its reference point, the relative angle threshold column 506 stores the threshold of the angle θ, and the distance threshold column 508 stores the threshold of the distance A. The action column 510 stores identification information of the specific action the character performs when the condition set for the target part is satisfied. In the setting example in the figure, when the camera moves so as to look down at the top of the cat character's head, the cat character jumps toward the camera (second row of the specific action setting information 500).
 手先を覗くようにカメラが動いたら、当該キャラクタは握っているアイテムをカメラ方向へ差し出す動作をすることでユーザが当該アイテムを獲得する(特定動作設定情報500の3行目)。口元へカメラが近づいたら当該キャラクタはゲーム攻略に係るヒントなどをしゃべるようにする(特定動作設定情報500の4行目)。なお動作欄510に記載した特定動作の識別情報に対応させて、実際のキャラクタの動きを制御するプログラムなどを別途、オブジェクトデータ記憶部116に格納しておく。 When the camera moves so as to look into the character's hand, the character holds out the item it is gripping toward the camera, and the user thereby acquires the item (third row of the specific action setting information 500). When the camera approaches the character's mouth, the character speaks a hint or the like related to the game strategy (fourth row of the specific action setting information 500). A program or the like for controlling the actual movement of the character is stored separately in the object data storage unit 116 in correspondence with the identification information of the specific action described in the action column 510.
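 For illustration only, the rows of the specific action setting information 500 could be held in memory as simple records like the following; the field names and sample threshold values are assumptions based on the examples above, not the actual data layout of the object data storage unit 116.

```python
from dataclasses import dataclass

@dataclass
class SpecificActionRule:
    character: str             # character column 502
    target_part: str           # target part column 504 (part and its reference point)
    angle_threshold: float     # relative angle threshold column 506, degrees
    distance_threshold: float  # distance threshold column 508, world units
    action_id: str             # action column 510

# Sample rows corresponding to the examples described above (values assumed).
SPECIFIC_ACTION_RULES = [
    SpecificActionRule("cat", "top_of_head", 150.0, 0.30, "jump_toward_camera"),
    SpecificActionRule("cat", "hand",        150.0, 0.20, "hold_out_item"),
    SpecificActionRule("cat", "mouth",       120.0, 0.15, "speak_hint"),
]
```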
 このような条件をキャラクタごとに設定することにより、キャラクタに個性や特徴を与えるとともに、ユーザが何気なく行った情報処理装置10の移動でキャラクタの思わぬ反応が返ってきたり有益な情報を得たりする楽しみを提供することができる。本実施の形態では、オブジェクトとカメラという個体同士の距離、相対角度に対して条件を設定できるため、リアクションとしてのオブジェクトの動作も豊富なバリエーションで準備できる。時々刻々変化するオブジェクトの位置や姿勢に対し、高い自由度で情報処理装置の位置や姿勢を変化させられるため、それらの組み合わせを入力値とすることにより、偶発的な出力結果を楽しむことができる。 By setting such conditions for each character, the characters are given personality and distinctive traits, and the user is offered the enjoyment of receiving an unexpected reaction from a character, or obtaining useful information, through a casual movement of the information processing apparatus 10. In the present embodiment, conditions can be set on the distance and relative angle between two entities, the object and the camera, so the object's movements as reactions can be prepared in abundant variations. Since the position and orientation of the information processing apparatus can be changed with a high degree of freedom relative to the position and orientation of objects that change from moment to moment, using these combinations as input values allows the user to enjoy serendipitous output results.
 図20は、情報処理部114と出力画像生成部118がカメラの位置や姿勢の変化に対し表示画像を更新する処理手順を示すフローチャートである。この処理は、図10で示したアプリケーションを実行する処理と並行して行われる。まず情報処理部114は、空間定義部112からカメラの位置や姿勢にかかる情報を常時取得することで、それらが変化したか否かを監視する(S80)。変化がなければそのまま監視のみ継続する(S80のN)。 FIG. 20 is a flowchart illustrating a processing procedure in which the information processing unit 114 and the output image generation unit 118 update the display image in response to changes in the position and orientation of the camera. This process is performed in parallel with the process of executing the application shown in FIG. First, the information processing unit 114 constantly acquires information on the position and orientation of the camera from the space definition unit 112 to monitor whether or not they have changed (S80). If there is no change, only monitoring is continued (N in S80).
 変化があったら(S80のY)、出力画像生成部118は、新しいカメラ座標系を空間定義部112から取得する(S82)。これは図6におけるスクリーン140をカメラの位置、姿勢の変化に対応させて移動させたことを意味する。次に図18の特定動作設定情報500を参照し、その時点で描画しているオブジェクトとカメラとの距離、相対角度が、設定された条件を満たしているか否かを判定する(S84)。満たしていなければ(S84のN)、出力画像生成部118が、各オブジェクトを移動後のスクリーンに投影して描画し直し、撮影画像に重畳することによって表示画像を更新する(S88)。この場合も当然、アプリケーション処理に応じたオブジェクトの動きは表現する。 If there is a change (Y in S80), the output image generation unit 118 acquires a new camera coordinate system from the space definition unit 112 (S82). This corresponds to moving the screen 140 in FIG. 6 in accordance with the change in the position and orientation of the camera. Next, the specific action setting information 500 in FIG. 18 is referenced, and it is determined whether the distance and relative angle between the camera and the objects being drawn at that time satisfy the set conditions (S84). If not (N in S84), the output image generation unit 118 projects each object onto the moved screen, redraws it, and updates the display image by superimposing it on the captured image (S88). In this case as well, object movement corresponding to the application processing is of course still rendered.
 一方、特定動作を行う条件を満たしている場合(S84のY)、情報処理部114は、当該条件に対応づけられた特定動作の識別情報を特定動作設定情報500から取得する。そして当該識別情報に対応する動作発生用のプログラムなどに基づき、特定動作を行うようにキャラクタを動かす(S86)。そして出力画像生成部118は、移動後のスクリーンに、特定動作をしているキャラクタを含む各オブジェクトを投影して描画し直し、撮影画像に重畳することによって表示画像を更新する(S88)。 On the other hand, when the condition for performing the specific action is satisfied (Y in S84), the information processing unit 114 acquires the identification information of the specific action associated with the condition from the specific action setting information 500. Then, based on the action generation program corresponding to the identification information, the character is moved to perform the specific action (S86). Then, the output image generation unit 118 projects and redraws each object including the character performing the specific action on the screen after movement, and updates the display image by superimposing it on the captured image (S88).
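 The S80 to S88 procedure can be summarized by the following sketch. The helper objects stand in for the space definition unit 112, information processing unit 114, and output image generation unit 118, and their method names are assumptions, so this is an outline of the control flow rather than an implementation of those units.

```python
def match_rule(rules, obj, screen):
    """Return the first rule whose angle/distance condition this object-camera pair satisfies."""
    for rule in rules:
        if obj.name == rule.character and obj.part_condition_met(rule, screen):
            return rule
    return None

def update_display(space_def, info_proc, image_gen, rules):
    """One pass of the S80-S88 update procedure (illustrative outline)."""
    if not space_def.camera_pose_changed():                   # S80: no change -> keep monitoring
        return
    screen = space_def.get_camera_coordinate_system()         # S82: new camera coordinate system
    for obj in info_proc.drawn_objects():
        rule = match_rule(rules, obj, screen)                 # S84: check the set conditions
        if rule is not None:
            info_proc.play_specific_action(obj, rule.action_id)  # S86: trigger the specific action
    image_gen.redraw_and_superimpose(screen)                  # S88: project, redraw, overlay
```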
 次に、図8の画面例148のように表示させたアイコンを操作する別の方法について説明する。上記の実施形態では、表示画面上のアイコンの位置で前面タッチパッド21にタッチすることにより、対応する処理が開始された。一方、本実施の形態では実空間でマーカが置かれているテーブルなどの平面上にアイコンも置かれているように表現する。そこで別の操作方法として、画面上でアイコンが置かれているように見える実空間での位置に実際に手を伸ばしてテーブルなどの平面をタッチすることによりアイコンがタッチされたと認識するようにする。 Next, another method for operating an icon displayed as in the screen example 148 of FIG. 8 will be described. In the embodiment above, the corresponding process was started by touching the front touch pad 21 at the position of the icon on the display screen. In the present embodiment, on the other hand, icons are rendered as if they too were placed on a plane, such as the table on which the marker is placed in real space. As another operation method, therefore, the icon is recognized as having been touched when the user actually reaches out to the position in real space where the icon appears on screen to be placed and touches the plane, such as the table.
 図21は実空間での指の動きによりアイコン操作を実現する態様における表示画面の例を示している。この例は、図8で示した画面例148が表示されている状態でユーザがカメラの視野内に右手222を出した状態である。ユーザの左手は当然、情報処理装置10を把持して当該実空間を撮影しており、図8の場合よりテーブル側に寄った状態となっている。画面例220では、マーカ128a、ユーザの右手222を含む撮影画像と、コンピュータグラフィックスで描画したネコのキャラクタ150a、アイコン152cなどが混在している。 FIG. 21 shows an example of a display screen in a mode in which icon operation is realized by finger movement in real space. In this example, the user has extended the right hand 222 into the field of view of the camera while the screen example 148 shown in FIG. 8 is displayed. The user's left hand is of course holding the information processing apparatus 10 and photographing the real space, which is now closer to the table than in the case of FIG. 8. In the screen example 220, a captured image including the marker 128a and the user's right hand 222 coexists with the cat character 150a, the icon 152c, and the like drawn by computer graphics.
 この方法では、撮影画像におけるユーザの右手222を既存の手認識技術によって認識する。そしてその指が、撮影画像中のアイコン152cの領域で所定のしきい値より長い時間停止したら、タッチがなされたと判定する。あるいは指の特徴点をSLAM法などにより追跡することにより、実空間で指がテーブル上のアイコン152cに対応する領域をタッチしたことを判定する。タッチがなされたと判定されたら、情報処理部114は当該アイコンに対応する処理を開始する。 In this method, the user's right hand 222 in the captured image is recognized by the existing hand recognition technology. When the finger stops in the area of the icon 152c in the captured image for a time longer than a predetermined threshold, it is determined that the touch has been made. Alternatively, it is determined that the finger touches the area corresponding to the icon 152c on the table in the real space by tracking the feature point of the finger by the SLAM method or the like. If it is determined that a touch has been made, the information processing unit 114 starts processing corresponding to the icon.
 手認識や特徴点の追跡は、画像解析部108で随時行う。そのため、実空間での指の動きからタッチ判定を行う場合は、情報処理部114は、現在表示中のアイコンのワールド座標系における位置を画像解析部108に逐次、通知しておく。そして画像解析部108が3次元空間でのタッチ判定を行い、その結果を情報処理部114に通知する。撮影画像中でタッチ判定を行う場合は、情報処理部114は画像解析部108から撮影画像における指の位置を随時取得し、自らがタッチ判定を行う。 Hand recognition and tracking of feature points are performed at any time by the image analysis unit 108. Therefore, when the touch determination is performed based on the finger movement in the real space, the information processing unit 114 sequentially notifies the image analysis unit 108 of the position of the currently displayed icon in the world coordinate system. Then, the image analysis unit 108 performs touch determination in the three-dimensional space and notifies the information processing unit 114 of the result. When touch determination is performed in the captured image, the information processing unit 114 acquires the position of the finger in the captured image from the image analysis unit 108 as needed, and performs the touch determination by itself.
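 A minimal sketch of the dwell-time touch determination on the captured image follows; the frame timing, the 0.5 second threshold, and the rectangular icon region are assumptions chosen for illustration.

```python
DWELL_THRESHOLD_S = 0.5   # assumed: how long the fingertip must stay on the icon

def update_touch_state(fingertip_xy, icon_rect, dwell_start, now):
    """Return (touched, new_dwell_start) for one frame of the captured image.

    fingertip_xy: (x, y) fingertip position from the image analysis unit, or None.
    icon_rect:    (x, y, w, h) of the icon region in image coordinates.
    dwell_start:  time the fingertip entered the icon region, or None.
    now:          current time in seconds.
    """
    x, y, w, h = icon_rect
    inside = (fingertip_xy is not None
              and x <= fingertip_xy[0] <= x + w
              and y <= fingertip_xy[1] <= y + h)
    if not inside:
        return False, None                 # left the region: reset the timer
    if dwell_start is None:
        return False, now                  # just entered: start timing
    return (now - dwell_start) >= DWELL_THRESHOLD_S, dwell_start
```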
 図21の画面例220では、実空間に存在するテーブル上に仮想的に描画したアイコン152cが置かれ、さらにその上にユーザの実際の右手222が存在する、という、撮影画像に描画オブジェクトが挟まれた状態となる。そこで出力画像生成部118は、アイコン152cのうち右手222に隠れる部分を一般的な隠面消去処理により消去する。右手222の輪郭線も上記手認識技術や既存の輪郭線追跡技術によって特定できる。さらに出力画像生成部118は、指がアイコン152cに近づいていくのに応じて、指の影224をつけることにより臨場感を演出する。この処理も、一般的なシャドウイング手法により実現できる。このとき、大局照明モデルを用いてもよいし、撮影画像から光源を推定することでレイトレーシング法などによって影をつけてもよい。 In the screen example 220 of FIG. 21, the virtually drawn icon 152c is placed on a table that exists in real space, and the user's actual right hand 222 in turn lies over it, so that the drawn object is sandwiched between layers of the captured image. The output image generation unit 118 therefore erases the portion of the icon 152c hidden by the right hand 222 using general hidden surface removal processing. The contour of the right hand 222 can also be identified by the hand recognition technique described above or by an existing contour tracking technique. Furthermore, the output image generation unit 118 produces a sense of presence by adding a shadow 224 of the finger as the finger approaches the icon 152c. This processing can also be realized by a general shadowing technique. A global illumination model may be used at this time, or a shadow may be cast by ray tracing or the like after estimating the light source from the captured image.
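 As one possible realization of the occlusion step, icon pixels covered by the detected hand region can simply be masked out during compositing; this NumPy sketch assumes the image analysis unit supplies a boolean hand mask, which the text does not specify, and it omits the shadow rendering.

```python
import numpy as np

def composite_icon(frame_rgb, icon_rgba, icon_origin, hand_mask):
    """Overlay icon_rgba onto frame_rgb, skipping pixels covered by the hand.

    frame_rgb:   (H, W, 3) captured image.
    icon_rgba:   (h, w, 4) rendered icon with an alpha channel.
    icon_origin: (row, col) of the icon's top-left corner in the frame.
    hand_mask:   (H, W) boolean array, True where the hand occludes the scene.
    """
    out = frame_rgb.copy()
    r0, c0 = icon_origin
    h, w = icon_rgba.shape[:2]
    alpha = icon_rgba[:, :, 3:4] / 255.0
    # Hand pixels stay in front of the icon (simple hidden surface removal).
    visible = (~hand_mask[r0:r0 + h, c0:c0 + w])[:, :, None]
    blend = alpha * visible
    out[r0:r0 + h, c0:c0 + w] = (blend * icon_rgba[:, :, :3]
                                 + (1 - blend) * out[r0:r0 + h, c0:c0 + w]
                                 ).astype(out.dtype)
    return out
```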
 以上述べた本実施の形態によれば、ユーザがカメラを備えた情報処理装置を用いて実空間に置いたマーカを撮影することにより、当該情報処理装置でマーカに対応する情報処理を開始するようにする。当該情報処理においては、撮影中の実空間にコンピュータグラフィックスで描画したオブジェクトが存在するように、撮影画像にオブジェクトの画像を重畳する。そして情報処理装置の動きに応じてゲームに用いる道具などのオブジェクトが動くようにする。 According to the present embodiment described above, when the user photographs a marker placed in real space using an information processing apparatus equipped with a camera, the information processing apparatus starts information processing corresponding to the marker. In this information processing, an image of an object drawn by computer graphics is superimposed on the captured image so that the object appears to exist in the real space being photographed. Objects such as tools used in the game are then made to move according to the movement of the information processing apparatus.
 さらにキャラクタの動作が、カメラの位置とキャラクタとの距離、相対角度によって変化するようにする。これらのことにより、実空間に表れた仮想的な物体を動かしながらキャラクタと遊んだりコミュニケーションをとったりする、という複雑な状況を、情報処理装置を把持して動かす、というシンプルな動作で作り出すことができる。 Furthermore, the character's behavior is made to change according to the distance and relative angle between the camera position and the character. As a result, the complex situation of playing and communicating with a character while moving virtual objects that appear in real space can be created by the simple action of holding and moving the information processing apparatus.
 また描画するオブジェクトとして、アイコンや説明書きも含め、それらも実空間に置かれているように描画する。アイコンは、情報処理装置に備えた表示画面上のタッチパッド、あるいはアイコンが仮想的に置かれている実空間内の面にタッチすることによって操作できるようにする。このようにすることで、全ての操作や情報表示を表示画面内に存在する、拡張現実空間で行えるようになり、その世界観を維持したユーザインターフェースを実現できる。また一般的な入力装置やカーソルで操作するのと比較し、より自然かつ直感的な操作が可能となる。 The drawn objects, including icons and explanatory text, are also rendered as if they were placed in real space. Icons can be operated by touching the touch pad on the display screen of the information processing apparatus, or by touching the surface in real space on which the icon is virtually placed. In this way, all operations and information display can be performed in the augmented reality space that exists within the display screen, realizing a user interface that preserves that world view. Operation also becomes more natural and intuitive than operation with a general input device or a cursor.
 描画するオブジェクトは、サイズが既知のマーカに対応して生成するため、オブジェクトの実空間での大きさを仮定したうえでの描画が可能である。したがって、キャラクタを実際の人間と同じ大きさにしたり道具を実物大にしたりすることも可能となり、その結果表示される画面の世界では、自分の部屋などでキャラクタと実際に遊んでいるような状況を作り出すことができる。このようにオブジェクトのサイズを可変としたとき、情報処理装置の動きに対する道具などのオブジェクトの動きの感度をサイズに応じて調整することにより、サイズが変更しても操作性を維持することができる。 Since the drawn objects are generated in correspondence with a marker whose size is known, they can be drawn on the assumption of a specific size in real space. It therefore becomes possible to make a character the same size as an actual person or to render a tool at full scale, and in the on-screen world displayed as a result, a situation can be created as if the user were actually playing with the character in his or her own room. When the size of objects is made variable in this way, operability can be maintained even if the size changes, by adjusting the sensitivity of the movement of objects such as tools to the movement of the information processing apparatus according to their size.
 以上、本発明を実施の形態をもとに説明した。この実施の形態は例示であり、それらの各構成要素や各処理プロセスの組み合わせにいろいろな変形例が可能なこと、またそうした変形例も本発明の範囲にあることは当業者に理解されるところである。 The present invention has been described above based on an embodiment. This embodiment is an exemplification, and it will be understood by those skilled in the art that various modifications can be made to the combinations of the constituent elements and processing processes, and that such modifications are also within the scope of the present invention.
10 情報処理装置、 20 表示装置、 21 前面タッチパッド、 31 背面カメラ、 60 CPU、 62 GPU、 64 メインメモリ、 66 ストレージ、 67 モーションセンサ、 70 操作部、 100 制御部、 102 入力情報取得部、 104 撮影画像取得部、 106 撮影画像記憶部、 108 画像解析部、109 マーカ対応情報記憶部、 110 マーカ検出部、 112 空間定義部、 114 情報処理部、 116 オブジェクトデータ記憶部、 118 出力画像生成部。 10 information processing apparatus, 20 display device, 21 front touch pad, 31 rear camera, 60 CPU, 62 GPU, 64 main memory, 66 storage, 67 motion sensor, 70 operation unit, 100 control unit, 102 input information acquisition unit, 104 captured image acquisition unit, 106 captured image storage unit, 108 image analysis unit, 109 marker correspondence information storage unit, 110 marker detection unit, 112 space definition unit, 114 information processing unit, 116 object data storage unit, 118 output image generation unit.
 以上のように本発明はコンピュータ、ゲーム機、情報端末などの情報処理装置に利用可能である。 As described above, the present invention can be used for information processing apparatuses such as computers, game machines, and information terminals.

Claims (14)

  1.  実空間を撮影中のカメラから当該撮影画像のデータを取得する撮影画像取得部と、
     前記撮影画像を解析し、被写空間に存在するマーカを検出する画像解析部と、
     前記カメラと被写空間との相対的な位置関係を特定し、被写空間に対応する3次元座標系と前記カメラの視野に対応するスクリーンを定義する空間定義部と、
     検出されたマーカに対応する情報処理を実行するとともに、前記3次元座標系に、前記マーカに対応する仮想のオブジェクトを配置する情報処理部と、
     前記スクリーンに前記仮想のオブジェクトを投影してなるオブジェクト画像を前記撮影画像に重畳して出力画像を生成し、表示装置に出力する出力画像生成部と、
     を備え、
     前記情報処理部は、前記仮想のオブジェクトとして、前記情報処理に対するユーザ操作手段である操作用オブジェクトを配置することを特徴とする情報処理装置。
    A captured image acquisition unit that acquires data of the captured image from a camera that is capturing a real space;
    An image analysis unit that analyzes the captured image and detects a marker present in the subject space;
    A space definition unit that identifies a relative positional relationship between the camera and the subject space, and defines a three-dimensional coordinate system corresponding to the subject space and a screen corresponding to the field of view of the camera;
    An information processing unit that executes information processing corresponding to the detected marker and arranges a virtual object corresponding to the marker in the three-dimensional coordinate system;
    An output image generating unit that generates an output image by superimposing an object image formed by projecting the virtual object on the screen on the captured image, and outputs the output image to a display device;
    An information processing apparatus comprising the above units, wherein the information processing unit arranges, as the virtual object, an operation object serving as user operation means for the information processing.
  2.  前記画像解析部は、前記撮影画像を解析してユーザの手指をさらに検出し、
     前記情報処理部は、前記画像解析部における解析結果により、前記操作用オブジェクトの配置位置に対応する実空間の位置をユーザが指示したことを認識したとき、当該操作用オブジェクトに対応する処理を開始することを特徴とする請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein the image analysis unit further analyzes the captured image to detect a user's finger, and
    the information processing unit starts processing corresponding to the operation object when it recognizes, from the analysis result of the image analysis unit, that the user has pointed to the position in real space corresponding to the arrangement position of the operation object.
  3.  前記表示装置の画面を覆い接触操作を検知するタッチパッドをさらに備え、
     前記情報処理部は、表示中の出力画像における前記操作用オブジェクトの領域にユーザが触れたことを前記タッチパッドを介して認識したとき、当該操作用オブジェクトに対応する処理を開始することを特徴とする請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, further comprising a touch pad that covers the screen of the display device and detects contact operations,
    wherein the information processing unit starts processing corresponding to the operation object when it recognizes, via the touch pad, that the user has touched the region of the operation object in the output image being displayed.
  4.  前記情報処理部は、前記3次元座標系のうち実空間における前記マーカとの同一面に前記操作用オブジェクトを配置することを特徴とする請求項1から3のいずれかに記載の情報処理装置。 The information processing apparatus according to any one of claims 1 to 3, wherein the information processing unit arranges the operation object on the same plane as the marker in real space in the three-dimensional coordinate system.
  5.  前記情報処理部はさらに、前記операción用オブジェクトを用いた操作の対象である操作対象オブジェクトを配置することを特徴とする請求項1から4のいずれかに記載の情報処理装置。 The information processing apparatus according to any one of claims 1 to 4, wherein the information processing unit further arranges an operation target object that is the target of operations performed with the operation object.
  6.  前記情報処理部は、前記操作対象オブジェクトとして複数ページからなる画像群の一部のページを配置し、前記操作用オブジェクトとして、配置する画像のページを切り替えるためのボタンを配置することを特徴とする請求項5に記載の情報処理装置。 The information processing apparatus according to claim 5, wherein the information processing unit arranges, as the operation target object, some pages of an image group consisting of a plurality of pages, and arranges, as the operation object, a button for switching the page of the image to be arranged.
  7.  前記情報処理部は、前記仮想のオブジェクトとしてさらに、前記カメラとの相対的な位置関係に応じて挙動を変化させるキャラクタのオブジェクトを配置することを特徴とする請求項1から6のいずれかに記載の情報処理装置。 The information processing apparatus according to any one of claims 1 to 6, wherein the information processing unit further arranges, as the virtual object, a character object that changes its behavior according to its relative positional relationship with the camera.
  8.  前記情報処理装置の動きを検出するセンサをさらに備え、
     前記情報処理部は、前記操作用オブジェクトとして可動オブジェクトを配置し、前記情報処理装置の動きに係る情報を前記センサから取得して、当該動きを反映するように前記可動オブジェクトを前記3次元座標系において動かすことを特徴とする請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, further comprising a sensor that detects movement of the information processing apparatus,
    wherein the information processing unit arranges a movable object as the operation object, acquires information on the movement of the information processing apparatus from the sensor, and moves the movable object in the three-dimensional coordinate system so as to reflect the movement.
  9.  前記情報処理部は、前記3次元座標系における前記仮想のオブジェクトのサイズを所定の規則により決定し、前記情報処理装置の動きに対する前記可動オブジェクトの動きの感度を、当該オブジェクトのサイズに基づき決定することを特徴とする請求項8に記載の情報処理装置。 The information processing apparatus according to claim 8, wherein the information processing unit determines the size of the virtual object in the three-dimensional coordinate system according to a predetermined rule, and determines the sensitivity of the movement of the movable object to the movement of the information processing apparatus based on the size of the object.
  10.  前記情報処理部は、検出された前記マーカのサイズに基づき、前記仮想のオブジェクトのサイズを決定することを特徴とする請求項9に記載の情報処理装置。 The information processing apparatus according to claim 9, wherein the information processing unit determines the size of the virtual object based on the detected size of the marker.
  11.  前記情報処理部は、前記情報処理の結果が所定の条件を満たしたとき、前記仮想のオブジェクトのサイズを変化させることを特徴とする請求項9に記載の情報処理装置。 The information processing apparatus according to claim 9, wherein the information processing unit changes a size of the virtual object when a result of the information processing satisfies a predetermined condition.
  12.  情報処理装置が撮影画像を用いて行う情報処理方法であって、
     実空間を撮影中のカメラから当該撮影画像のデータを取得しメモリに格納するステップと、
     前記撮影画像を解析し、被写空間に存在するマーカを検出するステップと、
     前記カメラと被写空間との相対的な位置関係を特定し、被写空間に対応する3次元座標系と前記カメラの視野に対応するスクリーンを定義するステップと、
     検出されたマーカに対応する情報処理を実行するとともに、前記3次元座標系に、前記マーカに対応する仮想のオブジェクトを配置するステップと、
     前記スクリーンに前記仮想のオブジェクトを投影してなるオブジェクト画像を、前記メモリより読み出した前記撮影画像に重畳して出力画像を生成し、表示装置に出力するステップと、
     を含み、
     前記配置するステップは、前記仮想のオブジェクトとして、前記情報処理に対するユーザ操作手段である操作用オブジェクトを配置することを特徴とする情報処理方法。
    An information processing method performed by an information processing device using a captured image,
    Acquiring data of the photographed image from a camera that is photographing a real space and storing the data in a memory;
    Analyzing the captured image and detecting a marker present in the subject space;
    Identifying a relative positional relationship between the camera and the subject space, defining a three-dimensional coordinate system corresponding to the subject space and a screen corresponding to the field of view of the camera;
    Performing information processing corresponding to the detected marker and placing a virtual object corresponding to the marker in the three-dimensional coordinate system;
    An object image formed by projecting the virtual object on the screen is superimposed on the captured image read from the memory to generate an output image and output to a display device;
    wherein the arranging step arranges, as the virtual object, an operation object serving as user operation means for the information processing.
  13.  実空間を撮影中のカメラから当該撮影画像のデータを取得する機能と、
     前記撮影画像を解析し、被写空間に存在するマーカを検出する機能と、
     前記カメラと被写空間との相対的な位置関係を特定し、被写空間に対応する3次元座標系と前記カメラの視野に対応するスクリーンを定義する機能と、
     検出されたマーカに対応する情報処理を実行するとともに、前記3次元座標系に、前記マーカに対応する仮想のオブジェクトを配置する機能と、
     前記スクリーンに前記仮想のオブジェクトを投影してなるオブジェクト画像を前記撮影画像に重畳して出力画像を生成し、表示装置に出力する機能と、
     をコンピュータに実現させ、
     前記配置する機能は、前記仮想のオブジェクトとして、前記情報処理に対するユーザ操作手段である操作用オブジェクトを配置することを特徴とするコンピュータプログラム。
    A function of acquiring data of the photographed image from a camera that is photographing a real space;
    A function of analyzing the captured image and detecting a marker present in the subject space;
    A function for identifying a relative positional relationship between the camera and the subject space, and defining a three-dimensional coordinate system corresponding to the subject space and a screen corresponding to the field of view of the camera;
    A function of executing information processing corresponding to the detected marker and arranging a virtual object corresponding to the marker in the three-dimensional coordinate system;
    A function of generating an output image by superimposing an object image formed by projecting the virtual object on the screen on the captured image, and outputting the output image to a display device;
    A computer program causing a computer to realize the above functions, wherein the arranging function arranges, as the virtual object, an operation object serving as user operation means for the information processing.
  14.  実空間を撮影中のカメラから当該撮影画像のデータを取得する機能と、
     前記撮影画像を解析し、被写空間に存在するマーカを検出する機能と、
     前記カメラと被写空間との相対的な位置関係を特定し、被写空間に対応する3次元座標系と前記カメラの視野に対応するスクリーンを定義する機能と、
     検出されたマーカに対応する情報処理を実行するとともに、前記3次元座標系に、前記マーカに対応する仮想のオブジェクトを配置する機能と、
     前記スクリーンに前記仮想のオブジェクトを投影してなるオブジェクト画像を前記撮影画像に重畳して出力画像を生成し、表示装置に出力する機能と、
     をコンピュータに実現させ、
     前記配置する機能は、前記仮想のオブジェクトとして、前記情報処理に対するユーザ操作手段である操作用オブジェクトを配置するコンピュータプログラムを記録したことを特徴とするコンピュータにて読み取り可能な記録媒体。
    A function of acquiring data of the photographed image from a camera that is photographing a real space;
    A function of analyzing the captured image and detecting a marker present in the subject space;
    A function for identifying a relative positional relationship between the camera and the subject space, and defining a three-dimensional coordinate system corresponding to the subject space and a screen corresponding to the field of view of the camera;
    A function of executing information processing corresponding to the detected marker and arranging a virtual object corresponding to the marker in the three-dimensional coordinate system;
    A function of generating an output image by superimposing an object image formed by projecting the virtual object on the screen on the captured image, and outputting the output image to a display device;
    A computer-readable recording medium having recorded thereon a computer program that causes a computer to realize the above functions, wherein the arranging function arranges, as the virtual object, an operation object serving as user operation means for the information processing.
PCT/JP2014/002529 2013-08-20 2014-05-13 Information processing device and information processing method WO2015025442A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013170282A JP2015041126A (en) 2013-08-20 2013-08-20 Information processing device and information processing method
JP2013-170282 2013-08-20

Publications (1)

Publication Number Publication Date
WO2015025442A1 true WO2015025442A1 (en) 2015-02-26

Family

ID=52483242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/002529 WO2015025442A1 (en) 2013-08-20 2014-05-13 Information processing device and information processing method

Country Status (2)

Country Link
JP (1) JP2015041126A (en)
WO (1) WO2015025442A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6293020B2 (en) * 2014-08-27 2018-03-14 株式会社テクノクラフト Character cooperation application device
JP6527182B2 (en) * 2017-02-03 2019-06-05 Kddi株式会社 Terminal device, control method of terminal device, computer program
JP7111416B2 (en) * 2017-03-24 2022-08-02 日本電気株式会社 Mobile terminal, information processing system, control method, and program
CN111315456A (en) * 2017-09-11 2020-06-19 耐克创新有限合伙公司 Apparatus, system, and method for target searching and using geo-finders
US11509653B2 (en) 2017-09-12 2022-11-22 Nike, Inc. Multi-factor authentication and post-authentication processing system
JP2019200811A (en) * 2019-07-30 2019-11-21 富士通株式会社 Display control method, information processing apparatus, and display control program
JP2023091953A (en) * 2021-12-21 2023-07-03 株式会社セガ Program and information processing device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001092995A (en) * 1999-08-31 2001-04-06 Xerox Corp Extended reality display system and selective display method for processor generation image
JP2012145981A (en) * 2011-01-06 2012-08-02 Nintendo Co Ltd Image processing program, image processing apparatus, image processing system, and image processing method
WO2012098872A1 (en) * 2011-01-18 2012-07-26 京セラ株式会社 Mobile terminal and method for controlling mobile terminal
JP2013050881A (en) * 2011-08-31 2013-03-14 Nintendo Co Ltd Information processing program, information processing system, information processor, and information processing method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107045711A (en) * 2016-02-05 2017-08-15 株式会社万代南梦宫娱乐 Image generation system and image processing method
CN107045711B (en) * 2016-02-05 2023-08-11 株式会社万代南梦宫娱乐 Image generation system and image processing method
CN109661686A (en) * 2016-08-31 2019-04-19 卡西欧计算机株式会社 Object display system, user terminal apparatus, object displaying method and program
CN109661686B (en) * 2016-08-31 2023-05-05 卡西欧计算机株式会社 Object display system, user terminal device, object display method, and program
CN108303062A (en) * 2016-12-27 2018-07-20 株式会社和冠 Image information processing device and image information processing method
CN107320955A (en) * 2017-06-23 2017-11-07 武汉秀宝软件有限公司 A kind of AR venue interface alternation method and system based on multi-client
CN107320955B (en) * 2017-06-23 2021-01-29 武汉秀宝软件有限公司 AR venue interface interaction method and system based on multiple clients
WO2020202747A1 (en) * 2019-03-29 2020-10-08 ソニー株式会社 Information processing apparatus, information processing method, and recording medium
JP7400810B2 (en) 2019-03-29 2023-12-19 ソニーグループ株式会社 Information processing device, information processing method, and recording medium
US11380011B2 (en) * 2019-04-23 2022-07-05 Kreatar, Llc Marker-based positioning of simulated reality

Also Published As

Publication number Publication date
JP2015041126A (en) 2015-03-02

Similar Documents

Publication Publication Date Title
WO2015025442A1 (en) Information processing device and information processing method
CN110147231B (en) Combined special effect generation method and device and storage medium
JP6158406B2 (en) System for enabling video capture of interactive applications on mobile devices
WO2018077206A1 (en) Augmented reality scene generation method, device, system and equipment
WO2019153824A1 (en) Virtual object control method, device, computer apparatus, and storage medium
CN110276840B (en) Multi-virtual-role control method, device, equipment and storage medium
WO2019153750A1 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
JP5739671B2 (en) Information processing program, information processing apparatus, information processing system, and information processing method
JP5654430B2 (en) Use of a portable game device to record or change a game or application running in a home game system in real time
JP5436912B2 (en) PROGRAM, INFORMATION STORAGE MEDIUM, AND GAME DEVICE
JP5602618B2 (en) Image processing program, image processing apparatus, image processing system, and image processing method
JP5627973B2 (en) Program, apparatus, system and method for game processing
JP5256269B2 (en) Data generation apparatus, data generation apparatus control method, and program
CN110427110B (en) Live broadcast method and device and live broadcast server
US8947365B2 (en) Storage medium storing image processing program for implementing image processing according to input coordinate, and information processing device
CN111158469A (en) Visual angle switching method and device, terminal equipment and storage medium
CN108694073B (en) Control method, device and equipment of virtual scene and storage medium
JP4863435B2 (en) GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD
US10166477B2 (en) Image processing device, image processing method, and image processing program
US20120133582A1 (en) Storage medium having stored thereon information processing program, information processing apparatus, information processing system, and information processing method
CN112156464B (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
JP2012064010A (en) Information processor, information processing program, information processing system and information processing method
WO2015162991A1 (en) Image fusion system, information processing device, information terminal, and information processing method
JP2009251858A (en) Image conversion program and image conversion device
CN116440495A (en) Scene picture display method and device, terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14837765

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14837765

Country of ref document: EP

Kind code of ref document: A1