US8290214B2 - Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method for user input in dynamic gaming systems - Google Patents


Info

Publication number
US8290214B2
US12/068,819 US6881908A US8290214B2
Authority
US
United States
Prior art keywords
light
section
pixels
primary
identification information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/068,819
Other versions
US20080199047A1 (en)
Inventor
Chun Liang Kuo
Shing Lung Tai
Tatsuji Kumabayashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bandai Namco Entertainment Inc
Shang Hwang Industry Co Ltd
Zeroplus Technology Co Ltd
Original Assignee
Namco Bandai Games Inc
Shang Hwang Industry Co Ltd
Zeroplus Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Namco Bandai Games Inc, Shang Hwang Industry Co Ltd, and Zeroplus Technology Co Ltd
Assigned to SHANG HWANG INDUSTRY CO., LTD., NAMCO BANDAI GAMES INC., KUMABAYASHI, TATSUJI, and ZEROPLUS TECHNOLOGY CO., LTD. (assignment of assignors interest; see document for details). Assignors: KUO, CHUN LIANG; TAI, SHING LUNG; KUMABAYASHI, TATSUJI
Publication of US20080199047A1
Application granted
Publication of US8290214B2
Assigned to BANDAI NAMCO GAMES INC. (change of name; see document for details). Assignor: NAMCO BANDAI GAMES INC.
Assigned to BANDAI NAMCO ENTERTAINMENT INC. (change of name; see document for details). Assignor: BANDAI NAMCO GAMES INC.
Legal status: Active (expiration adjusted)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/219 Input arrangements for video game devices characterised by their sensors, purposes or types for aiming at specific areas on the display, e.g. light-guns
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/837 Shooting of targets
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 WEAPONS
    • F41G WEAPON SIGHTS; AIMING
    • F41G 3/00 Aiming or laying means
    • F41G 3/26 Teaching or practice apparatus for gun-aiming or gun-laying
    • F41G 3/2616 Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device
    • F41G 3/2622 Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device for simulating the firing of a gun or the trajectory of a projectile
    • F41G 3/2627 Cooperating with a motion picture projector
    • F41G 3/2633 Cooperating with a motion picture projector using a TV type screen, e.g. a CRT, displaying a simulated target
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 WEAPONS
    • F41G WEAPON SIGHTS; AIMING
    • F41G 3/00 Aiming or laying means
    • F41G 3/26 Teaching or practice apparatus for gun-aiming or gun-laying
    • F41G 3/2616 Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device
    • F41G 3/2622 Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device for simulating the firing of a gun or the trajectory of a projectile
    • F41G 3/2661 Teaching or practice apparatus for gun-aiming or gun-laying using a light emitting device for simulating the firing of a gun or the trajectory of a projectile in which the light beam is sent from the target to the weapon
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G01S 5/163 Determination of attitude
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 Detection arrangements using opto-electronic means
    • G06F 3/0325 Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1087 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F 2300/8076 Shooting

Definitions

  • the present invention relates to an indication position calculation system, an indicator for an indication position calculation system, a game system, and an indication position calculation method.
  • An indication position calculation system has been known which calculates an indication position of an indicator on an indication plane (e.g., monitor or screen), such as a shooting game system using a gun-type controller or a presentation system using a pointing device.
  • In such an indication position calculation system, an infrared light-emitting section or the like is disposed near the indication plane.
  • the light-emitting section is imaged using an image sensor (light-receiving sensor) provided on the end of the indicator, and the indication position of the indicator is calculated based on the position of the light-emitting section in the resulting image.
  • Since a related-art indication position calculation system performs position calculations evenly based on pixels whose light-reception information has been determined to be larger than a predetermined threshold value, pixels corresponding to the center portion of the light-emitting section cannot be distinguished from pixels corresponding to the peripheral portion of the light-emitting section, and the indication position of the indicator may not be accurately calculated.
  • When the indicator including the image sensor moves at high speed, pixels corresponding to the light-emitting section may be spread in the moving direction of the indicator in the image acquired by the image sensor. If the indication position of the indicator is calculated based on these pixels, an accurate calculation result may not be obtained.
  • an indication position calculation system calculating an indication position of an indicator, the indication position calculation system comprising:
  • an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image
  • a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • an indicator for an indication position calculation system comprising:
  • an imaging section which acquires an image of a light-emitting section and outputs light-reception information of pixels of the acquired image
  • a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • a game system calculating an indication position of an indicator, the game system comprising:
  • an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image
  • a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • an indication position calculation method comprising:
  • a determination section to determine whether or not each of the pixels is a primary effective pixel satisfying a first condition and determine whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information;
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • FIG. 1 is a diagram schematically showing an example of the appearance of a system according to one embodiment of the invention.
  • FIG. 2 is a diagram illustrating an example of an image acquired according to one embodiment of the invention.
  • FIG. 3 is a diagram illustrating an example of an indicator and an imaging section according to one embodiment of the invention.
  • FIG. 4A is a diagram illustrating an example of an installation state of a system according to one embodiment of the invention
  • FIG. 4B is a diagram illustrating an example of an image acquired according to one embodiment of the invention.
  • FIG. 5 is a functional block diagram showing an example of an indicator according to one embodiment of the invention.
  • FIG. 6 is a diagram illustrating an example of part of an image acquired according to one embodiment of the invention.
  • FIG. 7 is a flowchart showing an example of a process according to one embodiment of the invention.
  • FIG. 8 is a flowchart showing an example of a process according to one embodiment of the invention.
  • FIG. 9 is a flowchart showing an example of a process according to one embodiment of the invention.
  • FIGS. 10A and 10B are diagrams illustrating an example of the principle according to one embodiment of the invention.
  • FIGS. 11A and 11B are diagrams illustrating image examples acquired according to one embodiment of the invention.
  • FIG. 12 is a diagram schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
  • FIG. 13 is a diagram schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
  • FIG. 14 is a diagram schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
  • FIGS. 15A and 15B are diagrams schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
  • FIG. 16 is a functional block diagram showing an example of an indication position calculation system according to one embodiment of the invention.
  • the invention may provide an indication position calculation system, an indicator for an indication position calculation system, a game system, and an indication position calculation method capable of promptly and accurately calculating an indication position with a reduced processing load.
  • an indication position calculation system calculating an indication position of an indicator, the indication position calculation system comprising:
  • an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image
  • a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • an indicator for an indication position calculation system comprising:
  • an imaging section which acquires an image of a light-emitting section and outputs light-reception information of pixels of the acquired image
  • a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • an indication position calculation method comprising:
  • a determination section to determine whether or not each of the pixels is a primary effective pixel satisfying a first condition and determine whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information;
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then performs the position calculations based on the representative value while making the weight on the identification information of the primary effective pixel different from the weight on the identification information of the secondary effective pixel.
  • The term "identification information of a pixel" refers to information which specifies the position of the pixel in an image.
  • the identification information may be address data or a count value of a pixel.
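As a concrete illustration of the count-value form of identification information, a raster-scanning image sensor's count value can be mapped back to a pixel position. The function name and the sensor resolution below are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: mapping a pixel's identification information
# (here, a 0-based raster-scan count value) to its (x, y) position.
# The resolution is an assumed example, not a value from the patent.

IMAGE_WIDTH = 128
IMAGE_HEIGHT = 96

def count_to_position(count):
    """Convert a raster-scan count value to (x, y) image coordinates."""
    x = count % IMAGE_WIDTH   # column within the current scan line
    y = count // IMAGE_WIDTH  # number of completed scan lines
    return x, y

# Count value 130 lies at column 2 of row 1.
print(count_to_position(130))  # -> (2, 1)
```

Address data would play the same role: either form uniquely identifies where in the image an effective pixel was found.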
  • the imaging section provided in the indicator acquires an image in a given area including the light-emitting section.
  • the determination section determines whether or not each pixel satisfies the first condition based on the light-reception information relating to each pixel, and the calculation section calculates the indication position of the indicator based on the positions of the effective pixels in the image using the effective pixels which satisfy the first condition as pixels corresponding to the light-emitting section.
  • the calculation section calculates the representative value of the effective pixels based on the identification information relating to the effective pixels while changing weighting on the identification information relating to the effective pixels depending on whether or not the effective pixels satisfy the second condition.
  • the indication position can be more accurately calculated by increasing the degree of effects of bright pixels having relatively large light-reception information as pixels corresponding to a portion near the center of the light-emitting section and reducing the degree of effects of dark pixels having relatively small light-reception information as pixels corresponding to the peripheral portion of the light-emitting section when calculating the representative value of the effective pixels, for example.
  • the indication position can be accurately calculated by reducing the degree of effects of pixels corresponding to an afterimage when calculating the representative value.
  • the determination section may determine that each of the pixels is the primary effective pixel satisfying the first condition when a value of the light-reception information is larger than a first threshold value, and determine that the primary effective pixel is the secondary effective pixel satisfying the second condition when the value of the light-reception information is also larger than a second threshold value which is larger than the first threshold value;
  • the calculation section may calculate the representative value of the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels, and then perform the position calculations based on the representative value while making the weight on the identification information of the secondary effective pixels larger than the weight on the identification information of the primary effective pixels.
  • the indication position can be more accurately calculated by increasing the degree of effects of bright pixels having relatively large light-reception information as pixels corresponding to a portion near the center of the light-emitting section and reducing the degree of effects of dark pixels having relatively small light-reception information as pixels corresponding to the peripheral portion of the light-emitting section when calculating the representative value of the effective pixels.
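The two-threshold, weighted-centroid idea described above can be sketched as follows. The threshold values and weights are illustrative assumptions; the patent only requires that the second threshold exceed the first and that secondary (brighter) pixels be weighted more heavily:

```python
# Hedged sketch of the weighted representative-value calculation.
# Thresholds and weights are assumed example values.

FIRST_THRESHOLD = 64    # primary effective pixel: brightness above this
SECOND_THRESHOLD = 192  # secondary effective pixel: also above this
PRIMARY_WEIGHT = 1.0
SECONDARY_WEIGHT = 2.0  # brighter pixels (near the LED center) count more

def weighted_centroid(pixels):
    """pixels: iterable of ((x, y), brightness) tuples.

    Returns the weighted representative position of the effective
    pixels, or None if no pixel exceeds the first threshold.
    """
    sum_x = sum_y = total_w = 0.0
    for (x, y), brightness in pixels:
        if brightness <= FIRST_THRESHOLD:
            continue  # not a primary effective pixel; ignored entirely
        w = SECONDARY_WEIGHT if brightness > SECOND_THRESHOLD else PRIMARY_WEIGHT
        sum_x += w * x
        sum_y += w * y
        total_w += w
    if total_w == 0:
        return None
    return sum_x / total_w, sum_y / total_w
```

Because dim primary-only pixels (peripheral portion, or smear from fast motion) receive the smaller weight, the representative value is pulled toward the bright center of the light-emitting section.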
  • the imaging section may successively output the light-reception information of the pixels
  • the pixel whose light-reception information is output first may be disposed on a lower side of the image, and the pixel whose light-reception information is output last may be disposed on an upper side of the image, when the indicator is held in a reference position;
  • the calculation section may perform the position calculations based on the identification information of a first effective pixel, which is the primary effective pixel that has first been determined to satisfy the first condition, in order to obtain the indication position of the indicator when the light received by the primary and secondary effective pixels is included in light emitted from the light-emitting section.
  • The term "reference position" refers to a predetermined position (or orientation) of the indicator, i.e., a position at which the starting pixel of the imaging section provided in the indicator is positioned on the lower side and the end pixel is positioned on the upper side.
  • the reference position may be appropriately determined depending on the specification of the indication position calculation system and the specification of the indicator.
  • the reference position of the indicator may be an average position when using the indicator in a normal state.
  • The term "normal state" used herein refers to a state determined based on the shape of the indicator or a state specified in a manual or the like, for example.
  • The indication position cannot be accurately calculated if the position calculations treat pixels corresponding to noise as pixels corresponding to the light-emitting section, i.e., if it is not determined whether a pixel satisfying a given condition corresponds to the light-emitting section or to noise.
  • a noise source generally exists at a position above the light-emitting section.
  • a window through which external light enters or an incandescent lamp may serve as an infrared light source (i.e., noise source).
  • a window or a lamp generally exists at a position higher than the light-emitting section. Therefore, pixels corresponding to noise generally occur in an area higher than the light-emitting section in the image acquired by the imaging section. On the other hand, pixels corresponding to noise rarely occur in an area lower than the light-emitting section.
  • the imaging section is provided in the indicator so that the starting pixel is disposed on the lower side and the end pixel is disposed on the upper side when the indicator is held in the reference position.
  • the imaging section successively outputs the light-reception information relating to each pixel from the starting pixel which corresponds to the lower side of the image acquired by the imaging section.
  • When the indicator is held in the reference position, the imaging section therefore successively outputs the light-reception information relating to each pixel starting from the pixels positioned on the lower side, in which pixels corresponding to noise rarely occur.
  • Accordingly, the indication position can be accurately calculated using the first effective pixel as a pixel corresponding to the light-emitting section, without determining whether that pixel corresponds to the light-emitting section or to noise.
  • the indication position can be promptly and accurately calculated with a reduced processing load.
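The bottom-up scan described above can be sketched as a simple search over the sensor's output stream. The function name and threshold are illustrative assumptions; the key point is that, because noise sources (windows, lamps) tend to lie above the light-emitting section, the first pixel to exceed the threshold in bottom-up scan order can be trusted without a separate noise check:

```python
# Hedged sketch: pick the first primary effective pixel from the
# sensor's output stream. The threshold is an assumed example value.

FIRST_THRESHOLD = 64

def first_effective_pixel(scanned_pixels):
    """scanned_pixels: (identification_info, brightness) pairs in sensor
    output order, i.e. starting from the lower side of the image when
    the indicator is held in its reference position.

    Returns the identification information of the first primary
    effective pixel, or None if no pixel exceeds the threshold.
    """
    for ident, brightness in scanned_pixels:
        if brightness > FIRST_THRESHOLD:
            return ident  # trusted as the light-emitting section
    return None
```

Since the search stops at the first hit, no per-pixel noise classification is needed, which is what keeps the processing load low.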
  • the calculation section may set a predetermined area of the image including the first effective pixel as a determination area, and perform the position calculations based on the identification information of the primary and secondary effective pixels included in the determination area.
  • a pixel included in the predetermined determination area can be considered to be a pixel corresponding to the light-emitting section. Therefore, the indication position can be more accurately calculated using the identification information relating to the pixels included in the predetermined determination area.
  • Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may comprise:
  • the calculation section may set a predetermined area of the image including the first effective pixel as a first determination area, set another predetermined area of the image including a second effective pixel, which has first been determined to satisfy the first condition among the pixels outside the first determination area, as a second determination area, and then perform the position calculations based on the identification information of the primary and secondary effective pixels within the first determination area and the second determination area.
  • When the light-emitting section includes a first light-emitting section and a second light-emitting section, pixels included in the first determination area can be considered to be pixels corresponding to the first light-emitting section, and pixels included in the second determination area can be considered to be pixels corresponding to the second light-emitting section.
  • When the light-emitting section further includes a third light-emitting section and a fourth light-emitting section, a third determination area and a fourth determination area may be set. Even if the indication position of the indicator is calculated in a state in which a plurality of light-emitting sections are provided, the indication position can be more accurately calculated by using the identification information relating to the pixels corresponding to each light-emitting section.
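The determination-area grouping described above can be sketched as follows. The window size and function names are illustrative assumptions: an effective pixel falling inside an existing determination area joins it, and an effective pixel found outside every existing area seeds a new one (a second, third, or fourth area, one per light-emitting section):

```python
# Hedged sketch of grouping effective pixels into determination areas.
# The half-size of each area and the threshold are assumed values.

AREA_HALF_SIZE = 8
FIRST_THRESHOLD = 64

def in_area(center, pos):
    """True if pos lies within the square determination area at center."""
    cx, cy = center
    x, y = pos
    return abs(x - cx) <= AREA_HALF_SIZE and abs(y - cy) <= AREA_HALF_SIZE

def group_effective_pixels(pixels):
    """pixels: ((x, y), brightness) pairs in scan order.

    Returns a list of determination areas, each a (center, members)
    pair, where center is the effective pixel that seeded the area.
    """
    areas = []
    for pos, brightness in pixels:
        if brightness <= FIRST_THRESHOLD:
            continue  # not an effective pixel
        for center, members in areas:
            if in_area(center, pos):
                members.append(pos)  # belongs to an existing area
                break
        else:
            areas.append((pos, [pos]))  # seeds a new determination area
    return areas
```

Each resulting area's members would then feed a per-area representative-value calculation, one representative point per light-emitting section.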
  • Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may further comprise:
  • a shielding section provided in the light-emitting section, the shielding section shielding part of the light from the light-emitting section that is emitted downward at a predetermined angle or less from the horizontal.
  • the shielding section may be disposed at a position enabling the shielding section to shield part of the light from the light-emitting section emitted downward from the horizontal so that no reflected light from lower space enters the imaging section when the light-emitting section and the imaging section have a given reference positional relationship.
  • reflected light can be prevented from entering the imaging section from the light-emitting section when the light-emitting section and the imaging section have the reference positional relationship. Accordingly, reflected light can be reliably prevented from entering the imaging section from the light-emitting section when the indication position calculation system is in a basic state depending on the application.
  • the light-emitting section may be directed in a direction that prevents light from the light-emitting section from being emitted downward from the horizontal, so that no reflected light from the lower space enters the imaging section when the light-emitting section and the imaging section have a given reference positional relationship.
  • reflected light can be prevented from entering the imaging section from the light-emitting section when the light-emitting section and the imaging section have the reference positional relationship by adjusting the direction of the light-emitting section without providing the shielding section. Accordingly, reflected light can be reliably prevented from entering the imaging section from the light-emitting section when the indication position calculation system is in a basic state depending on the application, by adjusting the direction of the light-emitting section.
  • Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may further comprise:
  • a filter which is provided in the indicator and through which light in the same wavelength band as light from the light-emitting section is allowed to pass toward the imaging section.
  • Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may comprise:
  • the calculation section may perform the position calculations based on a representative value corresponding to each of the light-emitting sections.
  • a game system calculating an indication position of an indicator, the game system comprising:
  • an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image
  • a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information
  • a calculation section which calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while weighting the identification information of the primary effective pixels differently from the identification information of the secondary effective pixels in the calculation.
  • a game system which can calculate the indication position more accurately by, when calculating the representative value of the effective pixels, increasing the degree of effect of bright pixels having relatively large light-reception information as pixels corresponding to a portion near the center of the light-emitting section, and reducing the degree of effect of dark pixels having relatively small light-reception information as pixels corresponding to the peripheral portion of the light-emitting section, for example.
  • FIG. 1 is a diagram schematically showing a game system 10 to which an indication position calculation system according to one embodiment of the invention is applied.
  • the game system 10 includes a display section 12 which displays a game image such as a target object TO on a display screen 11 , a light-emitting unit 15 which is provided at the top of the display section 12 and includes two light-emitting sections 13 and 14 , each having an infrared light source such as an infrared LED, a controller 16 (indicator, shooting device, or pointing device) which is held by a player P so that its position and direction can be arbitrarily changed and is used to indicate an arbitrary position on the display screen 11 , and a game device 17 which performs a game process and the like.
  • Each of the light-emitting sections 13 and 14 has the infrared light source on the front surface.
  • the light-emitting sections 13 and 14 are provided on the top surface of the display section 12 at a predetermined interval so that the front surfaces face in the same direction as the display screen 11 (i.e., direction toward the player).
  • Each of the light-emitting sections 13 and 14 emits infrared light forward from the light source.
  • the controller 16 is formed to imitate the shape of a gun, and includes a barrel GB which is directed toward the display screen 11 , a grip GG which extends from the barrel GB and is held by the player P with the hand, and a trigger GT which can be operated using the forefinger of the hand holding the grip GG
  • An image sensor 18 (light-receiving section or imaging section) such as a CMOS sensor is provided on the end of the barrel GB.
  • the image sensor 18 receives infrared light (light in the same wavelength band as light emitted from the light-emitting sections 13 and 14 ) which enters the image sensor 18 along the direction in which the end of the controller 16 (barrel GB) is directed, and acquires (images) the infrared light.
  • a filter FI which allows only light in a wavelength band corresponding to infrared light (i.e., light having the same wavelength as light emitted from the light-emitting sections 13 and 14 ) to pass through is provided on the front side of the image sensor 18 (i.e., the end of the controller 16 at a position forward from the image sensor 18 ) so that light in a predetermined wavelength band enters the image sensor 18 .
  • An image sensor having a light reception sensitivity in a wavelength band within a predetermined range including infrared light (light having a predetermined wavelength) may be used as the image sensor 18 without providing the filter FI.
  • the game system 10 calculates the positional relationship between the light-emitting sections 13 and 14 and the controller 16 based on position information relating to the light-emitting sections 13 and 14 in an image acquired by the image sensor 18 and reference position information set in advance, and calculates information relating to the indication position of the controller 16 on the display screen 11 .
  • the game system 10 determines whether or not the indication position of the controller 16 when the trigger GT of the controller 16 has been pulled coincides with the position of the target object TO displayed on the display screen 11 , and performs a game process such as an image display control process or a score calculation process.
  • FIG. 2 is a diagram showing an example of an image PC 1 acquired by the image sensor 18 when the controller 16 is directed toward the display screen 11 .
  • the image sensor 18 according to this embodiment includes 22,050 (175×126) light-receiving elements (imaging elements) arranged in a matrix on a rectangular surface, and acquires the image PC 1 .
  • One pixel of the image PC 1 corresponds to one light-receiving element.
  • the image PC 1 is updated every 1/54th of a second depending on the position and the direction of the controller 16 .
  • the game system 10 calculates the information relating to the indication position of the controller 16 on the display screen 11 using position information relating to two infrared light source areas IrA 1 and IrA 2 (i.e., areas obtained by imaging infrared light from the light-emitting sections 13 and 14 ) in the image PC 1 .
  • an origin O (i.e., the center of the image PC 1 ) is set.
  • information relating to the indication position of the controller 16 on the display screen 11 is calculated based on the positional relationship among the origin O, the infrared light source areas IrA 1 and IrA 2 in the image PC 1 , and a display screen area DpA which is an area corresponding to the display screen 11 in the image PC 1 .
  • the infrared light source areas IrA 1 and IrA 2 are formed above the center of the image PC 1 to some extent in a state in which a straight line I which connects the infrared light source areas IrA 1 and IrA 2 is rotated clockwise by omega degrees with respect to a reference line L (i.e., X axis of the image sensor 18 ) of the image PC 1 .
  • the origin O corresponds to a predetermined position on the lower right of the display screen area DpA so that the coordinates of the indication position of the controller 16 on the display screen 11 can be calculated.
  • the rotation angle of the controller 16 around the indication direction axis with respect to the display screen 11 can be calculated based on the rotation angle omega of the straight line I which connects the infrared light source areas IrA 1 and IrA 2 with respect to the reference line L.
  • the distance between the controller 16 and the display screen 11 in the example shown in FIG. 2 can be calculated from the ratio of a reference distance D between the infrared light source areas IrA 1 and IrA 2 when the controller 16 is located at a predetermined distance from the display screen 11 to the distance d between the infrared light source areas IrA 1 and IrA 2 in the example shown in FIG. 2 , the reference distance D being set in advance.
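The rotation-angle and distance relationships above can be sketched as follows. This is an illustrative Python reconstruction, not the patent's implementation; the function and parameter names are mine, and it assumes the apparent separation of the two source areas shrinks in inverse proportion to the range.

```python
import math

def controller_pose(p1, p2, ref_distance, ref_range):
    """Estimate the controller's roll angle and its distance to the screen
    from the two imaged infrared source areas.

    p1, p2       -- (x, y) centers of the two source areas in the image
    ref_distance -- separation D (in pixels) of the two areas when the
                    controller is at the calibrated reference range
    ref_range    -- that calibrated reference range
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Roll: angle of the straight line connecting the two areas
    # with respect to the reference line L (the sensor's X axis).
    roll_deg = math.degrees(math.atan2(dy, dx))
    # Apparent separation d shrinks in inverse proportion to range,
    # so range = ref_range * D / d.
    d = math.hypot(dx, dy)
    est_range = ref_range * ref_distance / d
    return roll_deg, est_range
```

For example, if the two areas appear half as far apart as at the calibrated range, the controller is estimated to be twice as far away.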
  • information relating to the indication position of the controller 16 on the display screen 11 and the like can be calculated even if the player P moves his hand while holding the controller 16 as shown in FIG. 1 or changes the position and the direction of the controller 16 due to the movement of the player P.
  • the game device 17 and the controller 16 are connected via a cable. Note that information may be transmitted and received between the game device 17 and the controller 16 via wireless communication.
  • the light-emitting sections 13 and 14 need not necessarily be provided at the top of the display section 12 .
  • the light-emitting sections 13 and 14 may be provided at an arbitrary position (e.g., bottom or side) of the display section 12 . Specifically, the light-emitting sections 13 and 14 may be provided to have a given positional relationship with the display section 12 within a range in which the image sensor 18 can receive light when the controller 16 is directed toward the display screen 11 .
  • FIG. 3 provides a side view of the controller 16 including the image sensor 18 and an enlarged front view of the image sensor 18 .
  • FIG. 3 is a view illustrative of the installation state of the image sensor 18 in the controller 16 .
  • the image sensor 18 according to this embodiment is provided in the controller 16 so that its light-receiving surface SF perpendicularly intersects an indication direction BD of the barrel GB.
  • the image sensor 18 is provided in the controller 16 so that the X axis (reference line L) of the image sensor 18 perpendicularly intersects an installation direction GD of the grip GG.
  • the indication direction BD and the X axis (reference line L) of the image sensor 18 become horizontal.
  • a state in which the installation direction GD of the grip GG faces perpendicularly downward is referred to as a reference position of the controller 16 .
  • the term “reference position of the controller 16” refers to an average position (direction or use state) of the controller 16 when the player P holds the controller 16 as if to hold an actual gun.
  • each of the 22,050 (175×126) light-receiving elements arranged on the rectangular surface SF receives infrared light which enters along the direction in which the end of the controller 16 is directed, and successively outputs light-reception information relating to each pixel.
  • the controller 16 determines whether or not the light-reception information relating to each pixel is larger than a threshold value set corresponding to the quantity of light emitted from the light-emitting sections 13 and 14 (i.e., whether or not each pixel is an effective pixel which satisfies a given condition) to determine whether or not each pixel is a pixel corresponding to the infrared light.
  • a pixel corresponding to infrared light may be a pixel corresponding to an infrared light source other than the light-emitting sections 13 and 14 (i.e., a light source which emits light in the same wavelength band as light emitted from the light-emitting sections 13 and 14 ) instead of a pixel corresponding to the light-emitting sections 13 and 14 .
  • FIG. 4A is a diagram showing a state around the light-emitting sections 13 and 14 viewed from the front side of the display screen 11 .
  • an infrared light source other than the light-emitting sections 13 and 14 may exist.
  • a window WD through which external light enters, an incandescent lamp WH, and the like serve as infrared light sources.
  • when the image sensor 18 receives infrared light which enters through the window WD, light emitted from the incandescent lamp WH, and the like, the received infrared light serves as noise, so that an accurate indication position may not be calculated.
  • FIG. 4B shows an image PC 2 acquired by the image sensor 18 in a state shown in FIG. 4A when the controller 16 is held in the reference position toward the display screen 11 .
  • the window WD, the incandescent lamp WH, and the like are generally positioned above the light-emitting sections 13 and 14 disposed at the top of the display screen 11 in a real space.
  • pixels corresponding to a noise area NA 1 due to the window WD and a noise area NA 2 due to the incandescent lamp WH generally occur in an area UA obtained by imaging a portion above the light-emitting sections 13 and 14 .
  • a related-art indication position calculation system successively outputs the light-reception information relating to each pixel while scanning each row from the upper row to the lower row, starting from the uppermost-leftmost pixel (starting pixel) toward the lowermost-rightmost pixel (end pixel) in the image PC 2 shown in FIG. 4B , and determines whether or not each pixel is a pixel corresponding to the light source.
  • the image sensor 18 is provided in the controller 16 so that the starting pixel is disposed on the upper side and the end pixel is disposed on the lower side when the controller 16 is held in the reference position.
  • pixels corresponding to the light source include not only pixels corresponding to the light-emitting sections 13 and 14 , but also pixels corresponding to the noise areas NA 1 and NA 2 . Therefore, a related-art indication position calculation system must distinguish pixels corresponding to the light-emitting sections 13 and 14 from pixels corresponding to noise.
  • the image sensor 18 is provided in the controller 16 so that a starting pixel SP is disposed on the lower side and an end pixel EP is disposed on the upper side when the controller 16 is held in the reference position, taking such a situation into consideration.
  • the image sensor 18 horizontally scans pixels in the leftward direction from the lowermost-rightmost pixel in the acquired image PC 2 as the starting pixel SP. When the pixels in the lowermost row have been completely scanned, the image sensor 18 horizontally scans pixels in the next (upper) row in the leftward direction from the rightmost pixel.
  • the image sensor 18 successively outputs the light-reception information relating to each pixel while scanning pixels in each row from the lower row to the upper row until the uppermost-leftmost end pixel EP is reached. Specifically, when the controller 16 is held in the reference position, the image sensor 18 successively outputs the light-reception information relating to each pixel from pixels on the lower side in which pixels corresponding to noise rarely occur.
  • a first effective pixel FP of which the light-reception information has been output from the image sensor 18 and which has been determined to satisfy a given condition is a pixel corresponding to the light-emitting section 13 or 14 when the controller 16 is held in the reference position. Therefore, the indication position can be accurately calculated using the first effective pixel FP as a pixel corresponding to the light-emitting section 13 or 14 without determining whether the pixel corresponding to the light source is a pixel corresponding to the light-emitting section 13 or 14 or a pixel corresponding to noise.
  • the indication position can be promptly and accurately calculated with a reduced processing load.
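The bottom-up, right-to-left scan order described above can be sketched as follows. This is an illustrative Python fragment with hypothetical names; in the embodiment the scan is performed by the image sensor hardware itself.

```python
def first_effective_pixel(image, threshold):
    """Scan from the lowermost-rightmost pixel leftward within each row,
    and from the bottom row up to the top row, returning the coordinates
    of the first pixel whose value meets the threshold.

    image -- list of rows (row 0 at the top), each a list of pixel values
    """
    for y in range(len(image) - 1, -1, -1):          # bottom row first
        for x in range(len(image[y]) - 1, -1, -1):   # rightmost pixel first
            if image[y][x] >= threshold:
                return (x, y)
    return None                                      # no effective pixel found
```

Because noise sources such as the window WD and the incandescent lamp WH tend to be imaged above the light-emitting sections, the first hit of this scan is likely to correspond to the light-emitting section 13 or 14 when the controller is held in the reference position.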
  • FIG. 5 is a functional block diagram showing an indication position detection section provided in the controller 16 according to this embodiment.
  • the indication position detection section includes the image sensor 18 , an ASIC 200 , an MCU 300 , and a USB 400 .
  • the image sensor 18 is initialized when the image sensor 18 has received an initialization signal from the MCU 300 .
  • the image sensor 18 acquires the light-reception information in pixel units by receiving infrared light which enters along the direction in which the end of the controller 16 is directed, thereby acquiring an image along that direction.
  • the light-reception information (quantity of received light) of each pixel is acquired using a two-digit hexadecimal number in the range from 00 to FF.
  • a starting pixel is set on one end of the acquired image, and an end pixel is set on the other end.
  • the image sensor 18 successively outputs the light-reception information relating to each pixel from the starting pixel to the end pixel to the ASIC 200 .
  • the image sensor 18 outputs a pixel clock signal and a vertical synchronization signal to a control section 220 of the ASIC 200 , and outputs the pixel clock signal to a pixel counter 230 . This allows the control section 220 and the pixel counter 230 to synchronize with the output timing of the light-reception information relating to each pixel from the image sensor 18 .
  • the ASIC 200 includes a determination section 210 , the control section 220 , and the pixel counter 230 .
  • the determination section 210 determines whether or not each pixel satisfies a given condition based on the light-reception information relating to each pixel successively output from the image sensor 18 . Specifically, the determination section 210 determines that a first condition is satisfied when the light-reception information relating to a pixel is 80 (two-digit hexadecimal number) or more, and determines a pixel of which the light-reception information is 80 or more to be a pixel corresponding to the infrared light source.
  • the determination section 210 determines whether or not each pixel satisfies the first condition, and determines whether or not an effective pixel which satisfies the first condition satisfies a second condition. Specifically, a pixel group corresponding to the light source is bright at the center and becomes darker toward the periphery due to a decrease in luminance. In this embodiment, the determination section 210 determines that the second condition is satisfied when the light-reception information relating to an effective pixel is F0 or more.
  • the determination section 210 determines a pixel of which the light-reception information is 80 to EF to be a dark pixel corresponding to the peripheral portion of the infrared light source, and determines a pixel of which the light-reception information is F0 to FF to be a bright pixel corresponding to the center portion of the infrared light source.
  • the determination section 210 outputs an enable signal to the control section 220 while determining whether or not the light-reception information relating to each pixel successively output from the image sensor 18 satisfies the first condition.
  • the determination section 210 outputs a bright signal, which indicates that the pixel is bright, to a pixel FIFO 340 of the MCU 300 .
  • the control section 220 allows a 15-bit count value output from the pixel counter 230 to be written into the pixel FIFO 340 when the control section 220 receives the enable signal from the determination section 210 .
  • the pixel counter 230 is reset upon reception, from the control section 220 , of the vertical synchronization signal output from the image sensor 18 , and outputs the 15-bit count value from the starting pixel (0) to the end pixel (22,050) of the image sensor 18 in synchronization with the output of the light-reception information relating to each pixel.
  • the count value of an effective pixel of which the light-reception information has been determined to satisfy the first condition is written into the pixel FIFO 340 .
  • the pixel counter 230 writes 16-bit data into the pixel FIFO 340 by adding 0 as the most significant bit of the 15-bit count value when the bright signal is received from the determination section 210 and adding 1 as the most significant bit of the 15-bit count value when the bright signal is not received from the determination section 210 .
  • the count value of an effective pixel of which the light-reception information has been determined to satisfy the first condition and the data which indicates whether or not the effective pixel satisfies the second condition (16 bits in total) are stored in the pixel FIFO 340 .
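As an illustration, the 16-bit word stored in the pixel FIFO 340 could be formed as follows. This is a hypothetical Python sketch using the threshold values 80 and F0 and the attribute-bit convention described above; the names are mine.

```python
FIRST_THRESHOLD = 0x80   # first condition: effective pixel
SECOND_THRESHOLD = 0xF0  # second condition: bright pixel

def pack_effective_pixel(count_value, luminance):
    """Return the 16-bit FIFO word for an effective pixel, or None if the
    pixel fails the first condition.

    The most significant bit is the attribute bit (0 for a bright pixel
    that also satisfies the second condition, 1 for a dark pixel); the
    lower 15 bits hold the pixel's count value.
    """
    if luminance < FIRST_THRESHOLD:
        return None                  # ineffective pixel: nothing is stored
    attribute = 0 if luminance >= SECOND_THRESHOLD else 1
    return (attribute << 15) | (count_value & 0x7FFF)
```

A bright pixel thus stores its bare count value, while a dark pixel stores the count value with the top bit set.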
  • the MCU 300 includes the pixel FIFO 340 , a light source position calculation section 350 , and an indication position calculation section 360 .
  • the pixel FIFO 340 stores 128 pieces of 16-bit data relating to the effective pixel based on the data output from the ASIC 200 .
  • the pixel FIFO 340 successively outputs the 16-bit data relating to each effective pixel to the light source position calculation section 350 in a first-in first-out manner.
  • the light source position calculation section 350 sets a first determination area in a predetermined range including a first effective pixel based on first data stored in the pixel FIFO 340 after the image sensor 18 has been initialized (i.e., the count value of the first effective pixel which has been determined to satisfy the first condition).
  • the light source position calculation section 350 sets a second determination area in a predetermined range including a second effective pixel which is a pixel in the area other than the first determination area and is the first pixel which has been determined to satisfy the first condition.
  • the light source position calculation section 350 performs position calculations based on the count values (identification information) of effective pixels which satisfy the first condition and are included in the first determination area and the count values of effective pixels which satisfy the first condition and are included in the second determination area.
  • FIG. 6 shows part of an enlarged image acquired by the image sensor 18 .
  • FIG. 6 is a view illustrative of calculations of a representative value of the determination area.
  • a white square indicates an effective pixel which satisfies the first condition and the second condition
  • a gray square indicates an effective pixel which satisfies the first condition but does not satisfy the second condition
  • a black area indicates ineffective pixels which do not satisfy the first condition and the second condition.
  • the image sensor 18 horizontally and sequentially scans pixels from the lower right pixel and outputs the light-reception information relating to each pixel.
  • when the determination section 210 determines whether or not each pixel satisfies the first condition, it determines that a pixel P 1 is the first effective pixel which satisfies the first condition.
  • when the 16-bit data relating to the first effective pixel P 1 is input to the light source position calculation section 350 from the pixel FIFO 340 , the light source position calculation section 350 sets a first determination area JA 1 in a circular region in a predetermined range including the first effective pixel, as shown in FIG. 6 .
  • the light source position calculation section 350 sets the first determination area JA 1 in a range in which it is estimated that other effective pixels corresponding to the light-emitting section 13 or 14 exist when the first effective pixel is a pixel corresponding to the light-emitting section 13 or 14 .
  • the light source position calculation section 350 determines whether or not each effective pixel is included in the first determination area JA 1 based on the count value of each effective pixel sequentially input from the pixel FIFO 340 .
  • the light source position calculation section 350 calculates a representative value (center-of-gravity coordinates) of the first determination area based on the count value of each effective pixel included in the first determination area JA 1 .
  • the light source position calculation section 350 transforms the count value of each effective pixel into a coordinate value of each pixel in an image PC acquired by the image sensor 18 , and then calculates the center-of-gravity coordinates of the first determination area JA 1 .
  • the light source position calculation section 350 calculates a remainder when dividing the count value (one of 0 to 22,050) by 175 which is the number of pixels on the X axis of the image sensor 18 to calculate the X coordinate of the pixel in the image from the count value, and calculates a quotient when dividing the count value by 175 to calculate the Y coordinate of the pixel in the image from the count value.
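The count-value-to-coordinate conversion is a plain division with remainder; for example (illustrative Python, assuming the 175-pixel row width stated above):

```python
SENSOR_WIDTH = 175  # pixels per row of the image sensor

def count_to_xy(count_value):
    """Convert a pixel's count value into image coordinates: the remainder
    of division by the row width is the X coordinate, and the quotient is
    the Y coordinate."""
    y, x = divmod(count_value, SENSOR_WIDTH)
    return x, y
```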
  • the light source position calculation section 350 divides the sum of the X coordinate components of the effective pixels included in the first determination area JA 1 by the number of the effective pixels included in the first determination area JA 1 to calculate the X coordinate component of the center-of-gravity coordinates of the first determination area JA 1 . Likewise, the light source position calculation section 350 divides the sum of the Y coordinate components of the effective pixels included in the first determination area JA 1 by the number of the effective pixels included in the first determination area JA 1 to calculate the Y coordinate component of the center-of-gravity coordinates of the first determination area JA 1 .
  • the light source position calculation section 350 calculates the center-of-gravity coordinates of the first determination area JA 1 based on the coordinate component value of each effective pixel while changing weighting on the coordinate component value of each effective pixel between an effective pixel which is included in the first determination area JA 1 and satisfies the second condition (bright pixel) and an effective pixel which is included in the first determination area JA 1 and does not satisfy the second condition (dark pixel).
  • the light source position calculation section 350 performs the above calculations while doubling the coordinate component value and the number of bright pixels.
  • pixels P 7 and P 13 , of which the X coordinate component value is 45, are bright pixels. Therefore, the coordinate component value 45 of each of the pixels P 7 and P 13 is doubled (45×2) and added to the sum of the coordinate component values.
  • The count of these bright pixels (2) is also doubled and added to the number of pixels. Since a pixel P 19 is a dark pixel, the coordinate component value 45 of the pixel P 19 is directly added to the sum of the coordinate component values, and its count (1) is directly added to the number of pixels.
  • the pixels having other X coordinate component values are processed in the same manner to obtain the X component X1 of the center-of-gravity coordinates of the first determination area JA 1 in the example shown in FIG. 6 .
  • the Y component Y1 of the center-of-gravity coordinates of the first determination area JA 1 is calculated in the same manner.
  • the center-of-gravity coordinates (X1, Y1) of the first determination area JA 1 in the example shown in FIG. 6 are calculated to be (47.547, 84.962).
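The weighted center-of-gravity calculation can be sketched as follows (illustrative Python with names of my choosing; bright pixels contribute their coordinate values and their count twice, dark pixels once, as described above):

```python
def weighted_centroid(effective_pixels):
    """Center-of-gravity of a determination area, with bright pixels given
    double weight as described in the text.

    effective_pixels -- iterable of (x, y, is_bright) tuples, one per
                        effective pixel in the determination area
    """
    sum_x = sum_y = total = 0
    for x, y, is_bright in effective_pixels:
        w = 2 if is_bright else 1   # double both the coordinates and the count
        sum_x += w * x
        sum_y += w * y
        total += w
    return sum_x / total, sum_y / total
```

The doubling pulls the centroid toward the bright center of the imaged light source, which is why the representative value tracks the source center more accurately than an unweighted average.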
  • the light source position calculation section 350 sets a second determination area JA 2 in a predetermined range including a second effective pixel which is included in an area other than the first determination area JA 1 and is a first pixel which has been determined to satisfy the first condition, and calculates the center-of-gravity coordinates (X2, Y2) of the second determination area JA 2 in the same manner as the first determination area JA 1 .
  • the light source position calculation section 350 outputs the center-of-gravity coordinates (X1, Y1) of the first determination area JA 1 and the center-of-gravity coordinates (X2, Y2) of the second determination area JA 2 to the indication position calculation section 360 .
  • the indication position calculation section 360 calculates the indication position of the controller 16 on the display screen 11 based on the center-of-gravity coordinates (X1, Y1) of the first determination area JA 1 and the center-of-gravity coordinates (X2, Y2) of the second determination area JA 2 .
  • the indication position calculation section 360 calculates the positional relationship between the light-emitting sections 13 and 14 and the controller 16 , and then calculates the indication position of the controller 16 on the display screen 11 . In this calculation, whichever of the center-of-gravity coordinates (X1, Y1) of the first determination area JA 1 and the center-of-gravity coordinates (X2, Y2) of the second determination area JA 2 is relatively positioned on the left of the image PC is used as the center coordinates of the light-emitting section 13 disposed on the upper left portion of the display section 12 , and whichever is relatively positioned on the right of the image PC is used as the center coordinates of the light-emitting section 14 disposed on the upper right portion of the display section 12 .
  • the indication position calculation section 360 outputs the calculated indication position to the USB 400 .
  • the USB 400 includes a USB interface 410 and a key function section 420 .
  • the USB interface 410 outputs the indication position data input from the MCU 300 to the game device 17 .
  • the key function section 420 outputs an operation signal based on operation of the trigger and other operation keys of the controller 16 .
  • the game device 17 identifies the indication position of the controller 16 at the timing at which the trigger operation signal is input as the impact position of a virtual bullet on the game screen (display screen 11 ), and performs game calculations such as determining whether or not the target has been hit.
  • FIG. 7 is a flowchart showing an example of an outline of the position calculation process according to this embodiment.
  • the determination section 210 determines whether or not each pixel satisfies a given condition based on the light-reception information relating to each pixel output from the image sensor 18 (step S 10 ).
  • the light source position calculation section 350 calculates the center-of-gravity positions of two determination areas corresponding to the two light sources based on the identification information relating to the first effective pixel and the second effective pixel (step S 12 ).
  • the indication position calculation section 360 calculates the indication position of the controller 16 based on the center-of-gravity positions of the two determination areas (step S 14 ).
  • the indication position calculation section 360 outputs the indication position of the controller 16 to the game device 17 (step S 16 ).
  • the process in the steps S 10 to S 16 is repeated every 1/54th of a second.
  • FIG. 8 is a flowchart showing an example of the details of the pixel determination process in the step S 10 shown in FIG. 7 .
  • the determination section 210 acquires the light-reception information from the image sensor 18 in pixel units (step S 100 ), and determines whether or not the light-reception information is equal to or larger than a first threshold value (i.e., satisfies the first condition) (step S 102 ).
  • when the determination section 210 has determined that the light-reception information is smaller than the first threshold value (N in step S 102 ), the determination section 210 acquires the light-reception information relating to another pixel (step S 100 ).
  • the determination section 210 determines whether or not the light-reception information is equal to or larger than a second threshold value (i.e., satisfies the second condition) (step S 104 ).
  • when the determination section 210 has determined that the light-reception information is equal to or larger than the second threshold value (Y in step S 104 ), the determination section 210 sets the most significant bit of the 16-bit data at 0 as the attribute bit (step S 106 ).
  • when the determination section 210 has determined that the light-reception information is smaller than the second threshold value (N in step S 104 ), the determination section 210 sets the most significant bit of the 16-bit data at 1 as the attribute bit (step S 108 ).
  • the determination section 210 stores the 16-bit data in which the most significant bit is the attribute bit and the lower-order 15 bits are the count value of the pixel in the pixel FIFO 340 (step S 110 ).
  • the process in the steps S 100 to S 110 is repeatedly performed on each pixel of the image sensor 18 in synchronization with the pixel clock signal output from the image sensor 18 .
  • FIG. 9 is a flowchart showing an example of the details of the light source position calculation process in the step S 12 shown in FIG. 7 .
  • the light source position calculation section 350 acquires the 16-bit data relating to the first effective pixel from the pixel FIFO 340 , and separates the lower-order 15-bit count value into the X component value and the Y component value (step S 200 ).
  • the light source position calculation section 350 determines whether or not the first determination area has been set (step S 202 ).
  • when the first determination area has not been set (N in step S 202 ), the light source position calculation section 350 determines the pixel to be the first effective pixel, and sets the first determination area in an area in a predetermined range including the first effective pixel (step S 204 ).
  • the light source position calculation section 350 determines whether or not the most significant bit of the 16-bit data is 0 (step S 206 ). When the light source position calculation section 350 has determined that the most significant bit is 0 (i.e., bright pixel) (Y in step S 206 ), the light source position calculation section 350 doubles the X component value, the Y component value, and the number of pixels (step S 208 ), and adds the X component value to a register RX 1 , the Y component value to a register RY 1 , and the number of pixels to a register RC 1 (step S 210 ).
  • when the light source position calculation section 350 has determined that the most significant bit is 1 (i.e., dark pixel) (N in step S 206 ), the light source position calculation section 350 adds the X component value to the register RX 1 , the Y component value to the register RY 1 , and the number of pixels to the register RC 1 without doubling these values (step S 210 ).
  • the light source position calculation section 350 determines whether or not the process from the step S 200 has been performed on all effective pixels included in the pixels 0 to 22,050 (pixels of one frame) of the image sensor 18 (step S 212 ). When the light source position calculation section 350 has determined that the process has not been performed on all effective pixels (N in step S 212 ), the light source position calculation section 350 returns to the step S 200 , and acquires the 16-bit data relating to the next pixel from the pixel FIFO 340 . The light source position calculation section 350 repeats the process in the steps S 200 to S 212 .
  • when the light source position calculation section 350 has determined in the step S 202 that the first determination area has been set (Y in step S 202 ), the light source position calculation section 350 determines whether or not the effective pixel is positioned in the first determination area based on the coordinate value of the effective pixel (step S 214 ).
  • when the effective pixel is positioned in the first determination area (Y in step S 214 ), the light source position calculation section 350 performs the process in the steps S 206 to S 212 .
  • when the effective pixel is not positioned in the first determination area (N in step S 214 ), the light source position calculation section 350 determines whether or not the second determination area has been set (step S 216 ).
  • when the second determination area has not been set (N in step S 216 ), the light source position calculation section 350 determines the effective pixel to be the second effective pixel, and sets the second determination area in an area in a predetermined range including the second effective pixel (step S 218 ).
  • steps S 220 to S 224 the light source position calculation section 350 performs a process similar to the process in the steps S 206 to S 210 performed on the effective pixel in the first determination area, and adds the X component value to a register RX 2 , the Y component value to a register RY 2 , and the number of pixels to a register RC 2 (step S 224 ).
  • the light source position calculation section 350 determines whether or not the effective pixel is positioned in the second determination area based on the coordinate value of the effective pixel (step S 226 ).
  • when the effective pixel is positioned in the second determination area (Y in step S 226 ), the light source position calculation section 350 performs the process in the steps S 220 to S 224 .
  • the light source position calculation section 350 determines whether or not all effective pixels have been processed without adding a value to the register (step S 212 ).
  • when all effective pixels have been processed (Y in step S 212 ), the light source position calculation section 350 determines whether or not data relating to the two determination areas has been acquired (step S 228 ).
  • when the light source position calculation section 350 has determined that data relating to the two determination areas has been acquired (Y in step S 228 ), the light source position calculation section 350 divides the sum of the values stored in the register RX 1 by the sum of the values stored in the register RC 1 to calculate the X component X1 of the center-of-gravity coordinates of the first determination area, divides the sum of the values stored in the register RY 1 by the sum of the values stored in the register RC 1 to calculate the Y component Y1 of the center-of-gravity coordinates of the first determination area, divides the sum of the values stored in the register RX 2 by the sum of the values stored in the register RC 2 to calculate the X component X2 of the center-of-gravity coordinates of the second determination area, and divides the sum of the values stored in the register RY 2 by the sum of the values stored in the register RC 2 to calculate the Y component Y2 of the center-of-gravity coordinates of the second determination area (step S 230 ).
  • the light source position calculation section 350 outputs the center-of-gravity coordinates (X1, Y1) of the first determination area and the center-of-gravity coordinates (X2, Y2) of the second determination area to the indication position calculation section 360 (step S 232 ).
  • when the light source position calculation section 350 has determined that data relating to the two determination areas has not been acquired (i.e., only data relating to the first determination area has been acquired) (N in step S 228 ), the light source position calculation section 350 performs an out-of-range setting which indicates that the controller 16 is directed to an area outside the detection range (step S 234 ), and outputs the setting information to the indication position calculation section 360 (step S 236 ).
  • when the light source position calculation section 350 has output the data to the indication position calculation section 360 , the light source position calculation section 350 initializes each register and the determination areas (step S 238 ). The process in the steps S 200 to S 238 is repeated each time the image sensor 18 is initialized and a first effective pixel of the next frame is output from the pixel FIFO 340 .
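The accumulation in the registers RX, RY, and RC and the center-of-gravity division in the step S 230 can be sketched as follows (the sensor width of 147 pixels is an assumption chosen so that 147 × 150 = 22,050 pixels per frame; the doubling of bright pixels follows the step S 208 ):

```python
SENSOR_WIDTH = 147  # assumed width: 147 x 150 = 22,050 pixels per frame

def centroid(fifo):
    """Accumulate X, Y, and pixel counts as in the registers RX, RY,
    and RC, weighting bright pixels (attribute bit 0) double as in
    step S208, then divide as in step S230."""
    rx = ry = rc = 0
    for data in fifo:
        attribute = data >> 15
        count = data & 0x7FFF
        x, y = count % SENSOR_WIDTH, count // SENSOR_WIDTH  # step S200
        weight = 2 if attribute == 0 else 1
        rx += weight * x
        ry += weight * y
        rc += weight
    return rx / rc, ry / rc  # center-of-gravity coordinates

# Two bright pixels at x = 9 and x = 11 and one dark pixel at x = 10,
# all on the row y = 5:
area = [(0 << 15) | (5 * 147 + 9),
        (0 << 15) | (5 * 147 + 11),
        (1 << 15) | (5 * 147 + 10)]
```

Weighting bright pixels double pulls the center of gravity toward the brightest part of each infrared light source area.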
  • FIG. 10A is a side view showing a state in which the controller 16 is directed toward the display screen 11 .
  • each of the light-emitting sections 13 and 14 includes a light source 22 which emits infrared light that has a certain directivity and travels in a direction within a given range so that the image sensor 18 can receive the infrared light from the light-emitting sections 13 and 14 when the controller 16 and the light-emitting sections 13 and 14 have a positional relationship within a predetermined range.
  • each of the light-emitting sections 13 and 14 causes the light source 22 to emit infrared light so that the infrared light travels in a travel direction within a given range with respect to a center direction CD which is the direction of the light source 22 .
  • the center direction of the light source 22 may be a center luminous intensity direction (maximum luminous intensity direction) of the light source 22 , for example.
  • an object (e.g., mirror or glass table) which reflects the infrared light from the light source 22 may exist between the light source 22 and the image sensor 18 . Since the light from the light source 22 travels in a direction within a given range, reflected light RL having the same wavelength and intensity as direct light DL from the light source 22 may occur at a position differing from the position of the light source 22 (e.g., a reflecting surface such as a mirror or a glass table). If the image sensor 18 receives the reflected light RL, an accurate indication position cannot be calculated because the reflected light RL acts as noise.
  • FIG. 11A shows an image PC 3 acquired by the image sensor 18 in the state shown in FIG. 10A .
  • since the image sensor 18 receives the direct light DL from the light sources 22 and the reflected light RL, noise areas NA 3 and NA 4 corresponding to the reflected light RL occur under the infrared light source areas IrA 1 and IrA 2 corresponding to the light sources 22 , as shown in FIG. 11A .
  • the reflected light RL which occurs below the light-emitting sections 13 and 14 is prevented by providing a shielding section 24 which shields light that travels downward from the light source 22 (light-emitting sections 13 and 14 ).
  • the shielding section 24 is provided at a position at which the direct light DL from the light source 22 is prevented from traveling in the direction in which the reflected light RL that enters the image sensor 18 occurs when the light source 22 (light-emitting sections 13 and 14 ) and the image sensor 18 (controller 16 ) have a given reference positional relationship.
  • the image sensor 18 receives the direct light DL from the light source 22 and does not receive the reflected light RL when the light source 22 (light-emitting sections 13 and 14 ) and the image sensor 18 (controller 16 ) have a given reference positional relationship.
  • the light source 22 and the image sensor 18 have the reference positional relationship when the controller 16 is held as described above, is located at a reference position four meters away from the light-emitting sections 13 and 14 , and is directed in a reference direction toward the light source 22 .
  • the position, direction, size, shape, and the like of the shielding section 24 are determined so that the reflected light RL does not enter within the angle of view theta (Light-reception range) of the image sensor 18 when the light source 22 and the image sensor 18 have the reference positional relationship.
  • the position, direction, size, shape, and the like of the shielding section 24 are determined provided that the reference position is set to a position at the maximum distance from the light source 22 at which the image sensor 18 can obtain a quantity of light (luminous intensity or luminous flux) necessary for the game system 10 to calculate an accurate indication position, and the reference direction is set to an average (basic) direction of the controller 16 when the player plays a game while holding the controller 16 toward the display screen 11 .
  • a plate-shaped shielding section 24 is provided under the light source 22 at a position near the light source 22 to protrude forward from the front surface of each of the light-emitting sections 13 and 14 . Therefore, light which travels downward from the light source 22 in a direction within a predetermined angle phi can be shielded by the shielding section 24 .
  • FIG. 11B shows light received by the image sensor 18 (i.e., image data stored in image data area PA) in the state shown in FIG. 10B .
  • the image sensor 18 receives the direct light DL from the light sources 22 but does not receive the reflected light RL, the noise areas NA 3 and NA 4 corresponding to the reflected light RL do not occur under the infrared light source areas IrA 1 and IrA 2 corresponding to the light sources 22 , as shown in FIG. 11B .
  • the reflected light RL which enters the image sensor 18 does not occur when the controller 16 is directed toward the display screen 11 within a game play range in which the distance between the light-emitting sections 13 and 14 and the controller 16 is four meters or less, as shown in FIG. 10B .
  • strictly speaking, the reflected light RL also occurs in the state shown in FIG. 10B . However, the reflected light RL does not enter the image sensor 18 unless the controller 16 is positioned outside the game play range.
  • the reference positional relationship is not limited to the above-mentioned example.
  • the reference position may be set depending on the intensity of the light source 22 (i.e., distance at which the quantity of light and the like necessary for calculating an accurate indication position are ensured), the luminous intensity distribution curve, and the beam angle (i.e., diffusion range of predetermined quantity of light).
  • the reference direction may be set depending on the application of the indication position calculation system. Specifically, the reference direction may be appropriately set depending on the basic position of an operator (player P) who holds an indicator (controller 16 ) with respect to the indication plane (display screen 11 ).
  • the operator who operates the indicator is rarely positioned perpendicularly to the indication plane (e.g., screen), but is generally positioned diagonally in front of the indication plane.
  • the operator generally directs the indicator toward the indication plane (light source) disposed above the operator.
  • a direction upward from a position diagonally in front of the indication plane may be set to be the reference direction.
  • FIG. 12 is a perspective view of the light-emitting section 13 ( 14 ) of the light-emitting unit 15 according to this embodiment.
  • the light-emitting section 13 ( 14 ) is disposed in a state in which a bottom surface 32 of a housing 30 is secured on the top surface of the display section 12 .
  • the housing 30 has a front surface 34 and a back surface 36 .
  • Three LED light sources 22 - 1 to 22 - 3 which emit infrared light from the front surface 34 are provided on the front surface 34 .
  • the light sources 22 - 1 to 22 - 3 are disposed to form vertices of an inverted triangle when viewed from the travel direction of light emitted from the light sources 22 - 1 to 22 - 3 . Therefore, even if the distance between the light sources 22 - 1 to 22 - 3 and the image sensor 18 increases, the image sensor 18 can reliably receive the infrared light from the light sources 22 - 1 to 22 - 3 as light having a predetermined area.
  • a cover 38 which allows the infrared light from the light sources 22 - 1 to 22 - 3 to pass through is provided on the front surface 34 of the housing 30 to protect the light sources 22 - 1 to 22 - 3 .
  • the shielding section 24 which protrudes from the front surface 34 is provided at the lower end of the front surface 34 of the housing 30 (i.e., below the light sources 22 - 1 to 22 - 3 ).
  • the shielding section 24 is integrally formed with the housing 30 .
  • a material which does not allow the infrared light from the light sources 22 - 1 to 22 - 3 to pass through is used for the shielding section 24 . Therefore, light emitted from the light sources 22 - 1 to 22 - 3 downward within a predetermined angular range can be shielded by the shielding section 24 .
  • FIG. 13 is a side view of the light-emitting section 13 ( 14 ) shown in FIG. 12 .
  • an angle alpha formed by a shielding surface 40 (i.e., top surface) of the shielding section 24 and the front surface 34 of the housing 30 is an acute angle, as shown in FIG. 13 . Therefore, a protrusion distance l of the shielding section 24 from the front surface 34 necessary for shielding light which travels in a direction within a predetermined range can be reduced as compared with the case where the angle alpha is a right angle or an obtuse angle.
  • in the example shown in FIG. 13 , the angle alpha is set at 70 degrees and the protrusion distance l is set at 7 mm. Accordingly, protrusion of the light-emitting section 13 ( 14 ) can be prevented and the size of the light-emitting section 13 ( 14 ) can be reduced even if the shielding section 24 is provided.
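The relationship between the protrusion distance l and the range of downward light the plate shields can be illustrated with a simplified model (an assumption, not the patent's geometry: a flat plate whose outer edge sits a distance d below the light source; the acute angle alpha in FIG. 13 raises the outer edge, which reduces the protrusion needed for the same cutoff):

```python
import math

def shielded_cutoff_angle_deg(edge_drop_mm, protrusion_mm):
    """Downward rays steeper than this angle (measured from the
    horizontal) strike the plate before clearing its outer edge and
    are blocked; shallower rays pass over the edge."""
    return math.degrees(math.atan2(edge_drop_mm, protrusion_mm))

# With the outer edge 2 mm below the source (an assumed value) and the
# 7 mm protrusion of FIG. 13, rays steeper than about 16 degrees below
# the horizontal are blocked.
cutoff = shielded_cutoff_angle_deg(2.0, 7.0)
```

Raising the outer edge (smaller edge drop) lowers the cutoff angle, i.e. blocks more of the downward light for the same protrusion distance.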
  • the reflected light as noise is prevented by providing the shielding section 24 on the front surface of each of the light-emitting sections 13 and 14 to shield light which travels downward from the light sources 22 - 1 to 22 - 3 .
  • the reflected light as noise may be prevented by preventing light from traveling downward from the light sources 22 - 1 to 22 - 3 by adjusting the directions of the light sources 22 - 1 to 22 - 3 disposed in the light-emitting sections 13 and 14 .
  • the center direction CD of the light sources 22 - 1 to 22 - 3 may be adjusted to a direction in which the direct light DL from the light source 22 is prevented from traveling in the direction in which the reflected light RL that enters the image sensor 18 occurs when the light source 22 (light-emitting sections 13 and 14 ) and the image sensor 18 (controller 16 ) have a given reference positional relationship.
  • FIG. 14 shows an example of the light-emitting section 13 ( 14 ) of which the light sources 22 - 1 to 22 - 3 are disposed so that the center direction CD of the light sources 22 - 1 to 22 - 3 faces upward by beta degrees with respect to a horizontal direction (direction normal to the display screen 11 of the display section 12 ) HD when the light-emitting section 13 ( 14 ) is disposed in a state in which the bottom surface 32 of the housing 30 is secured on the top surface of the display section 12 .
  • direct light from the light source 22 is prevented from traveling in a direction in which reflected light enters the image sensor 18 by adjusting the center direction CD of the light sources 22 - 1 to 22 - 3 upward by beta degrees. Therefore, occurrence of reflected light which serves as noise under the light sources 22 - 1 to 22 - 3 can be prevented in the same manner as in the example shown in FIG. 13 .
  • the light-reception range in which the image sensor 18 can receive light from the light sources 22 - 1 to 22 - 3 in the game system 10 according to this embodiment is described below.
  • three light sources 22 - 1 to 22 - 3 which differ in center direction are disposed in the light-emitting section 13 ( 14 ) so that the image sensor 18 can receive infrared light from the light-emitting sections 13 and 14 when the controller 16 and the light-emitting sections 13 and 14 have a positional relationship within a specific range.
  • FIG. 15A is a front view of the light-emitting section 13 ( 14 ) according to this embodiment
  • FIG. 15B is a top view of the light-emitting section 13 ( 14 ).
  • the three light sources 22 - 1 to 22 - 3 of the light-emitting section 13 ( 14 ) according to this embodiment are disposed to form vertices of an inverted triangle.
  • the lowermost first light source 22 - 1 is disposed so that a center direction CD 1 of the light source 22 - 1 coincides with a first direction D 1 which is a reference direction SD of the light-emitting section 13 ( 14 ).
  • the reference direction SD is the front direction of the display screen 11 which is the basic direction of the player P who holds the controller 16 .
  • a direction parallel to the direction normal to the display screen 11 of the display section 12 when the light-emitting section 13 ( 14 ) is disposed in a state in which the bottom surface 32 of the housing 30 is secured on the top surface of the display section 12 is set to be the reference direction SD.
  • a predetermined range around the front direction of the display screen 11 (i.e., the basic direction of the player P) in which light travels from the first light source 22 - 1 may be set to be the light-reception range of the image sensor 18 .
  • since the image sensor 18 successively outputs the light-reception information relating to each pixel while scanning the pixels of the acquired image PC from the lower row to the upper row, the image sensor 18 outputs the light-reception information relating to a pixel in the image PC corresponding to the lowermost first light source 22 - 1 prior to pixels corresponding to the light sources 22 - 2 and 22 - 3 . Therefore, a pixel corresponding to the first light source 22 - 1 is determined to be a first effective pixel which satisfies a given condition.
  • the maximum luminous intensity direction of the first light source 22 - 1 corresponding to the first effective pixel is set to be the front direction of the display screen 11 (basic direction of the player P) by causing the center direction CD 1 of the first light source 22 - 1 to coincide with the front direction of the display screen 11 .
  • a first effective pixel can be reliably determined by light which travels from the lowermost first light source 22 - 1 .
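The reason the first effective pixel reliably corresponds to the lowermost first light source 22 - 1 can be sketched as follows (a simplified model of the bottom-to-top scan order; the threshold value is an assumption):

```python
def first_effective_pixel(rows_bottom_to_top, threshold):
    """rows_bottom_to_top[0] is the bottom row of the acquired image PC.
    Returns (row_from_bottom, column) of the first pixel whose
    light-reception value satisfies the condition, or None."""
    for y, row in enumerate(rows_bottom_to_top):
        for x, value in enumerate(row):
            if value >= threshold:
                return y, x
    return None

# The bright area nearest the bottom of the image (the first light
# source 22-1) is found before any bright area higher up:
image = [[0, 0, 0],   # bottom row
         [0, 9, 0],   # light from the lowermost source
         [9, 0, 9]]   # light from the upper sources
```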
  • the second light source 22 - 2 positioned on the left when viewed from the front surface is disposed so that a center direction CD 2 of the light source 22 - 2 coincides with a second direction D 2 which differs from the reference direction SD by an angle gamma of 90 degrees or less.
  • the second light source 22 - 2 is disposed so that the center direction CD 2 differs from the reference direction SD by 60 degrees toward the left when viewed from the front surface. Therefore, the image sensor 18 can receive light which travels from the second light source 22 - 2 within a left range with respect to the front direction of the display screen 11 .
  • the third light source 22 - 3 positioned on the right when viewed from the front surface is disposed so that a center direction CD 3 of the light source 22 - 3 coincides with a third direction D 3 which is line-symmetrical with the second direction D 2 with respect to the reference direction SD as the symmetry axis.
  • the third light source 22 - 3 is disposed so that the center direction CD 3 differs from the reference direction SD by 60 degrees toward the right when viewed from the front surface. Therefore, the image sensor 18 according to this embodiment can receive light which travels from the third light source 22 - 3 within a right range with respect to the front direction of the display screen 11 .
  • it is preferable that the second direction D 2 and the third direction D 3 be parallel to the horizontal plane and differ in direction.
  • the center directions CD 1 to CD 3 of the light sources 22 - 1 to 22 - 3 may be adjusted upward from the horizontal direction HD by about five degrees.
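The widening of the horizontal reception range by the three center directions CD 1 to CD 3 can be sketched as follows (the beam half-angle of each LED is an assumption; the 0, −60, and +60 degree offsets follow the description above):

```python
CENTER_DIRECTIONS = {"22-1": 0.0, "22-2": -60.0, "22-3": +60.0}
BEAM_HALF_ANGLE = 45.0  # assumed beam spread of each LED, in degrees

def visible_sources(sensor_azimuth_deg):
    """Sources whose beam covers an image sensor at the given azimuth
    (0 = the reference direction SD, negative = left of it)."""
    return [name for name, cd in CENTER_DIRECTIONS.items()
            if abs(sensor_azimuth_deg - cd) <= BEAM_HALF_ANGLE]
```

Directly in front of the screen only the first light source need be received, while a player standing well to either side is still covered by the second or third light source.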
  • FIG. 16 is a functional block diagram showing an example of the indication position calculation system according to this embodiment.
  • the indication position calculation system according to this embodiment may have a configuration in which some of the elements (sections) shown in FIG. 16 are omitted.
  • An operation section 160 allows a player to input operation data.
  • the operation section 160 may be an indicator (controller, shooting device, or pointing device) configured so that the player can arbitrarily change the position and the direction of the operation section 160 while holding the operation section 160 and direct the operation section 160 toward an arbitrary position on the indication plane such as the display screen 11 .
  • the operation section 160 includes a trigger as an operating section for the player to perform an ON/OFF input.
  • the operation section 160 may include a button, a lever (analog pad), an arrow key, a steering wheel, a microphone, a touch panel display, or the like so that various types of operation data can be input.
  • the operation section 160 includes an imaging section 162 , a determination section 164 , and a calculation section 166 .
  • the imaging section 162 may be implemented by an image sensor such as a CMOS sensor or a CCD camera.
  • the imaging section 162 successively outputs the light-reception information relating to each pixel from a starting pixel provided on one end of the acquired image to an end pixel provided on the other end of the acquired image.
  • the imaging section 162 may successively output the light-reception information relating to each pixel utilizing a hardware configuration, or may successively output the light-reception information relating to each pixel under software control.
  • the determination section 164 may be implemented by hardware such as a processor (e.g., CPU, MPU, or DSP) or an ASIC (e.g., gate array) and a program.
  • the determination section 164 determines whether or not each pixel satisfies a given condition based on the light-reception information relating to each pixel successively output from the imaging section 162 .
  • the calculation section 166 may be implemented by hardware such as a processor (e.g., CPU, MPU, or DSP) or an ASIC (e.g., gate array) and a program.
  • the calculation section 166 performs position calculations based on the identification information relating to the pixels which satisfy the given condition to calculate the indication position of the operation section 160 .
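The description above does not spell out how the calculation section 166 maps the two light-source image positions to an indication position; the following is one common approach offered purely as an assumption (midpoint of the two centroids, offset from the sensor center, scaled to screen coordinates with inverted axes), not the patent's formula:

```python
SENSOR_W, SENSOR_H = 147, 150  # assumed sensor resolution
SCREEN_W, SCREEN_H = 640, 480  # assumed screen resolution

def indication_position(x1, y1, x2, y2):
    """Map the centroids (X1, Y1) and (X2, Y2) to a screen position."""
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2         # midpoint of the images
    dx = SENSOR_W / 2 - mx                        # offset, axes inverted:
    dy = SENSOR_H / 2 - my                        # moving left shifts images right
    sx = SCREEN_W / 2 + dx * SCREEN_W / SENSOR_W  # scale to screen pixels
    sy = SCREEN_H / 2 + dy * SCREEN_H / SENSOR_H
    return sx, sy
```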
  • the determination section 164 and the calculation section 166 may be integrally implemented by one processor or the like.
  • the determination section 164 and the calculation section 166 may be implemented by the function of a processing section 100 instead of providing the determination section 164 and the calculation section 166 in the operation section 160 .
  • a storage section 170 serves as a work area for the processing section 100 , a communication section 196 , and the like.
  • the function of the storage section 170 may be implemented by a RAM (VRAM) or the like.
  • the storage section 170 according to this embodiment includes a main storage section 171 used as a work area, a frame buffer 172 in which the final display image and the like are stored, an object data storage section 173 in which model data relating to an object is stored, a texture storage section 174 in which the texture for each piece of object data is stored, and a Z buffer 176 in which a Z value is stored when generating an image of an object.
  • the storage section 170 may have a configuration in which some of these sections are omitted.
  • An information storage medium 180 stores a program, data, and the like.
  • the function of the information storage medium 180 may be implemented by an optical disk (CD or DVD), a magneto-optical disk (MO), a magnetic disk, a hard disk, a magnetic tape, a memory (ROM), or the like.
  • a program (data) for causing the processing section 100 to perform various processes according to this embodiment is stored in the information storage medium 180 .
  • a program which causes a computer to function as each section according to this embodiment is stored in the information storage medium 180 .
  • a display section 190 outputs an image generated according to this embodiment.
  • the function of the display section 190 may be implemented by a CRT, an LCD, a touch panel display, or the like.
  • a light-emitting unit 15 for calculating the relative positions of the operation section 160 and the display screen of the display section 190 is provided in or around the display screen of the display section 190 .
  • an infrared LED which emits invisible light is used as the light source of the light-emitting unit 15 .
  • a sound output section 192 outputs sound generated according to this embodiment.
  • the function of the sound output section 192 may be implemented by a speaker, a headphone, or the like.
  • a portable information storage device 194 stores player's personal data, game save data, and the like.
  • as examples of the portable information storage device 194 , a memory card, a portable game device, and the like can be given.
  • the communication section 196 performs various types of control for communicating with the outside (e.g. host device or another image generation system).
  • the function of the communication section 196 may be implemented by hardware such as a processor or a communication ASIC, a program, or the like.
  • the program (data) for causing a computer to function as each section according to this embodiment may be distributed to the information storage medium 180 (storage section 170 ) from an information storage medium included in a host device (server) through a network and the communication section 196 .
  • Use of the information storage medium of the host device (server) is also included within the scope of the invention.
  • the processing section 100 performs a game process, an image generation process, a sound generation process, and the like based on operation data from the operation section 160 , a program, and the like.
  • the game process includes starting a game when game start conditions have been satisfied, proceeding with a game, disposing an object such as a character or a map, displaying an object, calculating game results, finishing a game when game end conditions have been satisfied, and the like.
  • the processing section 100 performs various processes using the storage section 170 as a work area.
  • the function of the processing section 100 may be implemented by hardware such as a processor (e.g. CPU or DSP) or an ASIC (e.g. gate array) and a program.
  • the processing section 100 includes a display control section 104 , a determination section 106 , an evaluation section 108 , a drawing section 120 , and a sound generation section 130 . Note that the processing section 100 may have a configuration in which some of these sections are omitted.
  • the display control section 104 performs a display control process on an object displayed on the display section 190 .
  • the display control section 104 performs the display control process such as generating an object (e.g. character, background, target, car, ball, item, building, tree, pillar, wall, or map), directing the display position of an object, or causing an object to disappear.
  • the display control section 104 performs the display control process such as registering a generated object in an object list, transferring the object list to the drawing section 120 and the like, or deleting a disappeared object from the object list.
  • the display control section 104 displays an object indicating the indication position (impact position) on the display screen 11 based on information relating to the indication position of the operation section 160 on the display screen 11 .
  • the determination section 106 determines the positional relationship between the indication position information relating to the operation section 160 on the display screen 11 of the display section 190 and a target object TO based on an operation input using the operating section (trigger) provided in the operation section 160 . Specifically, the determination section 106 determines whether or not the indication position has hit (coincides with or reaches) the display position of the target object TO based on the indication position information when the operation input using the operating section has been received.
  • the evaluation section 108 evaluates the operation using the operation section 160 based on the hit determination result.
  • the evaluation section 108 evaluates the operation of the operator by means of score calculation and the like when the target object TO has been hit.
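The hit determination by the determination section 106 and the scoring by the evaluation section 108 can be sketched as follows (the rectangular display area of the target object TO and the score value are illustrative assumptions):

```python
def hit(indication_pos, target_rect):
    """True if the indication position lies inside the target object's
    display rectangle (x, y, width, height)."""
    x, y = indication_pos
    tx, ty, tw, th = target_rect
    return tx <= x <= tx + tw and ty <= y <= ty + th

def evaluate(indication_pos, target_rect, score):
    """Add points (assumed value: 100) when the target has been hit."""
    return score + 100 if hit(indication_pos, target_rect) else score
```

The hit test runs only when the trigger input from the operating section is received, using the indication position at that moment.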
  • the drawing section 120 performs a drawing process based on the results of various processes (game process) performed by the processing section 100 to generate an image, and outputs the generated image to the display section 190 .
  • An image generated by the drawing section 120 may be a two-dimensional image or a three-dimensional image.
  • the drawing section 120 performs a geometric process such as coordinate transformation (world coordinate transformation or camera coordinate transformation), clipping, or perspective transformation, and creates drawing data (e.g. primitive surface vertex coordinates, texture coordinates, color data, normal vector, and alpha-value) based on the processing results.
  • the drawing section 120 draws an object (one or more primitives) subjected to perspective transformation (geometric process) in a drawing buffer (buffer which can store image information in pixel units such as a frame buffer or intermediate buffer; VRAM) based on the drawing data (primitive data). This causes an image viewed from a virtual camera (given view point) to be generated in a game space.
  • the sound generation section 130 performs a sound process based on the results of various processes performed by the processing section 100 to generate game sound such as background music (BGM), effect sound, or voice, and outputs the generated game sound to the sound output section 192 .
  • the image generation system may be configured as a system dedicated to a single-player mode in which only one player can play a game, or a system which also implements a multi-player mode in which a plurality of players can play a game.
  • a game image and game sound provided to the players may be generated using one terminal, or may be generated by a distributed process using a plurality of terminals (game devices or portable telephones) connected via a network (transmission line or communication line), for example.
  • the light-emitting unit 15 includes two light-emitting sections 13 and 14 independently provided.
  • the light-emitting unit 15 may be formed by integrating the two light-emitting sections 13 and 14 .
  • the light-emitting unit 15 may be formed as an oblong rod-shaped member, and the light-emitting sections 13 and 14 may be disposed on the right and left ends.
  • the distance between the light-emitting sections 13 and 14 may be changed in stages in a direction indicated by an arrow.
  • the distance between the light-emitting sections 13 and 14 may be changed in stages corresponding to monitors ranging from a 20-inch monitor to a 50-inch monitor.
  • the light source 22 emits infrared light (i.e., invisible light).
  • the light source 22 may emit another type of invisible light or may emit visible light.
  • a light source which emits invisible light and a light source which emits visible light may be provided so that the light sources can be switched.
  • visible light may be output when the operator indicates the indication plane in order to initialize the positional relationship between the light-emitting section (light source) and the indicator (light-receiving section).
  • the invention may be applied to various image generation systems.
  • the above embodiments have been described taking an example of applying the invention to a game system.
  • the invention may also be applied to an indication position calculation system including a presentation system and the like and an indicator used for an indication position calculation system.
  • the invention may be applied to various image generation systems such as an arcade game system, a consumer game system, a large-scale attraction system in which a number of players participate, a simulator, a multimedia terminal, a system board which generates a game image, and a portable telephone.


Abstract

When the user of a system directs an indicator toward a light-emitting section, an imaging section provided in the indicator acquires an image PA in a given area including the light-emitting section. A determination section determines whether or not each pixel individually satisfies a first condition based on light-reception information relating to each pixel of the image PA, and a calculation section calculates an indication position of the indicator based on the positions of effective pixels in the image using the effective pixels which satisfy the first condition as pixels corresponding to the light-emitting section. The calculation section calculates a representative value of the effective pixels based on identification information relating to the effective pixels while changing weighting on the identification information relating to the effective pixels depending on whether or not the effective pixels satisfy a second condition.

Description

Japanese Patent Application No. 2007-35227, filed on Feb. 15, 2007, is hereby incorporated by reference in its entirety.
BACKGROUND OF THE INVENTION
The present invention relates to an indication position calculation system, an indicator for an indication position calculation system, a game system, and an indication position calculation method.
An indication position calculation system has been known which calculates an indication position of an indicator on an indication plane (e.g., monitor or screen), such as a shooting game system using a gun-type controller or a presentation system using a pointing device. In such an indication position calculation system, an infrared light-emitting section or the like is disposed near the indication plane. The light-emitting section is imaged using an image sensor (light-receiving sensor) provided on the end of the indicator, and the indication position of the indicator is calculated based on the position of the light-emitting section in the resulting image.
In such an indication position calculation system, when the light-reception information relating to a pixel of the image is larger than a predetermined threshold value, this pixel is determined to be a pixel corresponding to the light-emitting section. The indication position of the indicator is calculated based on the position of such a pixel. Japanese Patent No. 2961097 discloses such technology, for example.
However, a related-art indication position calculation system performs position calculations evenly based on all pixels of which the light-reception information has been determined to be larger than a predetermined threshold value. As a result, pixels corresponding to the center portion of the light-emitting section cannot be distinguished from pixels corresponding to the peripheral portion of the light-emitting section, and the indication position of the indicator may not be accurately calculated. In particular, when the indicator including the image sensor moves at high speed, pixels corresponding to the light-emitting section may be spread in the moving direction of the indicator in the image acquired by the image sensor. If the indication position of the indicator is calculated based on these pixels, an accurate calculation result may not be obtained.
SUMMARY
According to a first aspect of the invention, there is provided an indication position calculation system calculating an indication position of an indicator, the indication position calculation system comprising:
a light-emitting section;
an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on identification information of the primary effective pixels,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
According to a second aspect of the invention, there is provided an indicator for an indication position calculation system, the indicator comprising:
an imaging section which acquires an image of a light-emitting section and outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on the identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
According to a third aspect of the invention, there is provided a game system calculating an indication position of an indicator, the game system comprising:
a display section which displays an object;
a light-emitting section which has a given positional relationship with the display section;
an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on the identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
According to a fourth aspect of the invention, there is provided an indication position calculation method comprising:
causing an imaging section provided in an indicator to acquire an image of a light-emitting section and output light-reception information of pixels of the acquired image;
causing a determination section to determine whether or not each of the pixels is a primary effective pixel satisfying a first condition and determine whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
causing a calculation section to perform position calculations based on identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
FIG. 1 is a diagram schematically showing an example of the appearance of a system according to one embodiment of the invention.
FIG. 2 is a diagram illustrating an example of an image acquired according to one embodiment of the invention.
FIG. 3 is a diagram illustrating an example of an indicator and an imaging section according to one embodiment of the invention.
FIG. 4A is a diagram illustrating an example of an installation state of a system according to one embodiment of the invention, and FIG. 4B is a diagram illustrating an example of an image acquired according to one embodiment of the invention.
FIG. 5 is a functional block diagram showing an example of an indicator according to one embodiment of the invention.
FIG. 6 is a diagram illustrating an example of part of an image acquired according to one embodiment of the invention.
FIG. 7 is a flowchart showing an example of a process according to one embodiment of the invention.
FIG. 8 is a flowchart showing an example of a process according to one embodiment of the invention.
FIG. 9 is a flowchart showing an example of a process according to one embodiment of the invention.
FIGS. 10A and 10B are diagrams illustrating an example of the principle according to one embodiment of the invention.
FIGS. 11A and 11B are diagrams illustrating image examples acquired according to one embodiment of the invention.
FIG. 12 is a diagram schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
FIG. 13 is a diagram schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
FIG. 14 is a diagram schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
FIGS. 15A and 15B are diagrams schematically showing an example of the appearance of a light-emitting section according to one embodiment of the invention.
FIG. 16 is a functional block diagram showing an example of an indication position calculation system according to one embodiment of the invention.
DETAILED DESCRIPTION OF THE EMBODIMENT
The invention may provide an indication position calculation system, an indicator for an indication position calculation system, a game system, and an indication position calculation method capable of promptly and accurately calculating an indication position with a reduced processing load.
(1) According to one embodiment of the invention, there is provided an indication position calculation system calculating an indication position of an indicator, the indication position calculation system comprising:
a light-emitting section;
an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on identification information of the primary effective pixels,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
According to one embodiment of the invention, there is provided an indicator for an indication position calculation system, the indicator comprising:
an imaging section which acquires an image of a light-emitting section and outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on the identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
According to one embodiment of the invention, there is provided an indication position calculation method comprising:
causing an imaging section provided in an indicator to acquire an image of a light-emitting section and output light-reception information of pixels of the acquired image;
causing a determination section to determine whether or not each of the pixels is a primary effective pixel satisfying a first condition and determine whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
causing a calculation section to perform position calculations based on identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
The term “identification information of a pixel” used herein refers to information which specifies the position of a pixel in an image. For example, the identification information may be address data or a count value of a pixel.
According to the above embodiments, when the user of the system directs the indicator toward the light-emitting section, the imaging section provided in the indicator acquires an image in a given area including the light-emitting section. The determination section determines whether or not each pixel satisfies the first condition based on the light-reception information relating to each pixel, and the calculation section calculates the indication position of the indicator based on the positions of the effective pixels in the image using the effective pixels which satisfy the first condition as pixels corresponding to the light-emitting section. According to the above embodiments, the calculation section calculates the representative value of the effective pixels based on the identification information relating to the effective pixels while changing weighting on the identification information relating to the effective pixels depending on whether or not the effective pixels satisfy the second condition.
According to the above embodiments, the indication position can be more accurately calculated by increasing the degree of effects of bright pixels having relatively large light-reception information as pixels corresponding to a portion near the center of the light-emitting section and reducing the degree of effects of dark pixels having relatively small light-reception information as pixels corresponding to the peripheral portion of the light-emitting section when calculating the representative value of the effective pixels, for example. In particular, even if the indicator including the imaging section moves at high speed so that pixels corresponding to the light-emitting section are spread in the image acquired by the imaging section in the moving direction of the indicator, the indication position can be accurately calculated by reducing the degree of effects of pixels corresponding to an afterimage when calculating the representative value.
(2) In each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method,
the determination section may determine that each of the pixels is the primary effective pixel satisfying the first condition when a value of the light-reception information is larger than a first threshold value, and determine that the primary effective pixel is the secondary effective pixel satisfying the second condition when the value of the light-reception information is also larger than a second threshold value which is larger than the first threshold value; and
the calculation section may calculate the representative value of the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then perform the position calculations based on the representative value, while making the weight on the identification information of the secondary effective pixels larger than the weight on the identification information of the primary effective pixels in calculation.
According to this feature, the indication position can be more accurately calculated by increasing the degree of effects of bright pixels having relatively large light-reception information as pixels corresponding to a portion near the center of the light-emitting section and reducing the degree of effects of dark pixels having relatively small light-reception information as pixels corresponding to the peripheral portion of the light-emitting section when calculating the representative value of the effective pixels.
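The two-threshold weighting scheme of feature (2) can be sketched as follows. The threshold values and weights below are illustrative assumptions; the patent only requires that the second threshold be larger than the first and that the identification information of the secondary (brighter) effective pixels receive the larger weight:

```python
# Illustrative sketch of the two-threshold weighted representative value.
# FIRST_THRESHOLD, SECOND_THRESHOLD, and the weights are assumptions chosen
# for illustration, not values taken from the patent.

FIRST_THRESHOLD = 64    # primary effective pixels: brightness above this
SECOND_THRESHOLD = 192  # secondary effective pixels: brightness above this too
PRIMARY_WEIGHT = 1      # weight on identification info of primary-only pixels
SECONDARY_WEIGHT = 4    # larger weight on secondary effective pixels

def weighted_representative(pixels):
    """pixels: list of ((x, y), brightness) light-reception samples.

    Returns the weighted representative position of the effective pixels,
    or None when no pixel satisfies the first condition.
    """
    wx = wy = total = 0
    for (x, y), value in pixels:
        if value <= FIRST_THRESHOLD:
            continue  # not a primary effective pixel; ignore
        # Secondary effective pixels (near the center of the light source)
        # count more than primary-only pixels (periphery / afterimage).
        w = SECONDARY_WEIGHT if value > SECOND_THRESHOLD else PRIMARY_WEIGHT
        wx += w * x
        wy += w * y
        total += w
    if total == 0:
        return None
    return (wx / total, wy / total)

# A bright center pixel pulls the representative value toward itself:
samples = [((10, 5), 200), ((11, 5), 100), ((12, 5), 100)]
print(weighted_representative(samples))  # → (10.5, 5.0)
```

With equal weighting the representative x would be 11.0; the larger weight on the secondary effective pixel at (10, 5) pulls it to 10.5, reducing the influence of the dimmer peripheral (or afterimage) pixels.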
(3) In each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method,
the imaging section may successively output the light-reception information of the pixels;
one of the pixels whose light-reception information is output first may be disposed on a lower side of the image and another one of the pixels whose light-reception information is output last may be disposed on an upper side of the image when the indicator is held in a reference position; and
the calculation section may perform the position calculations based on the identification information of a first effective pixel which is the primary effective pixel and has first been determined to satisfy the first condition, in order to obtain the indication position of the indicator when light received by the primary and secondary effective pixels is included in light emitted from the light-emitting section.
The term “reference position” used herein refers to a predetermined position (direction or position) of the indicator (i.e., position at which the starting pixel of the imaging section provided in the indicator is positioned on the lower side and the end pixel is positioned on the upper side). The reference position may be appropriately determined depending on the specification of the indication position calculation system and the specification of the indicator. The reference position of the indicator may be an average position when using the indicator in a normal state. The term “normal state” used herein refers to a state determined based on the shape of the indicator or a state specified in a manual or the like, for example.
When pixels corresponding to noise exist, the indication position cannot be accurately calculated if the position calculations are performed using the pixels corresponding to noise as pixels corresponding to the light-emitting section without determining whether or not a pixel which satisfies a given condition is a pixel corresponding to the light-emitting section or a pixel corresponding to noise.
In an actual situation in which the above indication position calculation system is installed, a noise source generally exists at a position above the light-emitting section. For example, when using an infrared light source as the light-emitting section, a window through which external light enters or an incandescent lamp may serve as an infrared light source (i.e., noise source). A window or a lamp generally exists at a position higher than the light-emitting section. Therefore, pixels corresponding to noise generally occur in an area higher than the light-emitting section in the image acquired by the imaging section. On the other hand, pixels corresponding to noise rarely occur in an area lower than the light-emitting section.
The above embodiments focus on this situation. Specifically, the imaging section is provided in the indicator so that the starting pixel is disposed on the lower side and the end pixel is disposed on the upper side when the indicator is held in the reference position. The imaging section successively outputs the light-reception information relating to each pixel from the starting pixel which corresponds to the lower side of the image acquired by the imaging section. In other words, when the indicator is held in the reference position, the imaging section successively outputs the light-reception information relating to each pixel starting from the pixels positioned on the lower side, where pixels corresponding to noise rarely occur.
According to the above embodiments, since the first pixel of which the light-reception information has been output from the imaging section and which has been determined to satisfy a given condition is considered to be a pixel corresponding to the light-emitting section, the indication position can be accurately calculated using the first pixel as a pixel corresponding to the light-emitting section without determining whether the first pixel is a pixel corresponding to the light-emitting section or a pixel corresponding to noise. According to the above embodiments, since it is unnecessary to determine whether the pixel which satisfies a given condition is a pixel corresponding to the light-emitting section or a pixel corresponding to noise, the indication position can be promptly and accurately calculated with a reduced processing load.
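The bottom-up scan above can be sketched as follows. The threshold and the toy image are illustrative assumptions; the point is that because rows are output from the lower side of the scene first, the first pixel exceeding the threshold can be taken as belonging to the light-emitting section without any separate noise classification:

```python
# Sketch of the bottom-up scan: the imaging section outputs rows from the
# lower side of the scene first, so the first pixel that exceeds the
# threshold is treated as the light-emitting section. The threshold and
# image layout are illustrative assumptions.

FIRST_THRESHOLD = 64

def first_effective_pixel(image):
    """image: rows of brightness values, index 0 = bottom row of the scene.

    Returns (x, y) identification info of the first effective pixel, or None.
    """
    for y, row in enumerate(image):  # bottom row is scanned first
        for x, value in enumerate(row):
            if value > FIRST_THRESHOLD:
                return (x, y)        # stop at once: no noise check needed
    return None

# Noise (e.g. a window or lamp) sits in the top rows and is never reached:
image = [
    [0, 0, 0, 0],      # bottom row of the scene
    [0, 200, 0, 0],    # light-emitting section
    [0, 0, 0, 0],
    [0, 0, 0, 250],    # ceiling lamp / window noise (never scanned)
]
print(first_effective_pixel(image))  # → (1, 1)
```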
(4) In each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method,
the calculation section may set a predetermined area of the image including the first effective pixel as a determination area, and perform the position calculations based on the identification information of the primary and secondary effective pixels included in the determination area.
According to this feature, a pixel included in the predetermined determination area can be considered to be a pixel corresponding to the light-emitting section. Therefore, the indication position can be more accurately calculated using the identification information relating to the pixels included in the predetermined determination area.
(5) Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may comprise:
a plurality of the light-emitting sections having a predetermined positional relationship with each other,
wherein the calculation section may set a predetermined area of the image including the first effective pixel as a first determination area, set another predetermined area of the image including a second effective pixel which has first been determined to satisfy the first condition among the pixels outside the first determination area as a second determination area, and then perform the position calculations based on the identification information of the primary and secondary effective pixels within the first determination area and the second determination area.
According to this feature, when the light-emitting section includes a first light-emitting section and a second light-emitting section, for example, pixels included in the first determination area can be considered to be pixels corresponding to the first light-emitting section, and pixels included in the second determination area can be considered to be pixels corresponding to the second light-emitting section. When the light-emitting section further includes a third light-emitting section and a fourth light-emitting section, a third determination area and a fourth determination area may be set. Even if the indication position of the indicator is calculated in a state in which a plurality of light-emitting sections are provided, the indication position can be more accurately calculated by using the identification information relating to the pixels corresponding to each light-emitting section.
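A minimal sketch of locating the two determination areas for two light-emitting sections, assuming a hypothetical square window of fixed radius (the patent only specifies "a predetermined area"):

```python
# Sketch of the first and second determination areas. The window radius is a
# hypothetical parameter chosen for illustration.

FIRST_THRESHOLD = 64
RADIUS = 2  # half-width of each square determination area (assumption)

def in_area(p, center):
    return abs(p[0] - center[0]) <= RADIUS and abs(p[1] - center[1]) <= RADIUS

def find_determination_areas(pixels):
    """pixels: ((x, y), brightness) samples in the imaging section's output order.

    Returns the centers of the first and second determination areas: the first
    effective pixel, and the first effective pixel outside the first area.
    """
    first = second = None
    for p, value in pixels:
        if value <= FIRST_THRESHOLD:
            continue  # not a primary effective pixel
        if first is None:
            first = p                           # first determination area
        elif second is None and not in_area(p, first):
            second = p                          # second determination area
    return first, second

# Two clusters of effective pixels -> two distinct determination areas:
samples = [((3, 1), 90), ((4, 1), 200), ((12, 1), 90), ((13, 1), 180)]
print(find_determination_areas(samples))  # → ((3, 1), (12, 1))
```

The pixel at (4, 1) falls inside the first window and therefore does not start a second determination area; the first effective pixel outside that window, at (12, 1), does.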
(6) Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may further comprise:
a shielding section provided in the light-emitting section and shielding part of the light from the light-emitting section that is emitted downward at a predetermined angle or less from the horizontal.
According to this feature, even if a reflecting surface which reflects light from the light-emitting section to produce noise exists at a position lower than the light-emitting section (e.g., when a glass table is provided between the light-emitting section and the imaging section (indicator)), reflected light can be prevented from entering the imaging section from the light-emitting section.
(7) In each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method,
the shielding section may be disposed at a position enabling the shielding section to shield part of the light from the light-emitting section emitted downward from the horizontal so that no light reflected from the space below enters the imaging section when the light-emitting section and the imaging section have a given reference positional relationship.
According to this feature, reflected light can be prevented from entering the imaging section from the light-emitting section when the light-emitting section and the imaging section have the reference positional relationship. Accordingly, reflected light can be reliably prevented from entering the imaging section from the light-emitting section when the indication position calculation system is in a basic state depending on the application.
(8) In each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method,
the light-emitting section may be directed in a direction that prevents light from the light-emitting section from being emitted downward from the horizontal so that no reflected light from the space below enters the imaging section when the light-emitting section and the imaging section have a given reference positional relationship.
According to this feature, reflected light can be prevented from entering the imaging section from the light-emitting section when the light-emitting section and the imaging section have the reference positional relationship by adjusting the direction of the light-emitting section without providing the shielding section. Accordingly, reflected light can be reliably prevented from entering the imaging section from the light-emitting section when the indication position calculation system is in a basic state depending on the application, by adjusting the direction of the light-emitting section.
(9) Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may further comprise:
a filter which is provided in the indicator and through which light in the same wavelength band as light from the light-emitting section is allowed to pass toward the imaging section.
This makes it possible to more accurately calculate the indication position by reducing noise due to light in a wavelength band differing from that of light emitted from the light-emitting section.
(10) Each of the indication position calculation system, the indicator for an indication position calculation system, and the indication position calculation method may comprise:
a plurality of the light-emitting sections having a predetermined positional relationship with each other,
wherein the calculation section may perform the position calculations based on a representative value corresponding to each of the light-emitting sections.
This makes it possible to more accurately calculate the indication position by calculating the indication position using the positional relationship between a plurality of representative values.
(11) According to one embodiment of the invention, there is provided a game system calculating an indication position of an indicator, the game system comprising:
a display section which displays an object;
a light-emitting section which has a given positional relationship with the display section;
an indicator including an imaging section which acquires an image and outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not the primary effective pixel is a secondary effective pixel satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on the identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making weight on the identification information of the primary effective pixel different from weight on the identification information of the secondary effective pixel in calculation.
According to the above embodiment, a game system can be provided which can more accurately calculate the indication position by increasing the degree of effects of bright pixels having relatively large light-reception information as pixels corresponding to a portion near the center of the light-emitting section and reducing the degree of effects of dark pixels having relatively small light-reception information as pixels corresponding to the peripheral portion of the light-emitting section when calculating the representative value of the effective pixels, for example.
These embodiments of the invention will be described below. Note that the embodiments described below do not in any way limit the scope of the invention laid out in the claims herein. In addition, not all of the elements of the embodiments described below should be taken as essential requirements of the invention.
1. Outline of System
FIG. 1 is a diagram schematically showing a game system 10 to which an indication position calculation system according to one embodiment of the invention is applied. The game system 10 according to this embodiment includes a display section 12 which displays a game image such as a target object TO on a display screen 11, a light-emitting unit 15 which is provided at the top of the display section 12 and includes two light-emitting sections 13 and 14, each having an infrared light source such as an infrared LED, a controller 16 (indicator, shooting device, or pointing device) which is held by a player P so that its position and direction can be arbitrarily changed and is used to indicate an arbitrary position on the display screen 11, and a game device 17 which performs a game process and the like.
Each of the light-emitting sections 13 and 14 according to this embodiment has the infrared light source on the front surface. The light-emitting sections 13 and 14 are provided on the top surface of the display section 12 at a predetermined interval so that the front surfaces face in the same direction as the display screen 11 (i.e., direction toward the player). Each of the light-emitting sections 13 and 14 emits infrared light forward from the light source.
The controller 16 according to this embodiment is formed to imitate the shape of a gun, and includes a barrel GB which is directed toward the display screen 11, a grip GG which extends from the barrel GB and is held by the player P with the hand, and a trigger GT which can be operated using the forefinger of the hand holding the grip GG. An image sensor 18 (light-receiving section or imaging section) such as a CMOS sensor is provided on the end of the barrel GB. The image sensor 18 receives infrared light (light in the same wavelength band as light emitted from the light-emitting sections 13 and 14) which enters the image sensor 18 along the direction in which the end of the controller 16 (barrel GB) is directed, and acquires (images) the infrared light.
In this embodiment, a filter FI which allows only light in a wavelength band corresponding to infrared light (i.e., light having the same wavelength as light emitted from the light-emitting sections 13 and 14) to pass through is provided on the front side of the image sensor 18 (i.e., the end of the controller 16 at a position forward from the image sensor 18) so that light in a predetermined wavelength band enters the image sensor 18. An image sensor having a light reception sensitivity in a wavelength band within a predetermined range including infrared light (light having a predetermined wavelength) may be used as the image sensor 18 without providing the filter FI.
The game system 10 calculates the positional relationship between the light-emitting sections 13 and 14 and the controller 16 based on position information relating to the light-emitting sections 13 and 14 in an image acquired by the image sensor 18 and reference position information set in advance, and calculates information relating to the indication position of the controller 16 on the display screen 11. The game system 10 determines whether or not the indication position of the controller 16 when the trigger GT of the controller 16 has been pulled coincides with the position of the target object TO displayed on the display screen 11, and performs a game process such as an image display control process or a score calculation process.
FIG. 2 is a diagram showing an example of an image PC1 acquired by the image sensor 18 when the controller 16 is directed toward the display screen 11. The image sensor 18 according to this embodiment includes 22,050 (175×126) light-receiving elements (imaging elements) arranged in a matrix on a rectangular surface, and acquires the image PC1. One pixel of the image PC1 corresponds to one light-receiving element. In this embodiment, the image PC1 is updated every 1/54th of a second depending on the position and the direction of the controller 16. The game system 10 calculates the information relating to the indication position of the controller 16 on the display screen 11 using position information relating to two infrared light source areas IrA1 and IrA2 (i.e., areas obtained by imaging infrared light from the light-emitting sections 13 and 14) in the image PC1. In this embodiment, an origin O (i.e., center of the image PC1) is considered to be the indication point of the controller 16, and information relating to the indication position of the controller 16 on the display screen 11 is calculated based on the positional relationship among the origin O, the infrared light source areas IrA1 and IrA2 in the image PC1, and a display screen area DpA which is an area corresponding to the display screen 11 in the image PC1.
In the example shown in FIG. 2, the infrared light source areas IrA1 and IrA2 are formed above the center of the image PC1 to some extent in a state in which a straight line I which connects the infrared light source areas IrA1 and IrA2 is rotated clockwise by omega degrees with respect to a reference line L (i.e., X axis of the image sensor 18) of the image PC1. In the example shown in FIG. 2, the origin O corresponds to a predetermined position on the lower right of the display screen area DpA so that the coordinates of the indication position of the controller 16 on the display screen 11 can be calculated. The rotation angle of the controller 16 around the indication direction axis with respect to the display screen 11 can be calculated based on the rotation angle omega of the straight line I which connects the infrared light source areas IrA1 and IrA2 with respect to the reference line L. The distance between the controller 16 and the display screen 11 in the example shown in FIG. 2 can be calculated based on the ratio of a reference distance D between the infrared light source areas IrA1 and IrA2 when the controller 16 is located at a predetermined distance from the display screen 11 and a distance d between the infrared light source areas IrA1 and IrA2 in the example shown in FIG. 2 by setting the reference distance D in advance.
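The geometry described above can be sketched in code. The following is a minimal illustration (not part of the patent) of how the rotation angle and range might be derived from the two imaged source areas; the function and parameter names are assumptions, and the angle's sign convention depends on the image coordinate system used.

```python
import math

def pose_from_sources(a, b, ref_distance, ref_range):
    """Estimate roll angle and range from two imaged light sources.

    a, b         -- (x, y) centers of the two infrared source areas
    ref_distance -- pixel distance D between the source areas when the
                    controller is at a known reference range
    ref_range    -- that known reference range (same unit as the result)
    """
    dx, dy = b[0] - a[0], b[1] - a[1]
    # Roll of the controller: angle of the line a-b relative to the
    # sensor's X axis (reference line L).
    omega = math.degrees(math.atan2(dy, dx))
    # Under a pinhole model the apparent separation d shrinks linearly
    # with range, so range scales by the ratio D / d.
    d = math.hypot(dx, dy)
    rng = ref_range * ref_distance / d
    return omega, rng
```

For example, two source areas 50 pixels apart on a horizontal line, with a reference distance D of 50 pixels calibrated at a 2.0 m range, yield a roll of 0 degrees and a range of 2.0 m.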
According to this embodiment, information relating to the indication position of the controller 16 on the display screen 11 and the like can be calculated even if the player P moves his hand while holding the controller 16 as shown in FIG. 1 or changes the position and the direction of the controller 16 due to the movement of the player P.
In the example shown in FIG. 1, the game device 17 and the controller 16 are connected via a cable. Note that information may be transmitted and received between the game device 17 and the controller 16 via wireless communication. The light-emitting sections 13 and 14 need not be necessarily provided at the top of the display section 12. The light-emitting sections 13 and 14 may be provided at an arbitrary position (e.g., bottom or side) of the display section 12. Specifically, the light-emitting sections 13 and 14 may be provided to have a given positional relationship with the display section 12 within a range in which the image sensor 18 can receive light when the controller 16 is directed toward the display screen 11.
2. Configuration of Image Sensor
The details of the configuration of the image sensor 18 according to this embodiment are as follows. FIG. 3 provides a side view of the controller 16 including the image sensor 18 and an enlarged front view of the image sensor 18. FIG. 3 is a view illustrative of the installation state of the image sensor 18 in the controller 16. As shown in FIG. 3, the image sensor 18 according to this embodiment is provided in the controller 16 so that its light-receiving surface SF perpendicularly intersects an indication direction BD of the barrel GB. The image sensor 18 is provided in the controller 16 so that the X axis (reference line L) of the image sensor 18 perpendicularly intersects an installation direction GD of the grip GG.
Therefore, when the player P holds the controller 16 so that the installation direction GD of the grip GG faces perpendicularly downward, the indication direction BD and the X axis (reference line L) of the image sensor 18 become horizontal. In this embodiment, a state in which the installation direction GD of the grip GG faces perpendicularly downward is referred to as a reference position of the controller 16. Specifically, the term “reference position of the controller 16” refers to an average position (direction or use state) of the controller 16 when the player P holds the controller 16 as if to hold an actual gun.
In the image sensor 18, each of the 22,050 (175×126) light-receiving elements arranged on the rectangular surface SF receives infrared light which enters along the direction in which the end of the controller 16 is directed, and successively outputs light-reception information relating to each pixel. The controller 16 then determines whether or not the light-reception information relating to each pixel is larger than a threshold value set corresponding to the quantity of light emitted from the light-emitting sections 13 and 14 (i.e., whether or not each pixel is an effective pixel which satisfies a given condition) to determine whether or not each pixel is a pixel corresponding to the infrared light. However, a pixel corresponding to infrared light may be a pixel corresponding to an infrared light source other than the light-emitting sections 13 and 14 (i.e., a light source which emits light in the same wavelength band as light emitted from the light-emitting sections 13 and 14) instead of a pixel corresponding to the light-emitting sections 13 and 14.
FIG. 4A is a diagram showing a state around the light-emitting sections 13 and 14 viewed from the front side of the display screen 11. When forming the game system 10 in a room or the like using a consumer game device as the game device 17 according to this embodiment and using a television monitor as the display section 12, an infrared light source other than the light-emitting sections 13 and 14 may exist. As shown in FIG. 4A, a window WD through which external light enters, an incandescent lamp WH, and the like serve as infrared light sources. When the image sensor 18 receives infrared light which enters through the window WD, light emitted from the incandescent lamp WH, and the like, the received infrared light serves as noise so that an accurate indication position may not be calculated.
FIG. 4B shows an image PC2 acquired by the image sensor 18 in a state shown in FIG. 4A when the controller 16 is held in the reference position toward the display screen 11. As shown in FIG. 4A, the window WD, the incandescent lamp WH, and the like are generally positioned above the light-emitting sections 13 and 14 disposed at the top of the display screen 11 in a real space. In an image PC2 acquired by the image sensor 18, as shown in FIG. 4B, pixels corresponding to a noise area NA1 due to the window WD and a noise area NA2 due to the incandescent lamp WH generally occur in an area UA obtained by imaging a portion above the light-emitting sections 13 and 14.
A related-art indication position calculation system successively outputs the light-reception information relating to each pixel while scanning each row from the upper row to the lower row, starting from the uppermost-leftmost pixel (starting pixel) toward the lowermost-rightmost pixel (end pixel) in the image PC2 shown in FIG. 4B, and determines whether or not each pixel is a pixel corresponding to the light source. Specifically, in a related-art indication position calculation system, the image sensor 18 is provided in the controller 16 so that the starting pixel is disposed on the upper side and the end pixel is disposed on the lower side when the controller 16 is held in the reference position. Therefore, a related-art indication position calculation system successively outputs the light-reception information relating to each pixel from upper pixels in which pixels corresponding to noise tend to occur. Accordingly, pixels corresponding to the light source include not only pixels corresponding to the light-emitting sections 13 and 14, but also pixels corresponding to the noise areas NA1 and NA2. Therefore, a related-art indication position calculation system must distinguish pixels corresponding to the light-emitting sections 13 and 14 from pixels corresponding to noise.
On the other hand, pixels corresponding to noise rarely occur in an area DA obtained by imaging a portion below the light-emitting sections 13 and 14, as shown in FIG. 4B. In this embodiment, the image sensor 18 is provided in the controller 16 so that a starting pixel SP is disposed on the lower side and an end pixel EP is disposed on the upper side when the controller 16 is held in the reference position, taking such a situation into consideration. The image sensor 18 horizontally scans pixels in the leftward direction from the lowermost-rightmost pixel in the acquired image PC2 as the starting pixel SP. When the pixels in the lowermost row have been completely scanned, the image sensor 18 horizontally scans pixels in the next (upper) row in the leftward direction from the rightmost pixel. The image sensor 18 successively outputs the light-reception information relating to each pixel while scanning pixels in each row from the lower row to the upper row until the uppermost-leftmost end pixel EP is reached. Specifically, when the controller 16 is held in the reference position, the image sensor 18 successively outputs the light-reception information relating to each pixel from pixels on the lower side in which pixels corresponding to noise rarely occur.
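The reversed scan order described above can be sketched as follows. This is an illustrative generator (the function name and the use of (x, y) coordinates are assumptions; the sensor itself simply emits light-reception values in this sequence) that enumerates pixels from the lowermost-rightmost starting pixel to the uppermost-leftmost end pixel.

```python
def scan_order(width=175, height=126):
    """Yield pixel (x, y) positions in the scan order described above:
    start at the lowermost-rightmost pixel, scan each row leftward,
    then move up one row and repeat, ending at the uppermost-leftmost
    pixel. With this order, pixels in the lower area DA, where noise
    rarely occurs, are output first."""
    for y in range(height - 1, -1, -1):      # bottom row first
        for x in range(width - 1, -1, -1):   # rightmost pixel first
            yield x, y
```

The first position yielded is (174, 125) (the starting pixel SP) and the last is (0, 0) (the end pixel EP), covering all 22,050 pixels.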
According to this embodiment, a first effective pixel FP of which the light-reception information has been output from the image sensor 18 and which has been determined to satisfy a given condition is a pixel corresponding to the light-emitting section 13 or 14 when the controller 16 is held in the reference position. Therefore, the indication position can be accurately calculated using the first effective pixel FP as a pixel corresponding to the light-emitting section 13 or 14 without determining whether the pixel corresponding to the light source is a pixel corresponding to the light-emitting section 13 or 14 or a pixel corresponding to noise. According to this embodiment, since it is unnecessary to determine whether the effective pixel which has been determined to satisfy a given condition is a pixel corresponding to the light-emitting section 13 or 14 or a pixel corresponding to noise, the indication position can be promptly and accurately calculated with a reduced processing load.
3. Circuit Configuration
FIG. 5 is a functional block diagram showing an indication position detection section provided in the controller 16 according to this embodiment. The indication position detection section includes the image sensor 18, an ASIC 200, an MCU 300, and a USB 400.
The image sensor 18 is initialized when the image sensor 18 has received an initialization signal from the MCU 300. The image sensor 18 acquires the light-reception information in pixel units by receiving infrared light which enters along a direction in which the end of the controller 16 is directed to acquire an image along the direction in which the end of the controller 16 is directed. In this embodiment, the light-reception information (quantity of received light) of each pixel is acquired using a two-digit hexadecimal number in the range from 00 to FF. A starting pixel is set on one end of the acquired image, and an end pixel is set on the other end. The image sensor 18 successively outputs the light-reception information relating to each pixel from the starting pixel to the end pixel to the ASIC 200.
The image sensor 18 outputs a pixel clock signal and a vertical synchronization signal to a control section 220 of the ASIC 200, and outputs the pixel clock signal to a pixel counter 230. This allows the control section 220 and the pixel counter 230 to synchronize with the output timing of the light-reception information relating to each pixel from the image sensor 18.
The ASIC 200 includes a determination section 210, the control section 220, and the pixel counter 230.
The determination section 210 determines whether or not each pixel satisfies a given condition based on the light-reception information relating to each pixel successively output from the image sensor 18. Specifically, the determination section 210 determines that a first condition is satisfied when the light-reception information relating to a pixel is 80 (two-digit hexadecimal number) or more. The determination section 210 determines a pixel of which the light-reception information is 80 or more to be a pixel corresponding to the infrared light source.
In this embodiment, the determination section 210 determines whether or not each pixel satisfies the first condition, and determines whether or not an effective pixel which satisfies the first condition satisfies a second condition. Specifically, a pixel group of pixels corresponding to the light source is bright at the center and becomes darker as the distance from the center increases due to a decrease in luminance. In this embodiment, the determination section 210 determines that the second condition is satisfied when the light-reception information relating to an effective pixel is F0 or more. The determination section 210 determines a pixel of which the light-reception information is 80 to EF to be a dark pixel corresponding to the peripheral portion of the infrared light source, and determines a pixel of which the light-reception information is F0 to FF to be a bright pixel corresponding to the center portion of the infrared light source.
The determination section 210 outputs an enable signal to the control section 220 while determining whether or not the light-reception information relating to each pixel successively output from the image sensor 18 satisfies the first condition. When the light-reception information relating to a pixel satisfies the second condition, the determination section 210 outputs a bright signal which indicates that the pixel is bright to a pixel FIFO 340 of the MCU 300.
The control section 220 allows a 15-bit count value output from the pixel counter 230 to be written into the pixel FIFO 340 when the control section 220 receives the enable signal from the determination section 210. The pixel counter 230 is reset when it receives, via the control section 220, the vertical synchronization signal output from the image sensor 18, and outputs the 15-bit count value from the starting pixel (0) to the end pixel (22,050) of the image sensor 18 in synchronization with the output of the light-reception information relating to each pixel. In this embodiment, the count value of an effective pixel of which the light-reception information has been determined to satisfy the first condition is written into the pixel FIFO 340.
In this case, the pixel counter 230 writes 16-bit data into the pixel FIFO 340 by adding 0 as the most significant bit of the 15-bit count value when the bright signal is received from the determination section 210 and adding 1 as the most significant bit of the 15-bit count value when the bright signal is not received from the determination section 210. Specifically, the count value of an effective pixel of which the light-reception information has been determined to satisfy the first condition and the data which indicates whether or not the effective pixel satisfies the second condition (16 bits in total) are stored in the pixel FIFO 340.
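The 16-bit packing described above (attribute bit plus 15-bit count value) can be illustrated with simple bit operations. The function names are assumptions for illustration; the bit layout follows the description: most significant bit 0 when the bright signal is received, 1 otherwise.

```python
def pack_effective_pixel(count, bright):
    """Pack a 15-bit pixel count value with the attribute bit as the
    most significant bit of a 16-bit word: 0 when the bright signal
    was received, 1 when it was not."""
    assert 0 <= count < (1 << 15)
    return count if bright else (1 << 15) | count

def unpack_effective_pixel(word):
    """Recover (count, bright) from the stored 16-bit word."""
    return word & 0x7FFF, (word >> 15) == 0
```

For example, packing count value 14397 as a bright pixel stores 14397 unchanged, while packing it as a dark pixel sets bit 15, giving 0x8000 | 14397.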
The MCU 300 includes the pixel FIFO 340, a light source position calculation section 350, and an indication position calculation section 360.
The pixel FIFO 340 stores 128 pieces of 16-bit data relating to the effective pixel based on the data output from the ASIC 200. The pixel FIFO 340 successively outputs the 16-bit data relating to each effective pixel to the light source position calculation section 350 in a first-in first-out manner.
The light source position calculation section 350 sets a first determination area in a predetermined range including a first effective pixel based on first data stored in the pixel FIFO 340 after the image sensor 18 has been initialized (i.e., the count value of the first effective pixel which has been determined to satisfy the first condition). The light source position calculation section 350 sets a second determination area in a predetermined range including a second effective pixel based on the count value of the second effective pixel, which is a pixel in the area other than the first determination area and is the first pixel in that area determined to satisfy the first condition. The light source position calculation section 350 performs position calculations based on the count values (identification information) of effective pixels which satisfy the first condition and are included in the first determination area and the count values of effective pixels which satisfy the first condition and are included in the second determination area.
FIG. 6 shows part of an enlarged image acquired by the image sensor 18. FIG. 6 is a view illustrative of calculations of a representative value of the determination area. In FIG. 6, a white square indicates an effective pixel which satisfies the first condition and the second condition, a gray square indicates an effective pixel which satisfies the first condition but does not satisfy the second condition, and a black area indicates ineffective pixels which do not satisfy the first condition and the second condition. As shown in FIG. 6, the image sensor 18 horizontally and sequentially scans pixels from the lower right pixel and outputs the light-reception information relating to each pixel. When the determination section 210 determines whether or not each pixel satisfies the first condition, the determination section 210 determines that a pixel P1 is a first effective pixel which satisfies the first condition. When the 16-bit data relating to the first effective pixel P1 is input to the light source position calculation section 350 from the pixel FIFO 340, the light source position calculation section 350 sets a first determination area JA1 in a circular region in a predetermined range including the first effective pixel, as shown in FIG. 6. Specifically, the light source position calculation section 350 sets the first determination area JA1 in a range in which it is estimated that other effective pixels corresponding to the light-emitting section 13 or 14 exist when the first effective pixel is a pixel corresponding to the light-emitting section 13 or 14.
The light source position calculation section 350 determines whether or not each effective pixel is included in the first determination area JA1 based on the count value of each effective pixel sequentially input from the pixel FIFO 340. The light source position calculation section 350 calculates a representative value (center-of-gravity coordinates) of the first determination area based on the count value of each effective pixel included in the first determination area JA1. In this embodiment, the light source position calculation section 350 transforms the count value of each effective pixel into a coordinate value of each pixel in an image PC acquired by the image sensor 18, and then calculates the center-of-gravity coordinates of the first determination area JA1.
Specifically, the light source position calculation section 350 calculates a remainder when dividing the count value (one of 0 to 22,050) by 175 which is the number of pixels on the X axis of the image sensor 18 to calculate the X coordinate of the pixel in the image from the count value, and calculates a quotient when dividing the count value by 175 to calculate the Y coordinate of the pixel in the image from the count value. For example, the light source position calculation section 350 calculates a coordinate value (0, 0) from the count value 0 of the starting pixel as indicated by X=0 mod 175=0 and Y=0/175=0. The light source position calculation section 350 calculates a coordinate value (47, 82) from the count value 14397 as indicated by X=14397 mod 175=47 and Y=14397/175=82.
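The count-to-coordinate conversion above maps directly onto an integer divmod. The following sketch (function name assumed for illustration) reproduces the worked examples: count 0 maps to (0, 0) and count 14397 to (47, 82).

```python
X_PIXELS = 175  # number of pixels on the X axis of the image sensor

def count_to_coords(count):
    """Convert a pixel count value into (X, Y) image coordinates:
    X is the remainder and Y the quotient of division by 175."""
    y, x = divmod(count, X_PIXELS)
    return x, y
```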
The light source position calculation section 350 divides the sum of the X coordinate components of the effective pixels included in the first determination area JA1 by the number of the effective pixels included in the first determination area JA1 to calculate the X coordinate component of the center-of-gravity coordinates of the first determination area JA1. Likewise, the light source position calculation section 350 divides the sum of the Y coordinate components of the effective pixels included in the first determination area JA1 by the number of the effective pixels included in the first determination area JA1 to calculate the Y coordinate component of the center-of-gravity coordinates of the first determination area JA1. In this embodiment, the light source position calculation section 350 calculates the center-of-gravity coordinates of the first determination area JA1 based on the coordinate component value of each effective pixel while changing weighting on the coordinate component value of each effective pixel between an effective pixel which is included in the first determination area JA1 and satisfies the second condition (bright pixel) and an effective pixel which is included in the first determination area JA1 and does not satisfy the second condition (dark pixel).
Specifically, the light source position calculation section 350 performs the above calculations while doubling both the coordinate component values and the count of bright pixels. In the example shown in FIG. 6, the pixels P7 and P13, of which the X coordinate component value is 45, are bright pixels. Therefore, the coordinate component value 45 of each of the pixels P7 and P13 is doubled before being added to the coordinate sum, and the number of these bright pixels (2) is likewise doubled before being added to the pixel count. Since the pixel P19 is a dark pixel, its coordinate component value 45 is added to the coordinate sum as-is, and the number of dark pixels (1) is added to the pixel count as-is. The pixels having other X coordinate component values are treated in the same manner. Therefore, the X component X1 of the center-of-gravity coordinates of the first determination area JA1 in the example shown in FIG. 6 is calculated as follows.
X1 = [2 × (45 × 2 + 46 × 3 + 47 × 6 + 48 × 6 + 49 × 5 + 50 × 2) + (45 + 46 × 2 + 47 + 50)] / [2 × (2 + 3 + 6 + 6 + 5 + 2) + (1 + 2 + 1 + 1)] = (2286 + 234)/(48 + 5) = 2520/53 = 47.547
Likewise, the Y component Y1 of the center-of-gravity coordinates of the first determination area JA1 is calculated as follows.

Y1 = [2 × (81 + 82 × 3 + 83 × 5 + 84 × 6 + 85 × 5 + 86 × 3 + 87) + (81 + 82 + 83 + 85 + 86)] / [2 × (1 + 3 + 5 + 6 + 5 + 3 + 1) + (1 + 1 + 1 + 1 + 1)] = (4032 + 417)/(48 + 5) = 4449/53 = 83.943

Specifically, the center-of-gravity coordinates (X1, Y1) of the first determination area JA1 in the example shown in FIG. 6 are calculated to be (47.547, 83.943).
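The weighted center-of-gravity calculation can be expressed compactly. The sketch below (function name and list names are illustrative) computes one coordinate component with bright pixels weighted double, and the sample data reproduces the X-component multiplicities read off from the calculation above, giving 2520/53.

```python
def weighted_centroid_component(bright_values, dark_values, weight=2):
    """Center-of-gravity of one coordinate component, with bright
    pixels (those satisfying the second condition) counted `weight`
    times and dark pixels counted once, as in the worked example."""
    num = weight * sum(bright_values) + sum(dark_values)
    den = weight * len(bright_values) + len(dark_values)
    return num / den

# X coordinate components of the effective pixels in the FIG. 6
# example (multiplicities taken from the calculation above).
bright_x = [45] * 2 + [46] * 3 + [47] * 6 + [48] * 6 + [49] * 5 + [50] * 2
dark_x = [45, 46, 46, 47, 50]
```

Calling `weighted_centroid_component(bright_x, dark_x)` yields 2520/53, i.e. approximately 47.547, matching X1 above.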
The light source position calculation section 350 sets a second determination area JA2 in a predetermined range including a second effective pixel which is included in an area other than the first determination area JA1 and is a first pixel which has been determined to satisfy the first condition, and calculates the center-of-gravity coordinates (X2, Y2) of the second determination area JA2 in the same manner as the first determination area JA1. The light source position calculation section 350 outputs the center-of-gravity coordinates (X1, Y1) of the first determination area JA1 and the center-of-gravity coordinates (X2, Y2) of the second determination area JA2 to the indication position calculation section 360.
The indication position calculation section 360 calculates the indication position of the controller 16 on the display screen 11 based on the center-of-gravity coordinates (X1, Y1) of the first determination area JA1 and the center-of-gravity coordinates (X2, Y2) of the second determination area JA2. Specifically, whichever of the center-of-gravity coordinates (X1, Y1) and (X2, Y2) is positioned relatively on the left of the image PC is used as the center coordinates of the light-emitting section 13 disposed on the upper left portion of the display section 12, and whichever is positioned relatively on the right of the image PC is used as the center coordinates of the light-emitting section 14 disposed on the upper right portion of the display section 12. Using these center coordinates, the indication position calculation section 360 calculates the positional relationship between the light-emitting sections 13 and 14 and the controller 16, calculates the indication position of the controller 16 on the display screen 11, and outputs the calculated indication position to the USB 400.
The USB 400 includes a USB interface 410 and a key function section 420.
The USB interface 410 outputs the indication position data input from the MCU 300 to the game device 17. The key function section 420 outputs an operation signal based on operation of the trigger and other operation keys of the controller 16. The game device 17 takes the indication position data relating to the controller 16 at the timing at which the trigger operation signal is input as the impact position of a virtual bullet on the game screen (display screen 11), and performs game calculations such as determining whether or not the target has been hit.
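The hit determination described above can be pictured as follows (a hedged sketch; representing the target as an axis-aligned rectangle is an assumption for illustration, not the patent's data structure):

```python
def on_trigger(indication_pos, target_rect):
    """When the trigger operation signal arrives, take the current
    indication position as the impact position of the virtual bullet
    and test it against the target's bounding rectangle.
    target_rect = (left, top, right, bottom) is an assumed layout."""
    x, y = indication_pos
    left, top, right, bottom = target_rect
    return left <= x <= right and top <= y <= bottom
```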
4. Flow of Position Calculation Process
The flow of a position calculation process of the controller 16 is described below using a flowchart. FIG. 7 is a flowchart showing an example of an outline of the position calculation process according to this embodiment. As shown in FIG. 7, the determination section 210 determines whether or not each pixel satisfies a given condition based on the light-reception information relating to each pixel output from the image sensor 18 (step S10). The light source position calculation section 350 calculates the center-of-gravity positions of two determination areas corresponding to the two light sources based on the identification information relating to the first effective pixel and the second effective pixel (step S12). The indication position calculation section 360 calculates the indication position of the controller 16 based on the center-of-gravity positions of the two determination areas (step S14). The indication position calculation section 360 outputs the indication position of the controller 16 to the game device 17 (step S16). In this embodiment, the process in the steps S10 to S16 is repeated every 1/54th of a second.
FIG. 8 is a flowchart showing an example of the details of the pixel determination process in the step S10 shown in FIG. 7. As shown in FIG. 8, the determination section 210 acquires the light-reception information from the image sensor 18 in pixel units (step S100), and determines whether or not the light-reception information is equal to or larger than a first threshold value (i.e., satisfies the first condition) (step S102). When the determination section 210 has determined that the light-reception information is not equal to or larger than the first threshold value (N in step S102), the determination section 210 acquires the light-reception information relating to another pixel (step S100). When the determination section 210 has determined that the light-reception information is equal to or larger than the first threshold value (Y in step S102), the determination section 210 determines whether or not the light-reception information is equal to or larger than a second threshold value (i.e., satisfies the second condition) (step S104). When the determination section 210 has determined that the light-reception information is equal to or larger than the second threshold value (Y in step S104), the determination section 210 sets the most significant bit of the 16-bit data at 0 as the attribute bit (step S106). When the determination section 210 has determined that the light-reception information is not equal to or larger than the second threshold value (N in step S104), the determination section 210 sets the most significant bit of the 16-bit data at 1 as the attribute bit (step S108). The determination section 210 stores the 16-bit data in which the most significant bit is the attribute bit and the lower-order 15 bits are the count value of the pixel in the pixel FIFO 340 (step S110). 
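The packing performed in steps S104 to S110 can be sketched as follows (the threshold values are assumed for illustration, and the row-major split of the count value into X and Y components anticipates step S200 and assumes a sensor width of 147 pixels):

```python
FIRST_THRESHOLD = 128    # assumed values; the patent does not give them
SECOND_THRESHOLD = 200

def classify_pixel(count, level):
    """Return the 16-bit entry for an effective pixel, or None if the
    pixel does not satisfy the first condition (N in step S102).
    MSB = attribute bit (0: bright pixel at or above the second
    threshold, 1: pixel satisfying only the first threshold);
    lower-order 15 bits = count value of the pixel (step S110)."""
    if level < FIRST_THRESHOLD:
        return None
    attr = 0 if level >= SECOND_THRESHOLD else 1   # steps S104-S108
    return (attr << 15) | (count & 0x7FFF)

def unpack(entry, width=147):
    """Sketch of step S200: split the 15-bit count value into the X and
    Y component values (row-major split by an assumed sensor width)."""
    attr = entry >> 15
    count = entry & 0x7FFF
    y, x = divmod(count, width)
    return attr, x, y
```

A count value up to 22,050 fits in the 15 available bits (2^15 = 32,768), which is consistent with the single-frame pixel range mentioned in the light source position calculation process.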
In this embodiment, the process in the steps S100 to S110 is repeatedly performed on each pixel of the image sensor 18 in synchronization with the pixel clock signal output from the image sensor 18.
FIG. 9 is a flowchart showing an example of the details of the light source position calculation process in the step S12 shown in FIG. 7. As shown in FIG. 9, the light source position calculation section 350 acquires the 16-bit data relating to the first effective pixel from the pixel FIFO 340, and separates the lower-order 15-bit count value into the X component value and the Y component value (step S200). The light source position calculation section 350 determines whether or not the first determination area has been set (step S202). When the light source position calculation section 350 has determined that the first determination area has not been set (N in step S202), the light source position calculation section 350 determines the pixel to be the first effective pixel, and sets the first determination area in an area in a predetermined range including the first effective pixel (step S204).
The light source position calculation section 350 determines whether or not the most significant bit of the 16-bit data is 0 (step S206). When the light source position calculation section 350 has determined that the most significant bit is 0 (i.e., bright pixel) (Y in step S206), the light source position calculation section 350 doubles the X component value, the Y component value, and the number of pixels (step S208), and adds the X component value to a register RX1, the Y component value to a register RY1, and the number of pixels to a register RC1 (step S210). When the light source position calculation section 350 has determined that the most significant bit is 1 (i.e., dark pixel) (N in step S206), the light source position calculation section 350 adds the X component value to the register RX1, the Y component value to the register RY1, and the number of pixels to the register RC1 without doubling the X component value, the Y component value, and the number of pixels (step S210).
The light source position calculation section 350 determines whether or not the process from the step S200 has been performed on all effective pixels included in the pixels 0 to 22,050 (pixels of one frame) of the image sensor 18 (step S212). When the light source position calculation section 350 has determined that the process has not yet been performed on all effective pixels (N in step S212), the light source position calculation section 350 returns to the step S200, and acquires the 16-bit data relating to the next pixel from the pixel FIFO 340. The light source position calculation section 350 repeats the process in the steps S200 to S212.
With regard to the next effective pixel, the light source position calculation section 350 determines that the first determination area has been set in the step S202 (Y in step S202), and determines whether or not the effective pixel is positioned in the first determination area based on the coordinate value of the effective pixel (step S214). When the light source position calculation section 350 has determined that the effective pixel is positioned in the first determination area (Y in step S214), the light source position calculation section 350 performs the process in the steps S206 to S212. When the light source position calculation section 350 has determined that the effective pixel is not positioned in the first determination area (N in step S214), the light source position calculation section 350 determines whether or not the second determination area has been set (step S216).
When the light source position calculation section 350 has determined that the second determination area has not been set (N in step S216), the light source position calculation section 350 determines the effective pixel to be the second effective pixel, and sets the second determination area in an area in a predetermined range including the second effective pixel (step S218). In steps S220 to S224, the light source position calculation section 350 performs a process similar to the process in the steps S206 to S210 performed on the effective pixel in the first determination area, and adds the X component value to a register RX2, the Y component value to a register RY2, and the number of pixels to a register RC2 (step S224).
When the light source position calculation section 350 has determined that the second determination area has been set in the step S216 (Y in step S216), the light source position calculation section 350 determines whether or not the effective pixel is positioned in the second determination area based on the coordinate value of the effective pixel (step S226). When the light source position calculation section 350 has determined that the effective pixel is positioned in the second determination area (Y in step S226), the light source position calculation section 350 performs the process in the steps S220 to S224. When the light source position calculation section 350 has determined that the effective pixel is not positioned in the second determination area (N in step S226), the light source position calculation section 350 determines whether or not all effective pixels have been processed without adding a value to the register (step S212).
When the light source position calculation section 350 has determined that all effective pixels have been processed (Y in step S212), the light source position calculation section 350 determines whether or not data relating to the two determination areas has been acquired (step S228). When the light source position calculation section 350 has determined that data relating to the two determination areas has been acquired (Y in step S228), the light source position calculation section 350 divides the sum of the values stored in the register RX1 by the sum of the values stored in the register RC1 to calculate the X component X1 of the center-of-gravity coordinates of the first determination area, divides the sum of the values stored in the register RY1 by the sum of the values stored in the register RC1 to calculate the Y component Y1 of the center-of-gravity coordinates of the first determination area, divides the sum of the values stored in the register RX2 by the sum of the values stored in the register RC2 to calculate the X component X2 of the center-of-gravity coordinates of the second determination area, and divides the sum of the values stored in the register RY2 by the sum of the values stored in the register RC2 to calculate the Y component Y2 of the center-of-gravity coordinates of the second determination area. The light source position calculation section 350 outputs the center-of-gravity coordinates (X1, Y1) of the first determination area and the center-of-gravity coordinates (X2, Y2) of the second determination area to the indication position calculation section 360 (step S232).
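The register accumulation in steps S206 to S210 and the final division can be sketched as follows; note how a bright pixel (attribute bit 0) is counted twice, weighting the center of gravity toward the brighter part of the light source area:

```python
def centroid(effective_pixels):
    """Accumulate the registers RX, RY, RC for one determination area
    and return the center-of-gravity coordinates.  Bright pixels
    (attribute bit 0) have their X component value, Y component value,
    and pixel count doubled before being added (step S208)."""
    rx = ry = rc = 0
    for x, y, bright in effective_pixels:
        w = 2 if bright else 1   # double the contribution of bright pixels
        rx += w * x
        ry += w * y
        rc += w                  # weighted number of pixels
    return rx / rc, ry / rc      # e.g. X1 = RX1 / RC1, Y1 = RY1 / RC1
```

For example, one bright pixel at x = 10 and one dark pixel at x = 20 give a centroid of x = (2·10 + 20) / 3 ≈ 13.33, pulled toward the bright pixel.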
When the light source position calculation section 350 has determined that data relating to the two determination areas has not been acquired (i.e., only data relating to the first determination area has been acquired) (N in step S228), the light source position calculation section 350 performs an out-of-range setting which indicates that the controller 16 is directed in an area outside the detection range (step S234), and outputs the setting information to the indication position calculation section 360 (step S236).
When the light source position calculation section 350 has output the data to the indication position calculation section 360, the light source position calculation section 350 initializes each register and the determination areas (step S238). The process in the steps S200 to S238 is repeated each time the image sensor 18 is initialized, and a first effective pixel of the next frame is output from the pixel FIFO 340.
5. Configuration of Light-Emitting Section
The details of the configuration of the light-emitting section according to this embodiment are described below.
5-1. Noise Prevention by Shielding Section
FIG. 10A is a side view showing a state in which the controller 16 is directed toward the display screen 11. In this embodiment, each of the light-emitting sections 13 and 14 includes a light source 22 which emits infrared light that has a certain directivity and travels in a direction within a given range so that the image sensor 18 can receive the infrared light from the light-emitting sections 13 and 14 when the controller 16 and the light-emitting sections 13 and 14 have a positional relationship within a predetermined range. Specifically, each of the light-emitting sections 13 and 14 causes the light source 22 to emit infrared light so that the infrared light travels in a travel direction within a given range with respect to a center direction CD which is the direction of the light source 22. The center direction of the light source 22 may be a center luminous intensity direction (maximum luminous intensity direction) of the light source 22, for example.
As shown in FIG. 10A, when setting up the game system 10 in a room or the like, an object (e.g., mirror or glass table) which reflects the infrared light from the light source 22 may exist between the light source 22 and the image sensor 18. Since the light from the light source 22 travels in a direction within a given range, reflected light RL having the same wavelength and intensity as the direct light DL from the light source 22 may occur at a position differing from the position of the light source 22 (e.g., a reflecting surface such as a mirror or glass table). If the image sensor 18 receives the reflected light RL, an accurate indication position cannot be calculated, since the reflected light RL acts as noise.
FIG. 11A shows an image PC3 acquired by the image sensor 18 in the state shown in FIG. 10A. In the state shown in FIG. 10A, since the image sensor 18 receives the direct light DL from the light sources 22 and the reflected light RL, noise areas NA3 and NA4 corresponding to the reflected light RL occur under the infrared light source areas IrA1 and IrA2 corresponding to the light sources 22, as shown in FIG. 11A.
In this embodiment, as shown in FIG. 11B, the reflected light RL which occurs below the light-emitting sections 13 and 14 is prevented by providing a shielding section 24 which shields light that travels downward from the light source 22 (light-emitting sections 13 and 14). Specifically, the shielding section 24 is provided at a position at which the direct light DL from the light source 22 is prevented from traveling in the direction in which the reflected light RL that enters the image sensor 18 occurs when the light source 22 (light-emitting sections 13 and 14) and the image sensor 18 (controller 16) have a given reference positional relationship. Therefore, the image sensor 18 receives the direct light DL from the light source 22 and does not receive the reflected light RL when the light source 22 (light-emitting sections 13 and 14) and the image sensor 18 (controller 16) have a given reference positional relationship.
In this embodiment, the light source 22 and the image sensor 18 have the reference positional relationship when the controller 16 is held in the above-mentioned reference position, is located at a reference position away from the light-emitting sections 13 and 14 by four meters, and is positioned in a reference direction in which the controller 16 is directed toward the light source 22. The position, direction, size, shape, and the like of the shielding section 24 are determined so that the reflected light RL does not enter within the angle of view theta (light-reception range) of the image sensor 18 when the light source 22 and the image sensor 18 have the reference positional relationship.
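Whether a given reflection point falls inside the angle of view can be checked with simple 2-D geometry (a side-view sketch under assumed coordinates; the patent defines the relationship geometrically rather than by formula):

```python
import math

def within_view(sensor_pos, aim_dir, point, theta_deg):
    """Does a point (e.g., where reflected light RL originates) fall
    inside the image sensor's angle of view theta?  Coordinates are
    2-D (side view) and purely illustrative."""
    vx = point[0] - sensor_pos[0]
    vy = point[1] - sensor_pos[1]
    # Signed angle between the aim direction and the direction to the point.
    angle = math.degrees(math.atan2(vy, vx)
                         - math.atan2(aim_dir[1], aim_dir[0]))
    angle = (angle + 180) % 360 - 180    # normalize to [-180, 180)
    return abs(angle) <= theta_deg / 2
```

With the sensor four meters from the screen and aimed at the light source, a reflection point on a nearby table may fall outside a narrow angle of view but inside a wide one, which is why the angle of view theta enters the shielding design.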
In this embodiment, the position, direction, size, shape, and the like of the shielding section 24 are determined on the assumption that the reference position is the position at the maximum distance from the light source 22 at which the image sensor 18 can obtain the quantity of light (luminous intensity or luminous flux) necessary for the game system 10 to calculate an accurate indication position, and that the reference direction is the average (basic) direction of the controller 16 when the player plays a game while holding the controller 16 toward the display screen 11. In the example shown in FIG. 10B, a plate-shaped shielding section 24 is provided under and near the light source 22 so as to protrude forward from the front surface of each of the light-emitting sections 13 and 14. Therefore, light which travels downward from the light source 22 in a direction within a predetermined angle phi can be shielded by the shielding section 24.
FIG. 11B shows light received by the image sensor 18 (i.e., image data stored in image data area PA) in the state shown in FIG. 10B. In the state shown in FIG. 10B, since the image sensor 18 receives the direct light DL from the light sources 22 but does not receive the reflected light RL, the noise areas NA3 and NA4 corresponding to the reflected light RL do not occur under the infrared light source areas IrA1 and IrA2 corresponding to the light sources 22, as shown in FIG. 11B.
Therefore, according to this embodiment, the reflected light RL which enters the image sensor 18 does not occur when the controller 16 is directed toward the display screen 11 within a game play range in which the distance between the light-emitting sections 13 and 14 and the controller 16 is four meters or less, as shown in FIG. 10B. Reflected light RL still occurs in the state shown in FIG. 10B; however, it does not enter the image sensor 18 unless the controller 16 is positioned outside the game play range.
Note that the reference positional relationship is not limited to the above-mentioned example. Various positional relationships may be set. For example, the reference position may be set depending on the intensity of the light source 22 (i.e., distance at which the quantity of light and the like necessary for calculating an accurate indication position are ensured), the luminous intensity distribution curve, and the beam angle (i.e., diffusion range of predetermined quantity of light). The reference direction may be set depending on the application of the indication position calculation system. Specifically, the reference direction may be appropriately set depending on the basic position of an operator (player P) who holds an indicator (controller 16) with respect to the indication plane (display screen 11). For example, when applying the indication position calculation system according to this embodiment to a presentation system, the operator who operates the indicator is rarely positioned perpendicularly to the indication plane (e.g., screen), but is generally positioned diagonally in front of the indication plane. The operator generally directs the indicator toward the indication plane (light source) disposed above the operator. In this case, a direction upward from a position diagonally in front of the indication plane may be set to be the reference direction.
5-2. Details of Shielding Section
FIG. 12 is a perspective view of the light-emitting section 13 (14) of the light-emitting unit 15 according to this embodiment. In this embodiment, the light-emitting section 13 (14) is disposed in a state in which a bottom surface 32 of a housing 30 is secured on the top surface of the display section 12. The housing 30 has a front surface 34 and a back surface 36. Three LED light sources 22-1 to 22-3 which emit infrared light from the front surface 34 are provided on the front surface 34. In this embodiment, the light sources 22-1 to 22-3 are disposed to form vertices of an inverted triangle when viewed from the travel direction of light emitted from the light sources 22-1 to 22-3. Therefore, even if the distance between the light sources 22-1 to 22-3 and the image sensor 18 increases, the image sensor 18 can reliably receive the infrared light from the light sources 22-1 to 22-3 as light having a predetermined area.
A cover 38 which allows the infrared light from the light sources 22-1 to 22-3 to pass through is provided on the front surface 34 of the housing 30 to protect the light sources 22-1 to 22-3. The shielding section 24 which protrudes from the front surface 34 is provided at the lower end of the front surface 34 of the housing 30 (i.e., below the light sources 22-1 to 22-3). In this embodiment, the shielding section 24 is integrally formed with the housing 30. A material which does not allow the infrared light from the light sources 22-1 to 22-3 to pass through is used for the shielding section 24. Therefore, light which travels downward from the light sources 22-1 to 22-3 within a predetermined angular range can be shielded by the shielding section 24.
FIG. 13 is a side view of the light-emitting section 13 (14) shown in FIG. 12. In this embodiment, an angle alpha formed by a shielding surface 40 (i.e., top surface) of the shielding section 24 and the front surface 34 of the housing 30 is an acute angle, as shown in FIG. 13. Therefore, the protrusion distance l of the shielding section 24 from the front surface 34 necessary for shielding light which travels in a direction within a predetermined range can be reduced as compared with the case where the angle alpha is a right angle or an obtuse angle. In this embodiment, the angle alpha is set at 70 degrees, and the protrusion distance l is set at 7 mm. According to the example shown in FIG. 13, protrusion of the light-emitting section 13 (14) can be suppressed and the size of the light-emitting section 13 (14) can be reduced even if the shielding section 24 is provided.
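The benefit of the acute angle alpha can be seen with a small geometric sketch (the vertical distance h from the light source down to the shield's root is an assumed value; the patent specifies only alpha = 70 degrees and l = 7 mm):

```python
import math

def min_blocked_angle(l, alpha_deg, h=10.0):
    """Sketch of the shielding geometry: a shield of protrusion
    distance l (mm) whose top surface makes angle alpha with the
    vertical front surface.  h (mm) is an assumed vertical distance
    from the light source down to the shield's root.  Returns the
    smallest downward angle (degrees below horizontal) at which a
    direct ray is blocked -- smaller means better shielding."""
    tilt = math.radians(90 - alpha_deg)   # upward tilt of the shield plate
    tip_x = l
    tip_y = l * math.tan(tilt)            # tip rises when alpha is acute
    return math.degrees(math.atan2(h - tip_y, tip_x))
```

For the same protrusion distance l, a shield tilted up by an acute alpha blocks shallower downward rays than a horizontal shield (alpha = 90 degrees), which is why the acute angle lets l stay small.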
5-3. Noise Prevention by Adjusting the Direction of the Light Source
The above description has been given taking an example in which the reflected light as noise is prevented by providing the shielding section 24 on the front surface of each of the light-emitting sections 13 and 14 to shield light which travels downward from the light sources 22-1 to 22-3. Note that the reflected light as noise may also be suppressed by preventing light from traveling downward from the light sources 22-1 to 22-3 by adjusting the directions of the light sources 22-1 to 22-3 disposed in the light-emitting sections 13 and 14. Specifically, the center direction CD of the light sources 22-1 to 22-3 may be adjusted to a direction in which the direct light DL from the light source 22 is prevented from traveling in the direction in which the reflected light RL that enters the image sensor 18 occurs when the light source 22 (light-emitting sections 13 and 14) and the image sensor 18 (controller 16) have a given reference positional relationship.
FIG. 14 shows an example of the light-emitting section 13 (14) of which the light sources 22-1 to 22-3 are disposed so that the center direction CD of the light sources 22-1 to 22-3 faces upward by beta degrees with respect to a horizontal direction (direction normal to the display screen 11 of the display section 12) HD when the light-emitting section 13 (14) is disposed in a state in which the bottom surface 32 of the housing 30 is secured on the top surface of the display section 12. In the example shown in FIG. 14, direct light from the light source 22 is prevented from traveling in a direction in which reflected light enters the image sensor 18 by adjusting the center direction CD of the light sources 22-1 to 22-3 upward by beta degrees. Therefore, occurrence of reflected light which serves as noise under the light sources 22-1 to 22-3 can be prevented in the same manner as in the example shown in FIG. 13.
6. Light-Reception Range of Image Sensor
A light-reception range in which the image sensor 18 can receive light from the light sources 22-1 to 22-3 in the game system 10 according to this embodiment is described below. In this embodiment, three light sources 22-1 to 22-3 which differ in center direction are disposed in the light-emitting section 13 (14) so that the image sensor 18 can receive infrared light from the light-emitting sections 13 and 14 when the controller 16 and the light-emitting sections 13 and 14 have a positional relationship within a specific range.
FIG. 15A is a front view of the light-emitting section 13 (14) according to this embodiment, and FIG. 15B is a top view of the light-emitting section 13 (14). As shown in FIG. 15A, the three light sources 22-1 to 22-3 of the light-emitting section 13 (14) according to this embodiment are disposed to form vertices of an inverted triangle. As shown in FIG. 15B, the lowermost first light source 22-1 is disposed so that a center direction CD1 of the light source 22-1 coincides with a first direction D1 which is a reference direction SD of the light-emitting section 13 (14).
In the light-emitting section 13 (14) according to this embodiment, the reference direction SD is the front direction of the display screen 11, which is the basic direction of the player P who holds the controller 16. Specifically, the reference direction SD is set to be a direction parallel to the direction normal to the display screen 11 of the display section 12 when the light-emitting section 13 (14) is disposed in a state in which the bottom surface 32 of the housing 30 is secured on the top surface of the display section 12. In this embodiment, a predetermined range around the front direction of the display screen 11 (i.e., the basic direction of the player P) in which light travels from the first light source 22-1 may be set to be the light-reception range of the image sensor 18.
Since the image sensor 18 successively outputs the light-reception information relating to each pixel while scanning the pixels of the acquired image PC from the lower row to the upper row, the image sensor 18 outputs the light-reception information relating to a pixel in the image PC corresponding to the lowermost first light source 22-1 prior to pixels corresponding to the light sources 22-2 and 22-3. Therefore, a pixel corresponding to the first light source 22-1 is determined to be a first effective pixel which satisfies a given condition. In this embodiment, the maximum luminous intensity direction of the first light source 22-1 corresponding to the first effective pixel is set to be the front direction of the display screen 11 (basic direction of the player P) by causing the center direction CD1 of the first light source 22-1 to coincide with the front direction of the display screen 11. According to this embodiment, when the player P directs the controller 16 toward the display screen 11 along the front direction of the display screen 11, a first effective pixel can be reliably determined by light which travels from the lowermost first light source 22-1.
As shown in FIG. 15B, the second light source 22-2 positioned on the left when viewed from the front surface (light travel direction) is disposed so that a center direction CD2 of the light source 22-2 coincides with a second direction D2 which differs from the reference direction SD by an angle gamma of 90 degrees or less. In this embodiment, the second light source 22-2 is disposed so that the center direction CD2 differs from the reference direction SD by 60 degrees toward the left when viewed from the front surface. Therefore, the image sensor 18 can receive light which travels from the second light source 22-2 within a left range with respect to the front direction of the display screen 11.
As shown in FIG. 15B, the third light source 22-3 positioned on the right when viewed from the front surface (light travel direction) is disposed so that a center direction CD3 of the light source 22-3 coincides with a third direction D3 which is line-symmetrical with the second direction D2 with respect to the reference direction SD as the symmetry axis. In this embodiment, the third light source 22-3 is disposed so that the center direction CD3 differs from the reference direction SD by 60 degrees toward the right when viewed from the front surface. Therefore, the image sensor 18 according to this embodiment can receive light which travels from the third light source 22-3 within a right range with respect to the front direction of the display screen 11.
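The combined light-reception range formed by the three center directions (0, -60, and +60 degrees from the reference direction SD) can be sketched as follows (the LED's useful half beam angle is an assumed value; the patent does not specify it):

```python
def receivable(view_angle_deg, half_beam_deg=45):
    """Sketch of the light-reception range formed by the three LEDs:
    center directions CD1 = 0 (reference direction SD), CD2 = -60, and
    CD3 = +60 degrees.  view_angle_deg is the horizontal angle of the
    controller's position as seen from the light-emitting section
    (0 = front direction); half_beam_deg is an assumed half beam angle.
    Returns True if at least one light source covers that direction."""
    centers = (0.0, -60.0, +60.0)
    return any(abs(view_angle_deg - c) <= half_beam_deg for c in centers)
```

Under this assumption the three sources together cover roughly 105 degrees to either side of the front direction, far wider than a single forward-facing LED could.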
In the game system 10 according to this embodiment, since the player P moves along a floor (horizontal plane), it is preferable that the second direction D2 and the third direction D3 be parallel to the horizontal plane and differ in direction. When it is necessary to prevent light emitted downward from the light sources 22-1 to 22-3 from being reflected, the center directions CD1 to CD3 of the light sources 22-1 to 22-3 may be adjusted upward from the horizontal direction HD by about five degrees.
7. Functional Blocks
The configuration of the indication position calculation system (game system) according to this embodiment is described below with reference to FIG. 16. FIG. 16 is a functional block diagram showing an example of the indication position calculation system according to this embodiment. The indication position calculation system according to this embodiment may have a configuration in which some of the elements (sections) shown in FIG. 16 are omitted.
An operation section 160 allows a player to input operation data. In this embodiment, the operation section 160 may be an indicator (controller, shooting device, or pointing device) configured so that the player can arbitrarily change the position and the direction of the operation section 160 while holding it, and direct the operation section 160 toward an arbitrary position on the indication plane such as the display screen 11.
The operation section 160 includes a trigger as an operating section for the player to perform an ON/OFF input. The operation section 160 may include a button, a lever (analog pad), an arrow key, a steering wheel, a microphone, a touch panel display, or the like so that various types of operation data can be input.
The operation section 160 includes an imaging section 162, a determination section 164, and a calculation section 166.
The imaging section 162 may be implemented by an image sensor such as a CMOS sensor or a CCD camera. The imaging section 162 successively outputs the light-reception information relating to each pixel from a starting pixel provided on one end of the acquired image to an end pixel provided on the other end of the acquired image. The imaging section 162 may successively output the light-reception information relating to each pixel utilizing a hardware configuration, or may successively output the light-reception information relating to each pixel under software control.
The determination section 164 may be implemented by hardware such as a processor (e.g., CPU, MPU, or DSP) or an ASIC (e.g., gate array) and a program. The determination section 164 determines whether or not each pixel satisfies a given condition based on the light-reception information relating to each pixel successively output from the imaging section 162.
The calculation section 166 may be implemented by hardware such as a processor (e.g., CPU, MPU, or DSP) or an ASIC (e.g., gate array) and a program. When the pixels which satisfy the given condition are pixels corresponding to the light-emitting sections 13 and 14, the calculation section 166 performs position calculations based on the identification information relating to those pixels to calculate the indication position of the operation section 160.
The determination section 164 and the calculation section 166 may be integrally implemented by one processor or the like. The determination section 164 and the calculation section 166 may be implemented by the function of a processing section 100 instead of providing the determination section 164 and the calculation section 166 in the operation section 160.
A storage section 170 serves as a work area for the processing section 100, a communication section 196, and the like. The function of the storage section 170 may be implemented by a RAM (VRAM) or the like. The storage section 170 according to this embodiment includes a main storage section 171 used as a work area, a frame buffer 172 in which the final display image and the like are stored, an object data storage section 173 in which model data relating to an object is stored, a texture storage section 174 in which the texture for each piece of object data is stored, and a Z buffer 176 in which a Z value is stored when generating an image of an object. Note that the storage section 170 may have a configuration in which some of these sections are omitted.
An information storage medium 180 (computer-readable medium) stores a program, data, and the like. The function of the information storage medium 180 may be implemented by an optical disk (CD or DVD), a magneto-optical disk (MO), a magnetic disk, a hard disk, a magnetic tape, a memory (ROM), or the like.
A program (data) for causing the processing section 100 to perform various processes according to this embodiment is stored in the information storage medium 180. Specifically, a program which causes a computer to function as each section according to this embodiment (program which causes a computer to perform the process of each section) is stored in the information storage medium 180.
A display section 190 outputs an image generated according to this embodiment. The function of the display section 190 may be implemented by a CRT, an LCD, a touch panel display, or the like. In this embodiment, a light-emitting unit 15 for calculating the relative positions of the operation section 160 and the display screen of the display section 190 is provided in or around the display screen of the display section 190. In this embodiment, an infrared LED which emits invisible light is used as the light source of the light-emitting unit 15.
A sound output section 192 outputs sound generated according to this embodiment. The function of the sound output section 192 may be implemented by a speaker, a headphone, or the like.
A portable information storage device 194 stores a player's personal data, game save data, and the like. Examples of the portable information storage device 194 include a memory card and a portable game device.
The communication section 196 performs various types of control for communicating with the outside (e.g. host device or another image generation system). The function of the communication section 196 may be implemented by hardware such as a processor or a communication ASIC, a program, or the like.
The program (data) for causing a computer to function as each section according to this embodiment may be distributed to the information storage medium 180 (storage section 170) from an information storage medium included in a host device (server) through a network and the communication section 196. Use of the information storage medium of the host device (server) is also included within the scope of the invention.
The processing section 100 (processor) performs a game process, an image generation process, a sound generation process, and the like based on operation data from the operation section 160, a program, and the like. The game process includes starting a game when game start conditions have been satisfied, proceeding with a game, disposing an object such as a character or a map, displaying an object, calculating game results, finishing a game when game end conditions have been satisfied, and the like. The processing section 100 performs various processes using the storage section 170 as a work area. The function of the processing section 100 may be implemented by hardware such as a processor (e.g. CPU or DSP) or an ASIC (e.g. gate array) and a program.
The processing section 100 according to this embodiment includes a display control section 104, a determination section 106, an evaluation section 108, a drawing section 120, and a sound generation section 130. Note that the processing section 100 may have a configuration in which some of these sections are omitted.
The display control section 104 performs a display control process on an object displayed on the display section 190. Specifically, the display control section 104 performs the display control process such as generating an object (e.g. character, background, target, car, ball, item, building, tree, pillar, wall, or map), controlling the display position of an object, or causing an object to disappear. More specifically, the display control section 104 performs the display control process such as registering a generated object in an object list, transferring the object list to the drawing section 120 and the like, or deleting a disappeared object from the object list. The display control section 104 displays an object indicating the indication position (impact position) on the display screen 11 based on information relating to the indication position of the operation section 160 on the display screen 11.
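The object-list bookkeeping performed by the display control section 104 can be sketched as below. The dict-based object record and field names are illustrative assumptions; the specification only describes registering generated objects and deleting disappeared ones.

```python
def register_object(obj_list, kind, x, y):
    """Generate an object and register it in the object list."""
    obj = {"kind": kind, "pos": (x, y), "alive": True}
    obj_list.append(obj)
    return obj

def live_objects(obj_list):
    """Drop disappeared objects, as done before handing the object
    list to the drawing section."""
    return [o for o in obj_list if o["alive"]]
```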
The determination section 106 determines the positional relationship between the indication position of the operation section 160 on the display screen 11 of the display section 190 and a target object TO based on an operation input using the operating section (trigger) provided in the operation section 160. Specifically, the determination section 106 determines whether or not the indication position has hit (coincides with or reaches) the display position of the target object TO based on the indication position information when the operation input using the operating section has been received.
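A minimal sketch of such a hit determination, assuming a rectangular (axis-aligned) hit area for the target object; the specification only requires that the indication position coincide with or reach the target's display position, so the rectangle is an illustrative assumption.

```python
def hit_check(indication_pos, target_pos, target_size):
    """Return True when the indication position falls inside the
    axis-aligned bounds of the target object.

    indication_pos: (x, y) indication position on the display screen.
    target_pos:     (x, y) of the target's top-left corner (assumed).
    target_size:    (width, height) of the target's hit area (assumed).
    """
    ix, iy = indication_pos
    tx, ty = target_pos
    w, h = target_size
    return tx <= ix <= tx + w and ty <= iy <= ty + h
```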
The evaluation section 108 evaluates the operation using the operation section 160 based on the hit determination result. In this embodiment, the evaluation section 108 evaluates the operation of the operator by means of score calculation and the like when the target object TO has been hit.
The drawing section 120 performs a drawing process based on the results of various processes (game process) performed by the processing section 100 to generate an image, and outputs the generated image to the display section 190. An image generated by the drawing section 120 may be a two-dimensional image or a three-dimensional image. When generating a three-dimensional image, the drawing section 120 performs a geometric process such as coordinate transformation (world coordinate transformation or camera coordinate transformation), clipping, or perspective transformation, and creates drawing data (e.g. primitive surface vertex coordinates, texture coordinates, color data, normal vector, and alpha-value) based on the processing results. The drawing section 120 draws an object (one or more primitives) subjected to perspective transformation (geometric process) in a drawing buffer (buffer which can store image information in pixel units such as a frame buffer or intermediate buffer; VRAM) based on the drawing data (primitive data). This causes an image of the game space viewed from a virtual camera (given viewpoint) to be generated.
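The perspective transformation step of the geometric process can be sketched as a simple pinhole projection; the view-plane distance `d` and the pinhole model itself are illustrative assumptions, not the patent's rendering pipeline.

```python
def perspective_transform(vertex, d=1.0):
    """Project a camera-space vertex (x, y, z) onto the view plane at
    distance d along the viewing axis.

    Returns None for vertices at or behind the camera, which a real
    pipeline would remove during clipping.
    """
    x, y, z = vertex
    if z <= 0:
        return None
    return (d * x / z, d * y / z)
```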
The sound generation section 130 performs a sound process based on the results of various processes performed by the processing section 100 to generate game sound such as background music (BGM), effect sound, or voice, and outputs the generated game sound to the sound output section 192.
The image generation system according to this embodiment may be configured as a system dedicated to a single-player mode in which only one player can play a game, or a system which also implements a multi-player mode in which a plurality of players can play a game.
When a plurality of players play a game, a game image and game sound provided to the players may be generated using one terminal, or may be generated by a distributed process using a plurality of terminals (game devices or portable telephones) connected via a network (transmission line or communication line), for example.
The invention is not limited to the above-described embodiments, and various modifications can be made within the scope of the invention. For instance, any term cited together with a different term having a broader meaning or the same meaning at least once in this specification or drawings can be replaced by the different term in any place in this specification and drawings.
The above description has been given taking an example in which the light-emitting unit 15 includes two light-emitting sections 13 and 14 independently provided. Note that the light-emitting unit 15 may be formed by integrating the two light-emitting sections 13 and 14. For example, the light-emitting unit 15 may be formed as an oblong rod-shaped member, and the light-emitting sections 13 and 14 may be disposed on the right and left ends. The distance between the light-emitting sections 13 and 14 may be changed in stages in a direction indicated by an arrow. For example, the distance between the light-emitting sections 13 and 14 may be changed in stages corresponding to monitors ranging from a 20-inch monitor to a 50-inch monitor.
The above description has been given taking an example in which the light source 22 emits infrared light (i.e., invisible light). Note that the light source 22 may emit another type of invisible light or may emit visible light. A light source which emits invisible light and a light source which emits visible light may be provided so that the light sources can be switched. In particular, visible light may be output when the operator indicates the indication plane in order to initialize the positional relationship between the light-emitting section (light source) and the indicator (light-receiving section).
The invention may be applied to various image generation systems. The above embodiments have been described taking an example of applying the invention to a game system. Note that the invention may also be applied to other indication position calculation systems, such as presentation systems, and to indicators used in such systems.
The invention may be applied to various image generation systems such as an arcade game system, a consumer game system, a large-scale attraction system in which a number of players participate, a simulator, a multimedia terminal, a system board which generates a game image, and a portable telephone.
Although only some embodiments of this invention have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of the invention.

Claims (16)

1. An indication position calculation system calculating an indication position of an indicator, the indication position calculation system comprising:
a light-emitting section;
an indicator including an imaging section which acquires an image and outputs light-reception information of a plurality of pixels of the acquired image;
a determination section which determines that each of the pixels is a primary effective pixel satisfying a first condition when a value of the light-reception information is larger than a first threshold value, and then determines that the primary effective pixel is a secondary effective pixel satisfying a second condition when the value of the light-reception information is also larger than a second threshold value which is larger than the first threshold value; and
a calculation section which performs position calculations based on identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making a weighting coefficient for the identification information of the secondary effective pixels larger than a weighting coefficient for the identification information of the primary effective pixels in calculation.
2. The indication position calculation system as defined in claim 1,
wherein the calculation section sets a predetermined area of the image including a primary effective pixel as a determination area, and performs the position calculations based on the identification information of the primary and secondary effective pixels included in the determination area.
3. The indication position calculation system as defined in claim 2, comprising:
a plurality of the light-emitting sections having a predetermined positional relationship with each other,
wherein the calculation section sets a predetermined area of the image including a primary effective pixel as a first determination area, sets another predetermined area of the image including a second effective pixel which has been primarily determined to satisfy the first condition among the pixels outside the first determination area as a second determination area, and then performs the position calculations based on the identification information of the primary and secondary effective pixels within the first determination area and the second determination area.
4. The indication position calculation system as defined in claim 1, further comprising:
a shielding section provided in the light-emitting section and shielding part of the light from the light-emitting section emitted downward at a predetermined angle or less from the horizontal.
5. The indication position calculation system as defined in claim 4,
wherein the shielding section is disposed at a position enabling the shielding section to shield part of the light from the light-emitting section emitted downward from the horizontal so that no reflected light from a lower space enters the imaging section when the light-emitting section and the imaging section have a given reference positional relationship.
6. The indication position calculation system as defined in claim 1,
wherein the light-emitting section is directed in a direction so as to prevent light from the light-emitting section from being emitted downward from the horizontal so that no reflected light from a lower space enters the imaging section when the light-emitting section and the imaging section have a given reference positional relationship.
7. The indication position calculation system as defined in claim 1, further comprising:
a filter which is provided in the indicator and through which light in a wavelength band equal to that of light from the light-emitting section is allowed to pass toward the imaging section.
8. The indication position calculation system as defined in claim 1, comprising:
a plurality of the light-emitting sections having a predetermined positional relationship with each other,
wherein the calculation section performs the position calculations based on a representative value corresponding to each of the light-emitting sections.
9. The indication position calculation system as defined in claim 1,
wherein the determination section determines that all pixels that are not primary effective pixels or secondary effective pixels are ineffective pixels.
10. An indicator for an indication position calculation system, the indicator comprising:
an imaging section which acquires an image of a light-emitting section and outputs light-reception information of a plurality of pixels of the acquired image;
a determination section which determines that each of the pixels is a primary effective pixel satisfying a first condition when a value of the light-reception information is larger than a first threshold value, and then determines that the primary effective pixel is a secondary effective pixel satisfying a second condition when the value of the light-reception information is also larger than a second threshold value which is larger than the first threshold value; and
a calculation section which performs position calculations based on identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making a weighting coefficient for the identification information of the secondary effective pixels larger than a weighting coefficient for the identification information of the primary effective pixels in calculation.
11. A game system calculating an indication position of an indicator, the game system comprising:
a display section which displays an object;
a light-emitting section which has a given positional relationship with the display section;
an indicator including an imaging section which acquires an image and outputs light-reception information of a plurality of pixels of the acquired image;
a determination section which determines that each of the pixels is a primary effective pixel satisfying a first condition when a value of the light-reception information is larger than a first threshold value and then determines that the primary effective pixel is a secondary effective pixel satisfying a second condition when the value of the light-reception information is also larger than a second threshold value which is larger than the first threshold value; and
a calculation section which performs position calculations based on identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making a weighting coefficient for the identification information of the secondary effective pixels larger than a weighting coefficient for the identification information of the primary effective pixels in calculation.
12. An indication position calculation method comprising:
causing an imaging section provided in an indicator to acquire an image of a light-emitting section and output light-reception information of a plurality of pixels of the acquired image;
causing a determination section to determine that each of the pixels is a primary effective pixel satisfying a first condition when a value of the light-reception information is larger than a first threshold value, and then determine that the primary effective pixel is a secondary effective pixel satisfying a second condition when the value of the light-reception information is also larger than a second threshold value which is larger than the first threshold value; and
causing a calculation section to perform position calculations based on identification information of the primary effective pixel,
wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making a weighting coefficient for the identification information of the secondary effective pixels larger than a weighting coefficient for the identification information of the primary effective pixels in calculation.
13. An indication position calculation system calculating an indication position of an indicator, the indication position calculation system comprising:
a light-emitting section;
an indicator including an imaging section which acquires an image and successively outputs light-reception information of a plurality of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not any of the pixels determined to be primary effective pixels are secondary effective pixels satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on identification information of the primary effective pixel in order to obtain an indication position of the indicator when light received by the primary and secondary effective pixels is included in light emitted from the light-emitting section, wherein the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making different weighing coefficients for the identification information of the primary effective pixels and the identification information of the secondary effective pixels in the calculation, and
one of the pixels having the light-reception information output first is disposed on a lower side of the image and another of the pixels having the light-reception information output last is disposed on an upper side of the image when the indicator is held in a reference position.
14. An indicator for an indication position calculation system, the indicator comprising:
an imaging section which acquires an image of a light-emitting section and successively outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not any of the pixels determined to be primary effective pixels are secondary effective pixels satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on the identification information of the primary effective pixel in order to obtain the indication position of the indicator when light received by the primary and secondary effective pixels is included in light emitted from the light-emitting section, wherein
the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making different weighing coefficients for the identification information of the primary effective pixels and the identification information of the secondary effective pixels in the calculation, and
one of the pixels having the light-reception information output first is disposed on a lower side of the image and another of the pixels having the light-reception information output last is disposed on an upper side of the image when the indicator is held in a reference position.
15. A game system calculating an indication position of an indicator, the game system comprising:
a display section which displays an object;
a light-emitting section which has a given positional relationship with the display section;
an indicator including an imaging section which acquires an image and successively outputs light-reception information of pixels of the acquired image;
a determination section which determines whether or not each of the pixels is a primary effective pixel satisfying a first condition and then determines whether or not any of the pixels determined to be primary effective pixels are secondary effective pixels satisfying a second condition based on the light-reception information; and
a calculation section which performs position calculations based on the identification information of the primary effective pixel in order to obtain the indication position of the indicator when light received by the primary and secondary effective pixels is included in light emitted from the light-emitting section, wherein
the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making different weighing coefficients for the identification information of the primary effective pixels and the identification information of the secondary effective pixels in the calculation, and
one of the pixels having the light-reception information output first is disposed on a lower side of the image and another of the pixels having the light-reception information output last is disposed on an upper side of the image when the indicator is held in a reference position.
16. An indication position calculation method comprising:
causing an imaging section provided in an indicator to acquire an image of a light-emitting section and successively output light-reception information of pixels of the acquired image;
causing a determination section to determine whether or not each of the pixels is a primary effective pixel satisfying a first condition and determine whether or not any of the pixels determined to be primary effective pixels are secondary effective pixels satisfying a second condition based on the light-reception information; and
causing a calculation section to perform position calculations based on identification information of the primary effective pixel in order to obtain the indication position of the indicator when light received by the primary and secondary effective pixels is included in light emitted from the light-emitting section, wherein
the calculation section calculates a representative value representing the primary and secondary effective pixels based on the identification information of the primary and secondary effective pixels and then performs the position calculations based on the representative value, while making different weighing coefficients for the identification information of the primary effective pixels and the identification information of the secondary effective pixels in the calculation, and
one of the pixels having the light-reception information output first is disposed on a lower side of the image and another of the pixels having the light-reception information output last is disposed on an upper side of the image when the indicator is held in a reference position.
US12/068,819 2007-02-15 2008-02-12 Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method for user input in dynamic gaming systems Active 2031-07-08 US8290214B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007035227A JP4861854B2 (en) 2007-02-15 2007-02-15 Pointed position calculation system, pointer and game system
JP2007-35227 2007-02-15

Publications (2)

Publication Number Publication Date
US20080199047A1 US20080199047A1 (en) 2008-08-21
US8290214B2 true US8290214B2 (en) 2012-10-16

Family

ID=39472861

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/068,819 Active 2031-07-08 US8290214B2 (en) 2007-02-15 2008-02-12 Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method for user input in dynamic gaming systems

Country Status (6)

Country Link
US (1) US8290214B2 (en)
EP (1) EP1962103B1 (en)
JP (1) JP4861854B2 (en)
AT (1) ATE476672T1 (en)
DE (1) DE602008001997D1 (en)
ES (1) ES2350381T3 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080198129A1 (en) * 2007-02-15 2008-08-21 Namco Bandai Games Inc. Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method
US20110069167A1 (en) * 2009-09-24 2011-03-24 Samsung Electronics Co., Ltd. Three-dimensional pointing sensing apparatus and method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008031498B4 (en) * 2008-07-03 2012-03-08 Infineon Technologies Ag Clock determination of a sensor
JP4863507B2 (en) 2007-08-22 2012-01-25 株式会社バンダイナムコゲームス Mark unit for command position calculation
WO2014061079A1 (en) * 2012-10-15 2014-04-24 富士通株式会社 Direction detecting device, direction detecting method, and direction detecting control program

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4586037A (en) * 1983-03-07 1986-04-29 Tektronix, Inc. Raster display smooth line generation
JPH0490078A (en) 1990-08-02 1992-03-24 Juki Corp Centroid detector
JPH0871252A (en) 1994-09-05 1996-03-19 Taito Corp Method and device for detecting relative position between video screen and gun on firing game device
EP0728503A1 (en) 1995-02-21 1996-08-28 Konami Co., Ltd. A shooting game machine
JPH10249065A (en) 1997-01-10 1998-09-22 Konami Co Ltd Shooting video game device
JPH11305935A (en) 1998-04-24 1999-11-05 Image Tec Kk Position detection system
US6146278A (en) * 1997-01-10 2000-11-14 Konami Co., Ltd. Shooting video game machine
US20010010514A1 (en) * 1999-09-07 2001-08-02 Yukinobu Ishino Position detector and attitude detector
JP2001325069A (en) 2000-03-07 2001-11-22 Nikon Gijutsu Kobo:Kk Device and method for detecting position
US6625305B1 (en) * 1999-08-16 2003-09-23 Hewlett-Packard Development Company, L.P. Image demosaicing method
US6727885B1 (en) 1999-09-07 2004-04-27 Nikon Corporation Graphical user interface and position or attitude detector
US20040224761A1 (en) * 2003-05-06 2004-11-11 Nintendo Co., Ltd. Game apparatus, storing medium that stores control program of virtual camera, and control method of virtual camera
US20040257451A1 (en) * 2003-06-20 2004-12-23 Toshinori Yamamoto Image signal processing apparatus
US20050147289A1 (en) * 2002-04-30 2005-07-07 Jfe Steel Coroporation Method and instrument for measuring bead cutting shape of electric welded tube
US20050259114A1 (en) * 2004-05-19 2005-11-24 Hewlett-Packard Development Company , L.P. Method and device for rendering an image for a staggered color graphics display
JP2006155248A (en) 2004-11-30 2006-06-15 Juki Corp Method for detecting gravity center of object
US20060147128A1 (en) * 2003-02-28 2006-07-06 Sony Corporation Image processing device, method, and program
JP2007041910A (en) 2005-08-04 2007-02-15 Shinsedai Kk Pointed position detection method
US20070060384A1 (en) * 2005-09-14 2007-03-15 Nintendo Co., Ltd. Storage medium storing video game program
US20070116372A1 (en) * 2003-06-27 2007-05-24 Sony Corporation Signal processing device, signal processing method, program, and recording medium
US20080186314A1 (en) * 2007-02-02 2008-08-07 Seiko Epson Corporation Image processing device, image processing method, image processing program, recording medium storing image processing program, and image display device
US20080198129A1 (en) * 2007-02-15 2008-08-21 Namco Bandai Games Inc. Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method
US20080310824A1 (en) * 2007-06-18 2008-12-18 Thomson Licensing Method and device for encoding video levels into subfield code words
US20090163274A1 (en) * 2007-12-21 2009-06-25 Nintendo Co., Ltd. Computer-readable storage medium having game program stored therein and game apparatus
US7783096B2 (en) * 2005-10-17 2010-08-24 Siemens Corporation Device systems and methods for imaging

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2790965B2 (en) * 1992-08-19 1998-08-27 富士通株式会社 Optical pointing system
JP3517482B2 (en) * 1995-06-08 2004-04-12 キヤノン株式会社 Coordinate detection device and method

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4586037A (en) * 1983-03-07 1986-04-29 Tektronix, Inc. Raster display smooth line generation
JPH0490078A (en) 1990-08-02 1992-03-24 Juki Corp Centroid detector
JPH0871252A (en) 1994-09-05 1996-03-19 Taito Corp Method and device for detecting relative position between video screen and gun on firing game device
EP0728503A1 (en) 1995-02-21 1996-08-28 Konami Co., Ltd. A shooting game machine
JPH10249065A (en) 1997-01-10 1998-09-22 Konami Co Ltd Shooting video game device
JP2961097B2 (en) 1997-01-10 1999-10-12 コナミ株式会社 Shooting video game device
US6146278A (en) * 1997-01-10 2000-11-14 Konami Co., Ltd. Shooting video game machine
JPH11305935A (en) 1998-04-24 1999-11-05 Image Tec Kk Position detection system
US6625305B1 (en) * 1999-08-16 2003-09-23 Hewlett-Packard Development Company, L.P. Image demosaicing method
US20010010514A1 (en) * 1999-09-07 2001-08-02 Yukinobu Ishino Position detector and attitude detector
US6727885B1 (en) 1999-09-07 2004-04-27 Nikon Corporation Graphical user interface and position or attitude detector
JP2001325069A (en) 2000-03-07 2001-11-22 Nikon Gijutsu Kobo:Kk Device and method for detecting position
US20050147289A1 (en) * 2002-04-30 2005-07-07 Jfe Steel Corporation Method and instrument for measuring bead cutting shape of electric welded tube
US20060147128A1 (en) * 2003-02-28 2006-07-06 Sony Corporation Image processing device, method, and program
US20040224761A1 (en) * 2003-05-06 2004-11-11 Nintendo Co., Ltd. Game apparatus, storing medium that stores control program of virtual camera, and control method of virtual camera
US7433547B2 (en) * 2003-06-20 2008-10-07 Canon Kabushiki Kaisha Image signal processing apparatus
US20040257451A1 (en) * 2003-06-20 2004-12-23 Toshinori Yamamoto Image signal processing apparatus
US20070116372A1 (en) * 2003-06-27 2007-05-24 Sony Corporation Signal processing device, signal processing method, program, and recording medium
US20050259114A1 (en) * 2004-05-19 2005-11-24 Hewlett-Packard Development Company, L.P. Method and device for rendering an image for a staggered color graphics display
JP2006155248A (en) 2004-11-30 2006-06-15 Juki Corp Method for detecting gravity center of object
JP2007041910A (en) 2005-08-04 2007-02-15 Shinsedai Kk Pointed position detection method
US20070060384A1 (en) * 2005-09-14 2007-03-15 Nintendo Co., Ltd. Storage medium storing video game program
US7783096B2 (en) * 2005-10-17 2010-08-24 Siemens Corporation Device systems and methods for imaging
US20080186314A1 (en) * 2007-02-02 2008-08-07 Seiko Epson Corporation Image processing device, image processing method, image processing program, recording medium storing image processing program, and image display device
US7982744B2 (en) * 2007-02-02 2011-07-19 Seiko Epson Corporation Image processing device, image processing method, image processing program, recording medium storing image processing program, and image display device
US20080198129A1 (en) * 2007-02-15 2008-08-21 Namco Bandai Games Inc. Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method
US20080310824A1 (en) * 2007-06-18 2008-12-18 Thomson Licensing Method and device for encoding video levels into subfield code words
US20090163274A1 (en) * 2007-12-21 2009-06-25 Nintendo Co., Ltd. Computer-readable storage medium having game program stored therein and game apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080198129A1 (en) * 2007-02-15 2008-08-21 Namco Bandai Games Inc. Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method
US8754846B2 (en) * 2007-02-15 2014-06-17 Namco Bandai Games Inc. Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method
US20110069167A1 (en) * 2009-09-24 2011-03-24 Samsung Electronics Co., Ltd. Three-dimensional pointing sensing apparatus and method
US8773531B2 (en) * 2009-09-24 2014-07-08 Samsung Electronics Co., Ltd. Three-dimensional pointing sensing apparatus and method

Also Published As

Publication number Publication date
US20080199047A1 (en) 2008-08-21
EP1962103B1 (en) 2010-08-04
ATE476672T1 (en) 2010-08-15
JP2008198111A (en) 2008-08-28
ES2350381T3 (en) 2011-01-21
DE602008001997D1 (en) 2010-09-16
EP1962103A1 (en) 2008-08-27
JP4861854B2 (en) 2012-01-25

Similar Documents

Publication Publication Date Title
US8754846B2 (en) Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method
US8784203B2 (en) Determination of controller three-dimensional location using image analysis and ultrasonic communication
US20070257884A1 (en) Game program and game system
US7390254B2 (en) Soccer game method for use in game apparatus, involves recognizing areas pertaining to power of character group, based on calculated arrival times of characters up to sample points
EP0836871A2 (en) Video game machine for playing video ball game and storage medium storing video ball game program
US20060035710A1 (en) Control of data processing
US8290214B2 (en) Indication position calculation system, indicator for indication position calculation system, game system, and indication position calculation method for user input in dynamic gaming systems
US7744466B2 (en) Storage medium storing a game program, game apparatus and game controlling method
JP5469516B2 (en) Image display program, image display system, image display method, and image display apparatus
CN102203695B (en) For transmitting the opertaing device of visual information
JP2002247602A (en) Image generator and control method therefor, and its computer program
US20090002365A1 (en) Image processing program and image processing apparatus
JP2007300985A (en) Game program and game device
JP2008225985A (en) Image recognition system
JP4412715B2 (en) Program, information storage medium, and image generation system
EP1716896A3 (en) Billiard game input device, billiard game system, game input device, and computer program
US8319786B2 (en) Image processing device, control method for image processing device and information recording medium
US20040259615A1 (en) Method for generating image, information storage medium, image generation device, data signal and program
JP4749064B2 (en) Program, information storage medium, and image generation system
JP2005319188A (en) Program, information storage medium and image generation system
WO2013118691A1 (en) Game system
JP2009075739A (en) Program, information storage medium and image generation system
JP2004303034A (en) Image generating system, program, and information storage medium
JP3766394B2 (en) Image processing apparatus, image processing method, and program
JP3686928B2 (en) GAME DEVICE, PROGRAM, AND IMAGE DISPLAY METHOD

Legal Events

Date Code Title Description
AS Assignment

Owner name: NAMCO BANDAI GAMES INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;REEL/FRAME:020809/0617;SIGNING DATES FROM 20080226 TO 20080308

Owner name: ZEROPLUS TECHNOLOGY CO., LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;REEL/FRAME:020809/0617;SIGNING DATES FROM 20080226 TO 20080308

Owner name: SHANG HWANG INDUSTRY CO., LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;REEL/FRAME:020809/0617;SIGNING DATES FROM 20080226 TO 20080308

Owner name: KUMABAYASHI, TATSUJI, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;REEL/FRAME:020809/0617;SIGNING DATES FROM 20080226 TO 20080308

Owner name: NAMCO BANDAI GAMES INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;SIGNING DATES FROM 20080226 TO 20080308;REEL/FRAME:020809/0617

Owner name: KUMABAYASHI, TATSUJI, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;SIGNING DATES FROM 20080226 TO 20080308;REEL/FRAME:020809/0617

Owner name: ZEROPLUS TECHNOLOGY CO., LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;SIGNING DATES FROM 20080226 TO 20080308;REEL/FRAME:020809/0617

Owner name: SHANG HWANG INDUSTRY CO., LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUO, CHUN LIANG;TAI, SHING LUNG;KUMABAYASHI, TATSUJI;SIGNING DATES FROM 20080226 TO 20080308;REEL/FRAME:020809/0617

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: BANDAI NAMCO GAMES INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:NAMCO BANDAI GAMES INC.;REEL/FRAME:033061/0930

Effective date: 20140401

AS Assignment

Owner name: BANDAI NAMCO ENTERTAINMENT INC., JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:BANDAI NAMCO GAMES INC.;REEL/FRAME:038104/0788

Effective date: 20150401

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12