WO2006080546A1 - Tilt detection method and entertainment system - Google Patents

Tilt detection method and entertainment system Download PDF

Info

Publication number
WO2006080546A1
WO2006080546A1 PCT/JP2006/301610 JP2006301610W
Authority
WO
WIPO (PCT)
Prior art keywords
edge point
coordinate
point
vertical coordinate
horizontal coordinate
Prior art date
Application number
PCT/JP2006/301610
Other languages
French (fr)
Inventor
Hiromu Ueshima
Shinya Katsumata
Original Assignee
Ssd Company Limited
Priority date
Filing date
Publication date
Application filed by Ssd Company Limited filed Critical Ssd Company Limited
Publication of WO2006080546A1 publication Critical patent/WO2006080546A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • A63F13/10
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45Controlling the progress of the video game
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1062Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to a type of game, e.g. steering wheel
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8011Ball
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8029Fighting without shooting

Definitions

  • the present invention relates to a tilt detection method, an entertainment system and the related techniques for detecting the tilt of an operation article by taking stroboscopic images of the operation article having a reflecting object.
  • Japanese Patent Published Application No. 2004-85524 by the present applicant discloses a golf game system including a game apparatus and a golf-club-type input device (operation article), and the housing of the game apparatus houses an imaging unit which comprises an image sensor, infrared light emitting diodes and so forth.
  • the infrared light emitting diodes intermittently emit infrared light to a predetermined area above the imaging unit while the image sensor intermittently captures images of the reflecting object of the golf-club-type input device which is moving in the predetermined area.
  • the location and velocity of the golf-club-type input device can be detected by processing the stroboscopic images of the reflecting object.
  • a tilt detection method of detecting a tilt of an operation article which is held and given motion by an operator comprises: emitting light in a predetermined cycle to the operation article which has a reflecting object; imaging the operation article to which the light is emitted, and acquiring lighted image data including a plurality of pixel data items each of which comprises a luminance value; imaging the operation article to which the light is not emitted, and acquiring unlighted image data including a plurality of pixel data items each of which comprises a luminance value; generating differential image data by calculating the difference between the lighted image data and the unlighted image data; obtaining the coordinates of a pixel having the maximum horizontal coordinate among the pixels of which the image of the operation article is made up in a differential image on the basis of the differential image data; obtaining the coordinates of a pixel having the minimum horizontal coordinate among the pixels of which the image of the operation article is made up; obtaining the coordinates of a pixel having the minimum
  • the above tilt detection method operates, in the case where there are a plurality of the pixels which have the maximum horizontal coordinate, such that the step of obtaining the coordinates of a pixel having the maximum horizontal coordinate sets the vertical coordinate of the first edge point to the average value of the vertical coordinates of the plurality of the pixels having the maximum horizontal coordinate; wherein in the case where there are a plurality of the pixels which have the minimum horizontal coordinate, the step of obtaining the coordinates of a pixel having the minimum horizontal coordinate sets the vertical coordinate of the second edge point to the average value of the vertical coordinates of the plurality of the pixels having the minimum horizontal coordinate; wherein in the case where there are a plurality of the pixels which have the maximum vertical coordinate, the step of obtaining the coordinates of a pixel having the maximum vertical coordinate sets the horizontal coordinate of the third edge point to the average value of the horizontal coordinates of the plurality of the pixels having the maximum vertical coordinate; and wherein in the case where there are a plurality of the pixels which have the minimum vertical coordinate, the step of obtaining the coordinates of
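The claimed sequence of steps can be illustrated with a short sketch. The following Python is not code from the patent; it is a hypothetical rendering that differences the lighted and unlighted frames, collects the pixels brighter than a threshold, and derives the four edge points, applying the averaging rule when several pixels share an extreme coordinate. All function and variable names are invented for illustration, and square frames are assumed.

```python
# Illustrative sketch (not from the patent): difference the lighted and
# unlighted luminance frames, keep the bright pixels, and compute the four
# edge points, averaging the tying coordinate when several pixels share an
# extreme coordinate.

def average(values):
    vals = list(values)
    return sum(vals) / len(vals)

def edge_points(lit, unlit, threshold):
    """Return ((XR, YR), (XL, YL), (XB, YB), (XU, YU)) or None."""
    size = len(lit)  # square frame assumed (32x32 in the embodiment)
    # Bright pixels of the differential image (lighted minus unlighted).
    bright = [(x, y)
              for y in range(size) for x in range(size)
              if lit[y][x] - unlit[y][x] > threshold]
    if not bright:
        return None  # operation article not captured
    max_x = max(x for x, _ in bright)  # first edge point (max horizontal)
    min_x = min(x for x, _ in bright)  # second edge point (min horizontal)
    max_y = max(y for _, y in bright)  # third edge point (max vertical)
    min_y = min(y for _, y in bright)  # fourth edge point (min vertical)
    return ((max_x, average(y for x, y in bright if x == max_x)),
            (min_x, average(y for x, y in bright if x == min_x)),
            (average(x for x, y in bright if y == max_y), max_y),
            (average(x for x, y in bright if y == min_y), min_y))
```

For a 2x2 bright square, each extreme coordinate is shared by two pixels, so every edge point carries one averaged coordinate.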
  • the above tilt detection method further comprises, in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the first edge point and the second edge point; in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the third edge point and the fourth edge point; and in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with which is greater
  • the above tilt detection method further comprises, in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point,
  • the above tilt detection method further comprises, in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point
  • an entertainment system comprises: an operation article that is operated by a user when the user is enjoying said entertainment system; an imaging device operable to capture an image of said operation article; and an information processing apparatus connected to said imaging device, and operable to receive the images of said operation article from said imaging device and determine tilts of said operation article on the basis of the images of said operation article, wherein said information processing apparatus at least includes: a unit operable to obtain four representative points for representing a profile of said operation article in the image of said operation article; a unit operable to determine whether or not there is a representative point, among the four representative points, which is shared by the shortest side and the next shortest side of a quadrilateral which is defined by the four representative points as its vertices; and a unit operable to calculate the tilt of said operation article on the basis of the tilt of a straight line passing through the shared representative point and the representative point opposed to the shared representative point when there is the shared representative point, and calculate the tilt of said operation article on
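The shared-vertex rule described above can be sketched in Python. This is an illustrative reconstruction, not the patent's code: the vertex ordering, the names, and the angle convention are assumptions.

```python
# Hypothetical sketch of the shared-vertex rule: take the four
# representative points as quadrilateral vertices in order; if the shortest
# and next-shortest sides share a vertex, the tilt follows the straight line
# through that vertex and the vertex opposed to it.
import math

def tilt_angle(points):
    """points: four (x, y) vertices in order around the quadrilateral.
    Returns a tilt angle in degrees, or None when no vertex is shared
    (the case the system handles by the other calculation branch)."""
    sides = []
    for i in range(4):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % 4]
        sides.append((math.hypot(x1 - x0, y1 - y0), i))
    sides.sort()
    (_, s0), (_, s1) = sides[0], sides[1]
    # Side i joins vertices i and (i + 1) % 4; look for a common vertex.
    shared = {s0, (s0 + 1) % 4} & {s1, (s1 + 1) % 4}
    if not shared:
        return None
    v = shared.pop()
    (x0, y0), (x1, y1) = points[v], points[(v + 2) % 4]  # opposed vertex
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```

For a thin parallelogram the two short sides are opposite and share no vertex, so the function returns None; for a kite-shaped profile the two short sides meet at a vertex and the diagonal through it gives the tilt.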
  • Fig. 1 is a block diagram showing the entire configuration of a game system in accordance with an embodiment of the present invention.
  • Fig. 2 is a schematic diagram showing the electric configuration of the game apparatus 1 of Fig. 1.
  • Fig. 3 is a flow chart showing an example of the overall process flow of the game apparatus 1 of Fig. 1.
  • Fig. 4 is a flow chart showing an example of the imaging process in step S2 of Fig. 3.
  • Fig. 5 is a schematic representation of a binarized image which is formed by the threshold value "ThB" calculated in step S4 of Fig. 3.
  • Fig. 6 is a view for explaining the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3.
  • Fig. 7 is a flow chart showing an example of the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3.
  • Fig. 8 is a flow chart showing an example of the process of calculating the coordinates of the left edge point in step S34 of Fig. 7.
  • Fig. 9 is a flow chart showing an example of the process of calculating the coordinates of the right edge point in step S35 of Fig. 7.
  • Fig. 10 is a flow chart showing an example of the process of calculating the coordinates of the upper edge point in step S36 of Fig. 7.
  • Fig. 11 is a flow chart showing an example of the process of calculating the coordinates of the lower edge point in step S37 of Fig. 7.
  • Fig. 12 is a flow chart showing an example of the process of determining whether or not the sword trajectory object is to appear in step S6 of Fig. 3.
  • Fig. 13 is a view for explaining the orientation of a sword 11 which is determined by the high speed processor 21 of Fig. 2.
  • Fig. 14 is a flow chart showing an example of the process of determining whether or not the shield object is to appear in step S7 of Fig. 3.
  • Fig. 15 is a view for explaining a third rule of the tilt determination in accordance with the present embodiment.
  • Fig. 16 is a view for explaining a first rule of the tilt determination in accordance with the present embodiment.
  • Fig. 17 is a view for explaining a second rule of the tilt determination in accordance with the present embodiment.
  • Fig. 18 is a flow chart showing an example of the preprocessing in accordance with the first rule in step S8 of Fig. 3.
  • Fig. 19 is a flow chart showing an example of the preprocessing in accordance with the second rule in step S9 of Fig. 3.
  • Fig. 20 is a flow chart showing an example of the preprocessing in accordance with the third rule in step S10 of Fig. 3.
  • Fig. 21 is a view for explaining the tilt of the sword 11 which is determined by the high speed processor 21 of Fig. 2.
  • Fig. 22 is an explanatory view showing the animation of the sword trajectory object which is displayed in accordance with the direction in which the sword 11 of Fig. 1 is swung.
  • Fig. 23 is a view showing examples of the shield objects "A0" to "A4" which are displayed in correspondence with the tilts "a0" to "a4" of Fig. 21.
  • Fig. 24 is a view showing an example of the shield object "A1r" which is displayed on the television monitor 7 of Fig. 1.
  • Fig. 1 is a block diagram showing the entire configuration of a game system in accordance with an embodiment of the present invention. As shown in Fig. 1, this game system comprises a game apparatus 1, an operation article 11 and a television monitor 7.
  • the present embodiment is directed to an example of a game in which a player 17 operates an operation article 11 in order to cut down enemy objects which are displayed on the television monitor 7, and thus the operation article 11 is referred to as the "sword 11" in the following explanation.
  • the sword 11 is formed in a columnar shape which is tapered down from the base end to the tip. The sword 11 also comprises a grip section 13 which is gripped by the player 17, and a blade portion on which a retroreflective sheet 15 is attached.
  • the retroreflective sheet 15 is provided so as to generally cover the surface of the blade portion of the sword 11.
  • the game apparatus 1 is connected to a television monitor 7 by an AV cable 9. Furthermore, although not shown in the figure, the game apparatus 1 is supplied with a power supply voltage from an AC adapter or a battery.
  • the game apparatus 1 is provided with an infrared filter 5 which is located in the front side of the game apparatus 1 and serves to transmit only infrared light, and furthermore there are four infrared light emitting diodes 3 which are located around the infrared filter 5 and serve to emit infrared light.
  • An image sensor 19 to be described below is located behind the infrared filter 5.
  • the four infrared light emitting diodes 3 intermittently emit infrared light. Then, the infrared light emitted from the infrared light emitting diodes 3 is reflected by the retroreflective sheet 15 attached to the sword 11, and input to the image sensor 19 located behind the infrared filter 5. An image of the sword 11 can be captured by the image sensor 19 in this way. While infrared light is intermittently emitted, the image sensor 19 performs the imaging process even in non-emission periods.
  • the position, area, tilt and the like of the sword 11 can be detected in the game apparatus 1 by calculating the differential image between the image with infrared light and the image without infrared light when a player 17 swings the sword 11.
  • Fig. 2 is a schematic diagram showing the electric configuration of the game apparatus 1 of Fig. 1.
  • the game apparatus 1 includes the image sensor 19, the infrared light emitting diodes 3, a high speed processor 21, a ROM (read only memory) 23 and a bus 25.
  • the sword 11 is illuminated with the infrared light which is emitted from the infrared light emitting diodes 3 and thereby the retroreflective sheet 15 reflects the infrared light .
  • the image sensor 19 receives the reflected light from this retroreflective sheet 15 for capturing an image, and outputs an image signal of the retroreflective sheet 15.
  • This analog image signal from the image sensor 19 is converted into digital data by an A/D converter (not shown in the figure) implemented within the high speed processor 21. This process is also performed in the periods without infrared light.
  • the high speed processor 21 makes the infrared light emitting diodes 3 flash intermittently in order to perform such stroboscopic imaging.
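The on/off imaging cadence can be sketched as follows. This Python is purely illustrative; the class and function names are hypothetical stand-ins for the hardware, and the toy sensor simply returns a brighter frame while the LEDs are lit.

```python
# Hypothetical sketch of the stroboscopic imaging cadence: capture one
# frame during infrared emission and one during non-emission, yielding the
# pair of frames that is later differenced.

class Leds:
    """Toy stand-in for the four infrared light emitting diodes."""
    def __init__(self):
        self.lit = False
    def on(self):
        self.lit = True
    def off(self):
        self.lit = False

class Sensor:
    """Toy 32x32 sensor: the reflection is bright only while LEDs are lit."""
    def __init__(self, leds):
        self.leds = leds
    def capture(self):
        level = 200 if self.leds.lit else 20  # ambient-only when unlit
        return [[level] * 32 for _ in range(32)]

def strobe_capture(leds, sensor):
    leds.on()
    frame_lit = sensor.capture()    # stored as P1[X][Y] in the embodiment
    leds.off()
    frame_unlit = sensor.capture()  # stored as P2[X][Y]
    return frame_lit, frame_unlit
```

Differencing the two frames then cancels the ambient level and leaves only the retroreflected light.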
  • the processor 21 includes various functional blocks such as a CPU (central processing unit), a graphics processor, a sound processor and a DMA controller, and in addition includes the A/D converter for accepting analog signals and an input/output control circuit for receiving input signals from external electronic circuits and electronic elements and outputting output signals to them.
  • the image sensor 19 and the infrared light emitting diodes 3 are controlled by the CPU through the input/output control circuit .
  • the CPU runs a game program stored in the ROM 23, and performs various types of arithmetic operations. Accordingly, the graphics processor and the sound processor read image data and sound data stored in the ROM 23 in accordance with the results of the operations performed by the CPU, generate a video signal and an audio signal, and output them through the AV cable 9.
  • the high speed processor 21 is provided with an internal memory, which is not shown in the figure but is for example a RAM (random access memory).
  • the internal memory is used to provide a working area, a counter area, a register area, a temporary data area, a flag area and/or the like.
  • the high speed processor 21 processes the digital image signal input from the image sensor 19 through the A/D converter, detects the position, area, tilt and the like of the sword 11, and generates a video signal and an audio signal by performing a graphics process, a sound process and other processes and computations.
  • the video signal and the audio signal are supplied to the television monitor 7 through the AV cable 9 and thereby the television monitor 7 displays an image corresponding to the video signal while the speaker thereof (not shown in the figure) outputs sound corresponding to the audio signal .
  • Fig. 3 is a flow chart showing an example of the overall process flow of the game apparatus 1 of Fig. 1.
  • the high speed processor 21 performs the initial settings of the system in step S1.
  • the high speed processor 21 performs the process of imaging the sword 11 by driving the infrared light emitting diodes 3.
  • Fig. 4 is a flow chart showing an example of the imaging process in step S2 of Fig. 3.
  • the high speed processor 21 turns on the infrared light emitting diodes 3 in step S20.
  • the high speed processor 21 acquires, from the image sensor 19, image data with infrared light, and stores the image data in the internal memory.
  • a CMOS image sensor of 32 pixels x 32 pixels is used as the image sensor 19 of the present embodiment.
  • pixel data of 32 pixels x 32 pixels is output as image data from the image sensor 19. This pixel data is converted into digital data by the A/D converter and stored in the internal memory as a two-dimensional array element "P1[X][Y]".
  • in step S22, the high speed processor 21 turns off the infrared light emitting diodes 3.
  • in step S23, the high speed processor 21 acquires, from the image sensor 19, image data (pixel data of 32 pixels x 32 pixels) without infrared light, and stores the image data in the internal memory. In this case, this pixel data is stored in the internal memory as a two-dimensional array element "P2[X][Y]".
  • the stroboscopic imaging is performed in this way.
  • the horizontal axis is the X-axis and the vertical axis is the Y-axis.
  • the pixel data comprises a luminance value.
  • in step S3, the high speed processor 21 calculates the differential data between the pixel data acquired when the infrared light emitting diodes 3 are turned on (i.e., the respective array elements "P1[X][Y]") and the pixel data acquired when the infrared light emitting diodes 3 are turned off (i.e., the corresponding array elements "P2[X][Y]"), and the differential data is assigned to an array element "Dif[X][Y]".
  • the coordinate system in which the respective pixels of the differential image are located is the same coordinate system in which the respective pixels of the images captured by the image sensor 19 are located.
  • the array element "P1[X][Y]" (i.e., the pixel data with illumination)
  • the array element "P2[X][Y]" (i.e., the pixel data without illumination)
  • the array element "Dif[X][Y]" (i.e., the differential data)
  • in step S4, the high speed processor 21 calculates a threshold value "ThB" which is used to binarize each array element "Dif[X][Y]" obtained in step S3. More specifically, the high speed processor 21 extracts the element (the differential data) having the maximum luminance value from among all the array elements "Dif[X][Y]", multiplies the maximum luminance value by a predetermined value (for example, 0.6), and sets the current threshold value "ThB" to the result. As is apparent from Fig. 3, the above calculation of the threshold value "ThB" is performed every time the display screen of the television monitor 7 is updated.
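The adaptive threshold just described amounts to a couple of lines. This is an illustrative sketch (the function name is invented; the 0.6 default is the example factor given in the embodiment):

```python
# Sketch of the per-frame threshold "ThB": find the maximum luminance in
# the differential image and scale it by a predetermined factor (0.6 in
# the embodiment), so the threshold adapts to each captured frame.

def binarize_threshold(dif, factor=0.6):
    max_luma = max(max(row) for row in dif)
    return max_luma * factor
```

Because the threshold is recomputed from each frame's own maximum, the binarization tolerates frame-to-frame changes in reflected brightness, e.g. as the sword moves toward or away from the sensor.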
  • Fig. 5 is a schematic representation of a binarized image which is formed by the threshold value "ThB" calculated in step S4 of Fig. 3.
  • the binarized image 27 (32 pixels x 32 pixels) is obtained by binarizing all the array elements "Dif[X][Y]" on the basis of the threshold value "ThB".
  • the binarized image 27 contains an image "IM" of the retroreflective sheet 15 attached to the sword 11.
  • in practice, the respective processes are performed while comparing each array element "Dif[X][Y]" with the threshold value "ThB", and thus the binarized image 27 illustrated in Fig. 5 is not actually generated during processing.
  • the coordinate system in which the respective pixels of the binarized image are located is the same coordinate system in which the respective pixels of the image captured by the image sensor 19 are located .
  • the origin "0" of the coordinate system is as illustrated in the figure .
  • in step S5, the coordinates (XU, YU) of the upper edge point, the coordinates (XB, YB) of the lower edge point, the coordinates (XL, YL) of the left edge point and the coordinates (XR, YR) of the right edge point of the image "IM" of the retroreflective sheet 15 are obtained.
  • the upper edge point, the lower edge point, the left edge point and the right edge point are each also referred to simply as an edge point.
  • the upper edge point is used to represent a single pixel or a plurality of pixels having the minimum Y-coordinate of the image "IM".
  • the lower edge point is used to represent a single pixel or a plurality of pixels having the maximum Y-coordinate of the image "IM".
  • the left edge point is used to represent a single pixel or a plurality of pixels having the minimum X-coordinate of the image "IM".
  • the right edge point is used to represent a single pixel or a plurality of pixels having the maximum X-coordinate of the image "IM".
  • in the case where there is a single point which corresponds to the upper edge point, the coordinates (XU, YU) of the upper edge point are set to the coordinates of this single point, and in the case where there are a plurality of points which correspond to the upper edge point, the coordinates (XU, YU) of the upper edge point are set to the arithmetic mean values of the respective coordinates of the plurality of points.
  • in the case where there is a single point which corresponds to the lower edge point, the coordinates (XB, YB) of the lower edge point are set to the coordinates of this single point, and in the case where there are a plurality of points which correspond to the lower edge point, the coordinates (XB, YB) of the lower edge point are set to the arithmetic mean values of the respective coordinates of the plurality of points.
  • in the case where there is a single point which corresponds to the left edge point, the coordinates (XL, YL) of the left edge point are set to the coordinates of this single point, and in the case where there are a plurality of points which correspond to the left edge point, the coordinates (XL, YL) of the left edge point are set to the arithmetic mean values of the respective coordinates of the plurality of points.
  • in the case where there is a single point which corresponds to the right edge point, the coordinates (XR, YR) of the right edge point are set to the coordinates of this single point, and in the case where there are a plurality of points which correspond to the right edge point, the coordinates (XR, YR) of the right edge point are set to the arithmetic mean values of the respective coordinates of the plurality of points.
  • Fig. 6 is a view for explaining the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3.
  • each rectangle corresponds to one pixel in the image "IM" of the retroreflective sheet 15.
  • for example, as shown in Fig. 6, the high speed processor 21 sets the coordinates (XU, YU) of the upper edge point to the coordinates of the pixel indicated by an arrow "UA" in the image "IM", the coordinates (XB, YB) of the lower edge point to the coordinates of the pixel indicated by an arrow "BA" in the image "IM", the coordinates (XL, YL) of the left edge point to the coordinates of the pixel indicated by an arrow "LA" in the image "IM", and the coordinates (XR, YR) of the right edge point to the coordinates of the pixel indicated by an arrow "RA" in the image "IM".
  • the high speed processor 21 sets the coordinates (XU, YU) of the upper edge point to the coordinates of the point indicated by an arrow "UA” corresponding to the arithmetic mean values of the respective coordinates of the plurality of pixels, and in the case where there are a plurality of pixels which correspond to the left edge point, the high speed processor 21 sets the coordinates (XL, YL) of the left edge point to the coordinates of the point indicated by an arrow "LA” corresponding to the arithmetic mean values of the respective coordinates of the plurality of pixels .
  • Fig. 7 is a flow chart showing an example of the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3.
  • the high speed processor 21 assigns "0" to the variables "X", "Y", "maxX", "maxY", "XL", "YL", "XR", "YR", "XU", "YU", "XB", "YB", "Ca", "Cl", "Cr", "Cu" and "Cb" respectively.
  • the high speed processor 21 assigns "31” to variables "minX” and “minY” .
  • step S31 the high speed processor 21 compares the array element "Dif [X] [Y] " with the predetermined threshold value "ThB” .
  • step S32 when the array element "Dif [X] [Y] " is larger than the predetermined threshold value "ThB” , the high speed processor 21 proceeds to step S33, and conversely when it is no larger than the predetermined threshold value "ThB” , the high speed processor 21 proceeds to step S38.
  • the process in steps S31 and S32 is the process of detecting whether or not the retroreflective sheet 15 is captured.
  • the pixels corresponding to the retroreflective sheet 15 have larger luminance values in the differential image formed by the array elements "Dif[X][Y]"; it is therefore possible to recognize each pixel whose luminance value exceeds the threshold value "ThB" as part of the captured retroreflective sheet 15, by determining whether or not the luminance value is greater than the threshold value "ThB".
  • step S33 the high speed processor 21 increments the counter value "Ca” by one in order to count the array elements "Dif [X] [Y] " having luminance values larger than the threshold value "ThB” .
  • step S34 the high speed processor 21 calculates the coordinates (XL, YL) of the left edge point of the image "IM" of the retroreflective sheet 15.
  • Fig . 8 is a flow chart for showing an example of the process of calculating the coordinates of the left edge point in step S34 of Fig . 7.
  • the high speed processor 21 compares the minimum X-coordinate "minX" with the coordinate "X" in step S50. If "X" is less than or equal to "minX" in step S51, the high speed processor 21 proceeds to step S52 , otherwise proceeds to step S35 of Fig . 7.
  • step S52 the high speed processor 21 assigns "X" to "minX” to update "minX” .
  • step S54 the high speed processor 21 assigns "0" to the variable "YL” , and in step S55 the high speed processor 21 assigns "0" to the counter value "Cl” indicative of the number of pixels having the same X-coordinate equal to "minX” and then proceeds to step S56.
  • step S56 the high speed processor 21 increments the counter value "Cl” by one .
  • step S57 the high speed processor 21 assigns "minX” to the variable “XL” to obtain a new value of the variable "XL” , and adds "Y” to the variable "YL” to obtain a new value of the variable “YL” .
  • the value of the variable “YL” is the total value of the Y-coordinates of the pixels having the same X-coordinate equal to "minX" .
  • step S35 the high speed processor 21 calculates the coordinates (XR, YR) of the right edge point of the image "IM" of the retroreflective sheet 15.
  • Fig . 9 is a flow chart for showing an example of the process of calculating the coordinates of the right edge point in step S35 of Fig . 7.
  • the high speed processor 21 compares the maximum X-coordinate "maxX" with the coordinate "X" in step S60. If "X" is larger than or equal to "maxX” in step S61 , the high speed processor 21 proceeds to step S62 otherwise proceeds to step S36 of Fig . 7.
  • step S62 the high speed processor 21 assigns "X" to "maxX” in order to obtain a new value of "maxX” .
  • step S64 the high speed processor 21 assigns "0" to the variable "YR” , and in step S65 the high speed processor 21 assigns "0" to the counter value "Cr” indicative of the number of pixels having the same X-coordinate equal to "maxX” and then proceeds to step S66.
  • step S66 the high speed processor 21 increments the counter value "Cr” by one .
  • step S67 the high speed processor 21 assigns "maxX” to the variable "XR” to obtain a new value of the variable "XR” , and adds "Y” to the variable "YR” to obtain a new value of the variable "YR” .
  • the value of the variable "YR” is the total value of the Y-coordinates of the pixels having the same X-coordinate equal to "maxX” .
  • step S36 the high speed processor 21 calculates the coordinates (XU, YU) of the upper edge point of the image " IM" of the retroreflective sheet 15.
  • Fig . 10 is a flow chart for showing an example of the process of calculating the coordinates of the upper edge point in step S36 of Fig . 7. As shown in Fig . 10 , the high speed processor 21 compares the minimum Y-coordinate "minY" with the coordinate "Y" in step S70. If "Y" is smaller than or equal to "minY" in step S71 , the high speed processor 21 proceeds to step S72 otherwise proceeds to step S37 of Fig . 7.
  • step S72 the high speed processor 21 assigns "Y” to "minY” in order to obtain a new value of "minY” .
  • step S74 the high speed processor 21 assigns "0" to the variable "XU” , and in step S75 the high speed processor 21 assigns " 0" to the counter value "Cu” indicative of the number of pixels having the same Y-coordinate equal to "minY” and then proceeds to step S76.
  • step S76 the high speed processor 21 increments the counter value "Cu” by one .
  • step S77 the high speed processor 21 adds "X" to the variable "XU” to obtain a new value of the variable "XU” , and assigns "minY” to the variable "YU” to obtain a new value of the variable "YU” .
  • the value of the variable "XU” is the total value of the X-coordinates of the pixels having the same Y-coordinate equal to "minY” .
  • step S37 the high speed processor 21 calculates the coordinates (XB, YB) of the lower edge point of the image "IM" of the retroreflective sheet 15.
  • Fig . 11 is a flow chart for showing an example of the process of calculating the coordinates of the lower edge point in step S37 of Fig . 7.
  • step S80 the high speed processor 21 compares the maximum Y-coordinate "maxY" with the coordinate "Y" . If “Y" is larger than or equal to "maxY” in step S81 , the high speed processor 21 proceeds to step S82 otherwise proceeds to step S38 of Fig . 7.
  • step S82 the high speed processor 21 assigns "Y" to "maxY” in order to obtain a new value of "maxY” .
  • step S83 the new "maxY” is compared with the previous "maxY" (i . e .
  • step S84 the high speed processor 21 assigns "0" to the variable "XB” , and in step S85 the high speed processor 21 assigns "0" to the counter value "Cb" indicative of the number of pixels having the same Y-coordinate equal to "maxY” and then proceeds to step S86.
  • step S86 the high speed processor 21 increments the counter value "Cb” by one .
  • step S87 the high speed processor 21 adds "X" to the variable "XB” to obtain a new value of the variable "XB” , and assigns "maxY” to the variable "YB” to obtain a new value of the variable "YB” .
  • the value of the variable "XB” is the total value of the X-coordinates of the pixels having the same Y-coordinate equal to "maxY” .
  • step S38 the high speed processor 21 increments the variable "Y" indicative of the Y-coordinate of pixels being processed in the differential image .
  • step S39 the high speed processor 21 determines whether or not the value of the variable "Y" reaches "32" , and if it is "YES” the process proceeds to step S40 conversely if it is "NO” the process proceeds to step S31.
  • the high speed processor 21 assigns "0" to the variable "Y” in step S40, and increments the variable "X” indicative of the X- coordinate of pixels in the differential image in step S41. After these processes, the process in steps S31 to S37 is performed for the pixels located on the next column.
  • step S42 the high speed processor 21 determines whether or not the value of the variable "X" reaches "32" , and if it is "YES” the process proceeds to step S43, conversely if it is "NO” the process proceeds to step S31.
  • "X" = 32 in step S42 means that the process in steps S31 to S37 is finished for all the pixels of the differential image (32 x 32 pixels).
  • step S43 the high speed processor 21 assigns "YL/Cl" to the variable "YL" in order to obtain a new value of the variable "YL".
  • the high speed processor 21 acquires the coordinates (XL, YL) of the left edge point, the coordinates (XR, YR) of the right edge point, the coordinates (XU, YU) of the upper edge point, and the coordinates (XB, YB) of the lower edge point .
  • step S44 the high speed processor 21 obtains the coordinates (Xc, Yc) of the center point among the upper, lower, left and right edge points of the image "IM" (hereinafter referred to as the "representative point” ) , and returns to the main routine .
  • Xc = (XL + XR)/2 ... (5)
  • Yc = (YU + YB)/2 ... (6)
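The scan in steps S31 to S44 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the array name `dif` and the threshold value `ThB=40` are assumptions (the patent only names "Dif" and "ThB"). Extreme X/Y coordinates are tracked while ties are averaged, and the representative point is the center of the edge points per equations (5) and (6).

```python
def edge_points(dif, ThB=40):
    # 32x32 differential image; coordinates run 0..31 in each axis.
    minX = minY = 31
    maxX = maxY = 0
    XL = YL = XR = YR = XU = YU = XB = YB = 0
    Cl = Cr = Cu = Cb = Ca = 0
    for X in range(32):              # step S41: advance column
        for Y in range(32):          # step S38: advance row
            if dif[X][Y] <= ThB:     # steps S31-S32: not the sheet
                continue
            Ca += 1                  # step S33: area counter
            if X <= minX:            # left edge (smallest X)
                if X < minX:         # new minimum: restart the average
                    minX, YL, Cl = X, 0, 0
                Cl += 1
                XL, YL = minX, YL + Y    # accumulate Y for averaging
            if X >= maxX:            # right edge (largest X)
                if X > maxX:
                    maxX, YR, Cr = X, 0, 0
                Cr += 1
                XR, YR = maxX, YR + Y
            if Y <= minY:            # upper edge (smallest Y)
                if Y < minY:
                    minY, XU, Cu = Y, 0, 0
                Cu += 1
                XU, YU = XU + X, minY
            if Y >= maxY:            # lower edge (largest Y)
                if Y > maxY:
                    maxY, XB, Cb = Y, 0, 0
                Cb += 1
                XB, YB = XB + X, maxY
    if Ca == 0:
        return None                  # sheet not captured
    # step S43: average over ties; step S44: representative point
    YL, YR, XU, XB = YL / Cl, YR / Cr, XU / Cu, XB / Cb
    Xc, Yc = (XL + XR) / 2, (YU + YB) / 2
    return (XL, YL), (XR, YR), (XU, YU), (XB, YB), (Xc, Yc)
```

For a short diagonal run of bright pixels, the left/upper edge is its first pixel, the right/lower edge its last, and the representative point their midpoint.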
  • the high speed processor 21 determines whether or not the condition for displaying a sword trajectory object on the television monitor 7 is satisfied.
  • the sword trajectory object is a belt-like object (refer to Fig. 22 to be described below) which is used to represent the swing trajectory of the sword 11 (slash mark) in the real space on the television monitor 7.
  • the above condition for displaying the sword trajectory object on the television monitor 7 is that the sword 11 is swung at a speed higher than a predetermined speed.
  • Fig . 12 is a flow chart for showing an example of the process of determining whether or not the sword trajectory object is to be displayed in step S6 of Fig . 3.
  • the high speed processor 21 checks a sword flag indicative of whether or not the condition for displaying the sword trajectory object on the television monitor 7 is satisfied, and if the sword flag is turned on (the condition is satisfied) the process proceeds to step S91 , conversely if turned off the process proceeds to step S98.
  • the high speed processor 21 checks in step S98 whether or not the current and previous representative points (Xc, Yc) are present, and if both the representative points are present, a velocity vector can be calculated so that the process proceeds to step S99, otherwise the process proceeds to step S103 in which the sword flag is turned off and returns to the main routine .
  • step S99 the velocity vector "v" (the X-coordinate of the current representative point minus the X-coordinate of the previous representative point, the Y-coordinate of the current representative point minus the Y-coordinate of the previous representative point) is calculated by setting the end point thereof to the representative point calculated in step S44 in the current cycle and the start point thereof to the representative point calculated in step S44 in the previous cycle .
  • step S100 the high speed processor 21 calculates the absolute value |v| of the velocity vector "v", i.e., the speed of the sword 11.
  • the high speed processor 21 determines whether or not the speed |v| of the sword 11 exceeds the threshold value "ThV" (the condition for displaying the sword trajectory object on the television monitor 7) in step S101, and if it exceeds the threshold the process proceeds to step S102, otherwise it proceeds to step S103 in which the sword flag is turned off and returns to the main routine.
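The velocity test in steps S99 to S101 amounts to thresholding the length of the difference between the current and previous representative points. A minimal sketch follows; the function name and the value `ThV=4.0` are assumptions, since the patent only names the threshold:

```python
import math

def sword_flag_from_motion(prev_pt, curr_pt, ThV=4.0):
    if prev_pt is None or curr_pt is None:
        return False                      # no velocity vector available
    vx = curr_pt[0] - prev_pt[0]          # step S99: end minus start
    vy = curr_pt[1] - prev_pt[1]
    speed = math.hypot(vx, vy)            # step S100: |v|
    return speed > ThV                    # step S101: compare with ThV
```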
  • the high speed processor 21 turns on the sword flag, and returns to the main routine .
  • the sword trajectory object is not necessarily displayed in the next video frame just after the sword flag is turned on, but the process in steps S91 to S97 is performed in advance. If the sword flag is turned on, the high speed processor 21 proceeds to step S91 from step S90 to check whether or not the current representative point is present, and if it is present the process proceeds to step S92, otherwise it proceeds to step S94.
  • step S92 the velocity vector "v" is calculated anew by setting the end point thereof to the representative point calculated in step S44 in the current cycle and the start point thereof to the start point of the velocity vector "v” which is calculated in step S99 in the previous cycle .
  • step S93 the high speed processor 21 determines whether or not the video frame being displayed at present on the television monitor 7 is the fourth video frame as counted from the video frame displayed when the sword flag is turned on, and if it is the fourth video frame the process proceeds to step S95 otherwise returns to the main routine . In this case, the video frame displayed when the sword flag is turned on is counted as the zeroth video frame .
  • step S94 the high speed processor 21 determines whether or not the video frame being displayed at present on the television monitor 7 is the first video frame as counted from the video frame displayed when the sword flag is turned on, and if it is the first video frame the process returns to the main routine otherwise proceeds to step S95.
  • step S95 the high speed processor 21 turns on a trajectory flag which is used to indicate that the sword trajectory object corresponding to the latest velocity vector "v" is to be displayed in the video frame next to the video frame being currently displayed on the television monitor 7.
  • the velocity vector "v” is calculated from the representative points of six video frames at a maximum by performing the process in step S93 before step S95. This is because the velocity vector "v” has been calculated from the representative points of two video frames when the sword flag is turned on (refer to step S99) , and the video frame when "YES" is first determined in step S90 is the first video frame as counted from the video frame displayed when the sword flag is turned on .
  • step S94 by performing the process in step S94 before step S95, even if the current video frame is not the fourth video frame (refer to step S93 ) but if it is the video frame after the first video frame as counted from the video frame displayed when the sword flag is turned on (i . e . , "NO" in step S94 ) , the trajectory flag is turned on in the case where the current representative point is not present .
  • the trajectory flag is not turned on in the case where the current representative point is not present, and the process returns to the main routine .
  • step S96 the high speed processor 21 turns off the sword flag .
  • step S97 the high speed processor 21 classifies the orientation of the velocity vector "v" into one of eight orientations "d0" to "d7" and sets a sword orientation flag to a value in accordance with the classification result, and the process returns to the main routine.
  • the classification of the orientation of the velocity vector "v” in step S97 will be explained in detail .
  • Fig . 13 is a view for explaining the orientation of the sword 11 which is determined by the high speed processor 21 of Fig . 2. In Fig .
  • the orientations "d ⁇ " and “d4 " are aligned with the X-axis of the differential image
  • the orientations "d2" and “d6” are aligned with the Y-axis of the differential image .
  • 360 degrees are equally divided into eight angular ranges of 45 degrees such that each angular range is represented by one of the orientations "d0" to "d7".
  • the sword orientation flag is set to a value corresponding to the angular range in which the orientation of the velocity vector "v" of the sword 11 is located. For example, if the orientation of the velocity vector "v" is located in the angular range corresponding to the orientation "d5", the sword orientation flag is set to the value corresponding to the orientation "d5".
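Binning a vector into eight 45-degree sectors can be sketched with `atan2`. This is an illustration, not the patent's code; note that whether "d2" points up or down on screen depends on the image's Y-axis convention, which this excerpt does not fix (here Y grows with the pixel index):

```python
import math

def classify_orientation(vx, vy):
    # Angle measured from the X-axis of the differential image,
    # normalized to [0, 360); each sector d0..d7 spans 45 degrees
    # centered on multiples of 45.
    angle = math.degrees(math.atan2(vy, vx)) % 360.0
    return int(round(angle / 45.0)) % 8
```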
  • step S7 the high speed processor 21 determines whether or not a condition for displaying a belt-like shield object on the television monitor 7 is satisfied.
  • the above condition for displaying the shield object on the television monitor 7 is that the representative point of the sword 11 stays in the same area for a predetermined period after the area of the sword 11 in the differential image exceeds a predetermined value .
  • Fig . 14 is a flow chart for showing an example of the process of determining whether or not the shield obj ect is to be displayed in step S7 of Fig. 3.
  • the high speed processor 21 compares the threshold value "ThA” and the counter value "Ca" corresponding to the area of the retroreflective sheet 15 in the differential image .
  • the counter value "Ca” i . e . , the area of the retroreflective sheet 15 in the differential image
  • the high speed processor 21 proceeds to step S114 otherwise proceeds to step S112.
  • step S114 the high speed processor 21 checks a shield flag indicative of whether or not the condition for displaying the shield object on the television monitor 7 is satisfied, and if the shield flag is turned off the process proceeds to step S115, conversely if turned on the process returns to the main routine .
  • step S115 the high speed processor 21 checks a counter value "Cs" indicative of the period in the number of video frames in which the representative point of the sword 11 stays within a rectangular area "Ar" to be described below, and if the counter value "Cs" is "0” the process proceeds to step S116 otherwise proceeds to step S118.
  • step S116 the high speed processor 21 sets the rectangular area "Ar" having a center positioned at the representative point (Xc, Yc) of the sword 11.
  • step S117 the high speed processor 21 increments the counter value "Cs" by one, and the process returns to the main routine.
  • step S118 the high speed processor 21 determines whether or not the representative point is located in the rectangular area "Ar" , and if it is located the process proceeds to step S119 otherwise proceeds to step S123.
  • step S119 the high speed processor 21 increments the counter value "Cs" by one .
  • step S120 the high speed processor 21 determines whether or not the counter value "Cs" is equal to a predetermined value (for example, "3" ) , and if it is equal the process proceeds to step S121 otherwise the process returns to the main routine .
  • the high speed processor 21 turns on the shield flag in step S121 , assigns "0" to the counter value "Cs" in step S122 , and the process returns to the main routine .
  • step S112 since the counter value "Ca" indicative of the area does not exceed the threshold value "ThA” , "0" is assigned to the counter value "Cs” , and in step S113 the shield flag is turned off and the process returns to the main routine .
  • step S123 since the representative point does not stay in the rectangular area "Ar", "0" is assigned to the counter value "Cs", and in step S124 the shield flag is turned off and the process returns to the main routine.
  • the shield flag is maintained in the state of "ON” in step S114 until the area becomes smaller than the threshold value "ThA" irrespective of the position of the representative point .
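The shield condition of Fig. 14 behaves as a small state machine: the flag turns on once the area counter "Ca" exceeds "ThA" and the representative point has stayed inside a rectangle "Ar" for a given number of consecutive frames, and it stays on until the area drops again. A hedged sketch; the class name, `ThA=20`, the rectangle half-size, and the frame count are assumed values:

```python
class ShieldDetector:
    def __init__(self, ThA=20, half=3, frames=3):
        self.ThA, self.half, self.frames = ThA, half, frames
        self.shield = False     # shield flag
        self.Cs = 0             # frames the point stayed inside Ar
        self.Ar = None          # center of the stay rectangle

    def update(self, Ca, pt):
        if Ca <= self.ThA:                      # steps S112-S113
            self.Cs, self.shield = 0, False
            return self.shield
        if self.shield:                         # step S114: stays on
            return True
        if self.Cs == 0:                        # step S116: fix Ar
            self.Ar = pt
            self.Cs = 1                         # step S117
            return False
        inside = (abs(pt[0] - self.Ar[0]) <= self.half and
                  abs(pt[1] - self.Ar[1]) <= self.half)
        if not inside:                          # steps S123-S124
            self.Cs, self.shield = 0, False
            return False
        self.Cs += 1                            # step S119
        if self.Cs == self.frames:              # steps S120-S122
            self.shield, self.Cs = True, 0
        return self.shield
```

Called once per video frame with the current area counter and representative point, it returns the current state of the shield flag.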
  • in steps S8 to S10, preprocessing for determining the tilt of the sword 11 is performed.
  • the third rule used in step S10 will be explained. Namely, it is assumed that the upper edge point, the lower edge point, the left edge point and the right edge point are the vertices of a quadrilateral. Among the four sides of the quadrilateral, if the shortest side and the next shortest side share one of the four edge points, the coordinates for determining the tilt of the sword 11 are set to the coordinates of the shared edge point and the coordinates of the edge point opposed to the shared edge point.
  • otherwise, i.e., if the shortest side and the next shortest side share none of the edge points, the coordinates for determining the tilt of the sword 11 are set to the coordinates of the center point of the shortest side and the coordinates of the center point of the next shortest side.
  • Fig . 15 is a view for explaining the third rule of the tilt determination in accordance with the present embodiment .
  • the upper edge point "up” , the lower edge point “bp” , the left edge point “Ip” and the right edge point “rp” are the vertices of a quadrilateral .
  • the shortest side “si” and the next shortest side “s2" share the edge point "Ip” , and thereby the coordinates for determining the tilt of the sword 11 are set to the coordinates of the shared edge point "Ip” and the coordinates of the edge point "rp” opposed to the shared edge point "Ip” . Accordingly, in step SIl of Fig . 3 to be described below, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through these edge points "Ip” and "rp” . In the case of the example shown in Fig .
  • the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through these center points "pi” and "p2" .
  • the first rule and the second rule are provided. Namely, it is first determined whether or not the first rule is applicable, and if applicable the tilt determination process is performed on the basis of the first rule; in the case where the first rule is not applicable, it is next determined whether or not the second rule is applicable, and if applicable the tilt determination process is performed on the basis of the second rule; in the case where the second rule is also not applicable, the tilt determination process is performed on the basis of the third rule.
  • Fig . 16 is a view for explaining the first rule of the tilt determination in accordance with the present embodiment .
  • the first rule is applicable in the case where the X-coordinate of the upper edge point "up" is equal to the X-coordinate of the lower edge point "bp" and the Y-coordinate of the left edge point "lp" is equal to the Y-coordinate of the right edge point "rp".
  • the coordinates for determining the tilt of the sword 11 are set in accordance with whether or not the ratio of the length "H" of the straight line "SH" connecting the left edge point "lp" and the right edge point "rp" to the length "V" of the straight line "SV" connecting the upper edge point "up" and the lower edge point "bp", i.e., the ratio "H/V", is larger than "1". Namely, if "H/V" is larger than "1", the coordinates for determining the tilt of the sword 11 are set to the coordinates of the left edge point "lp" and the right edge point "rp".
  • the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the left edge point "lp" and the right edge point "rp" in step S11 of Fig. 3 to be described below.
  • conversely, the coordinates for determining the tilt of the sword 11 are set to the coordinates of the upper edge point "up" and the lower edge point "bp". Accordingly, in such a case, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the upper edge point "up" and the lower edge point "bp" in step S11 of Fig. 3 to be described below.
  • Fig . 17 is a view for explaining the second rule of the tilt determination in accordance with the present embodiment .
  • if the ratio "H/V" is larger than the predetermined value "HV1", the coordinates for determining the tilt of the sword 11 are set to the coordinates of the left edge point "lp" and the right edge point "rp". Accordingly, in step S11 of Fig. 3 to be described below, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the left edge point "lp" and the right edge point "rp". On the other hand, if the ratio "H/V" is smaller than the predetermined value "HV2", the coordinates for determining the tilt of the sword 11 are set to the coordinates of the upper edge point "up" and the lower edge point "bp". Accordingly, in step S11 of Fig. 3 to be described below, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the upper edge point "up" and the lower edge point "bp".
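The first and second rules can be combined into one selection step: compute H and V from equations (7) and (8), apply the first rule when the edge points are symmetric, otherwise compare H/V against "HV1" and "HV2". A sketch under stated assumptions: the values `HV1=2.0` and `HV2=0.5` are invented (the patent only names the constants), and the guard for `V == 0` is an added safety check, not part of the flowchart:

```python
def pick_tilt_points(up, bp, lp, rp, HV1=2.0, HV2=0.5):
    H = rp[0] - lp[0]              # equation (7): H = XR - XL
    V = bp[1] - up[1]              # equation (8): V = YB - YU
    if V == 0:                     # degenerate flat blob: treat as wide
        return (lp, rp)
    if up[0] == bp[0] and lp[1] == rp[1]:   # first rule applicable
        return (lp, rp) if H / V > 1.0 else (up, bp)
    if H / V > HV1:                # second rule: clearly wider than tall
        return (lp, rp)
    if H / V < HV2:                # second rule: clearly taller than wide
        return (up, bp)
    return None                    # neither applies: use the third rule
```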
  • Fig . 18 is a flow chart for showing an example of the preprocessing in accordance with the first rule in step S8 of Fig . 3.
  • the high speed processor 21 checks the state of the shield flag in step S130, and if it is turned on the process proceeds to step S131, conversely if it is turned off the process proceeds to step S11 of Fig. 3.
  • the high speed processor 21 determines whether or not the X-coordinate "XU" of the upper edge point is equal to the X- coordinate "XB" of the lower edge point, and if it is not equal the process returns to the main routine, conversely if it is equal the process proceeds to step S132.
  • step S132 the high speed processor 21 determines whether or not the Y-coordinate "YL" of the left edge point is equal to the Y- coordinate "YR" of the right edge point, and if it is not equal the process returns to the main routine, conversely if it is equal the process proceeds to step S133. As has been discussed above, it is determined whether or not the first rule is applicable in steps S131 and S132.
  • step S133 the high speed processor 21 obtains the distance between the X-coordinate "XL” of the left edge point and the X- coordinate "XR" of the right edge point (i . e . , width "H” ) , and the distance between the Y-coordinate "YU” of the upper edge point and the Y-coordinate "YB” of the lower edge point (i . e . , height “V” ) in accordance with the following equations .
  • H = XR - XL ... (7)
  • V = YB - YU ... (8)
  • step S134 the high speed processor 21 calculates "H/V” .
  • step S135 the high speed processor 21 determines whether or not "H/V" is larger than "1" , and if it is larger the process proceeds to step S136 otherwise proceeds to step S137.
  • step S136 the high speed processor 21 sets the coordinates of the left edge point and the right edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3.
  • step S137 the high speed processor 21 sets the coordinates of the upper edge point and the lower edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3.
  • Fig . 19 is a flow chart for showing an example of the preprocessing in accordance with the second rule in step S9 of Fig . 3.
  • the high speed processor 21 obtains the distance between the X-coordinate "XL” of the left edge point and the X-coordinate "XR" of the right edge point (i . e . , width "H” ) , and the distance between the Y-coordinate "YU” of the upper edge point and the Y-coordinate "YB” of the lower edge point (i . e . , height "V” ) in accordance with the equation (7 ) and the equation (8 ) .
  • step S141 the high speed processor 21 calculates H/V.
  • step S142 the high speed processor 21 determines whether or not "H/V" is larger than the predetermined value "HV1", and if it is larger the process proceeds to step S143, otherwise it proceeds to step S144.
  • step S144 the high speed processor 21 determines whether or not "H/V" is smaller than the predetermined value "HV2", and if it is smaller the process proceeds to step S145, otherwise the process returns to the main routine. As has been discussed above, it is determined whether or not the second rule is applicable in steps S142 and S144.
  • step S143 the high speed processor 21 sets the coordinates of the left edge point and the right edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3.
  • step S145 the high speed processor 21 sets the coordinates of the upper edge point and the lower edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3.
  • Fig. 20 is a flow chart for showing an example of the preprocessing in accordance with the third rule in step S10 of Fig. 3.
  • the high speed processor 21 calculates the lengths of the four sides "s1" to "s4" of the quadrilateral defined by the upper edge point "up", the lower edge point "bp", the left edge point "lp" and the right edge point "rp" as its vertices.
  • step S151 the high speed processor 21 determines whether or not the shortest side and the next shortest side among the four sides share one edge point .
  • step S152 if one edge point is shared, the high speed processor 21 proceeds to step S153 otherwise proceeds to step S154.
  • step S153 the coordinates for determining the tilt of the sword 11 are set to the coordinates of the shared edge point and the edge point opposed to the shared edge point in the inner memory, and the process returns to the main routine .
  • step S154 since the shortest side and the next shortest side share none of the edge points , the coordinates for determining the tilt of the sword 11 are set to the coordinates of the center point of the shortest side and the coordinates of the center point of the next shortest side in the inner memory, and the process returns to the main routine .
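Steps S150 to S154 can be sketched as follows. This is an illustrative sketch, not the patent's code; the vertex traversal order up, rp, bp, lp is an assumption about how the quadrilateral is drawn:

```python
import math

def third_rule_points(up, rp, bp, lp):
    verts = [up, rp, bp, lp]           # quadrilateral in drawing order
    # step S150: lengths of the four sides, each keyed by the index of
    # its first vertex
    sides = sorted((math.dist(verts[i], verts[(i + 1) % 4]), i)
                   for i in range(4))
    (_, i1), (_, i2) = sides[0], sides[1]
    # steps S151-S152: do the two shortest sides share a vertex?
    shared = {i1, (i1 + 1) % 4} & {i2, (i2 + 1) % 4}
    if shared:                         # step S153: shared and opposite
        s = shared.pop()
        return verts[s], verts[(s + 2) % 4]
    # step S154: midpoints of the two shortest sides
    def mid(i):
        a, b = verts[i], verts[(i + 1) % 4]
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return mid(i1), mid(i2)
```

In the Fig. 15 situation, the two shortest sides meet at the left edge point, so the function returns that point and the right edge point opposite it.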
  • step S11 the high speed processor 21 calculates the tilt of the straight line "SL" passing through the two points which are set in the internal memory in steps S8 to S10 (steps S136, S137, S143, S145, S153 and S154). Then, the high speed processor 21 classifies the tilt of the straight line "SL" into one of eight tilts "a0" to "a7" and sets a shield tilt flag to a value in accordance with the classification result.
  • Fig . 21 is a view for explaining the tilt of the sword 11 which is determined by the high speed processor 21 of Fig . 2.
  • the tilt "a ⁇ ” is aligned with the X-axis of the differential image
  • the tilt "a4" is aligned with the Y-axis of the differential image .
  • the tilt of the straight line "SL" indicative of the tilt of the sword 11 is classified into one of the tilts "a0" to "a7". More specifically, angular ranges are defined respectively corresponding to the tilts "a0" to "a7", each of which indicates the center angular position of the corresponding angular range.
  • Each of the angular ranges extends for 11.25 degrees in the clockwise direction and 11.25 degrees in the counter clockwise direction from the corresponding center angular position .
  • the high speed processor 21 determines which of the angular ranges the tilt of the straight line "SL" belongs to, in order to represent the tilt of the sword 11 by the one of the tilts "a0" to "a7" corresponding to that angular range. For example, in the case where the tilt of the straight line "SL" belongs to the angular range corresponding to the tilt "a0", the tilt of the sword 11 is classified into the tilt "a0".
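Because a tilt has no direction, the eight ranges of plus/minus 11.25 degrees cover only 180 degrees, so the angle of "SL" is reduced modulo 180 before binning into 22.5-degree sectors. A minimal sketch (function name assumed; "a0" lies along the X-axis, "a4" along the Y-axis, per the description above):

```python
import math

def classify_tilt(p1, p2):
    # Tilt of the line through p1 and p2, binned into a0..a7.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = math.degrees(math.atan2(dy, dx)) % 180.0
    return int(round(angle / 22.5)) % 8
```

Note that swapping the two points leaves the result unchanged, as required for an undirected tilt.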
  • step S12 the high speed processor 21 performs information processing by the use of the processing result in steps S3 to S11.
  • the process of displaying images will be explained .
  • the high speed processor 21 stores in the inner memory the storage location information of the sword trajectory object in accordance with the value which is set in the sword orientation flag. Furthermore, in this case, the high speed processor 21 calculates the coordinates of the sword trajectory object in the screen coordinate system in order that the sword trajectory object in accordance with the value, which is set in the sword orientation flag, contains the coordinates (xc, yc) obtained by converting the coordinates (Xc, Yc) of the latest representative point into the screen coordinate system. In this description, the two-dimensional coordinate system actually used in displaying images on the television monitor 7 is called the screen coordinate system.
  • the high speed processor 21 stores the storage location information of the shield object in the inner memory in accordance with the value which is set in the shield tilt flag . Furthermore, in this case, the high speed processor 21 calculates the coordinates of the shield object in the screen coordinate system in order that the shield object in accordance with the value, which is set in the shield tilt flag, contains the coordinates (xc, yc) obtained by converting the coordinates (Xc, Yc) of the latest representative point into the screen coordinate system.
  • the high speed processor 21 stores in the inner memory the storage location information of the background and other obj ects (for example, an enemy obj ect and so forth) to be displayed on the television monitor 7 , and calculates the coordinates of the background and other objects in the screen coordinate system.
  • in step S13, the process repeats the same step S13 if it is "YES" in step S13, i.e., while waiting for a video system synchronous interrupt (while there is no video system synchronous interrupt).
  • when the CPU gets out of the state of waiting for a video system synchronous interrupt (i.e., when the CPU is given a video system synchronous interrupt), the process proceeds to step S14.
  • in step S14, the high speed processor 21 performs the process of updating the screen (video frame) displayed on the television monitor 7 in accordance with the processing result in step S12, and the process proceeds to step S2. That is to say, the high speed processor 21 reads the image data from the ROM 23 on the basis of the storage location information of the background and each object stored in step S12 and the coordinates in the screen coordinate system, and performs the necessary processing in order to generate the video signal of the background and the respective objects. By this process, the sword trajectory object, the shield object and so forth are displayed on the television monitor 7 in accordance with the processing result in step S12. The sound process in step S15 is performed when a sound interrupt is issued, and the high speed processor 21 outputs music sounds and other sound effects.
  • Fig. 22 is an explanatory view for showing the animation of the sword trajectory object which is displayed in accordance with the orientation in which the sword 11 of Fig. 1 is swung.
  • a belt-like image (the sword trajectory object) has a smaller width "w" at first, gradually increases in the width "w" as the animation picture advances (as time "t" passes), and thereafter decreases in the width "w" as the animation picture further advances.
  • one animation picture is displayed in one video frame so that 12 animation pictures are displayed in 12 video frames.
  • the video frame is updated at 1/60 second intervals.
  • blacked-out portions represent transparent portions.
  • in the case where the sword orientation flag is set to the value indicative of the orientation "d0" of Fig. 13, the sword trajectory objects of Fig. 22 are used and displayed after horizontally flipping them.
  • for the orientation "d2", there are images similar to those shown in Fig. 22 (but rotated by 90 degrees), and these images are used for the orientation "d6" by vertically flipping them.
  • for the orientation "d1", there are images similar to those shown in Fig. 22 (but rotated by 45 degrees), and these images are used for the orientations "d3", "d5" and "d7" by horizontally and vertically flipping them.
  • Fig. 23 is a view showing examples of the shield objects "A0" to "A4" which are displayed in correspondence with the tilts "a0" to "a4" of Fig. 21.
  • the shield objects "A0" to "A4" of Fig. 23 correspond respectively to the tilts "a0" to "a4" of Fig. 21. Accordingly, in the case where the shield tilt flag indicates one of the tilts "a0" to "a4", the corresponding one of the shield objects "A0" to "A4" is used and displayed as it is. In the case where the shield tilt flag indicates one of the tilts "a5" to "a7", the corresponding one of the shield objects "A3" to "A1" is used and displayed after horizontally flipping it.
  • Fig. 24 is a view showing an example of the shield object which is displayed on the television monitor 7 of Fig. 1.
  • a shield object "AIr” is displayed on the television monitor 7 by horizontally flipping the shield object "Al” of Fig. 23.
  • the tilt of the sword 11 is the tilt "a7" of Fig. 21.
  • the shield object is displayed on the television monitor 7 in accordance with the tilt indicated by the shield tilt flag (i.e., the tilt of the sword 11).
  • the shield object is displayed so as to extend from one side of the screen to the other. This is true also for the sword trajectory object (except for (k) and (l) of Fig. 22).
  • the coloration of the shield object is preferably a transparent color or a semitransparent color. This is because the objects (for example, an enemy object and the like) located behind the shield object can then be seen through it, so that the player 17 can manipulate the sword 11 while viewing the objects located behind.
  • the coordinates of two points are determined in accordance with the first rule to the third rule, which are experientially derived, in order to calculate the tilt of the sword 11. Accordingly, it is possible to easily and precisely detect the tilt of the sword 11.
  • although the operation article 11 is sword-like as an example in the above explanation, the shape of the operation article is not limited thereto. Also, the profile of the retroreflective sheet to be attached to the operation article is not limited to the profile of the retroreflective sheet 15.
  • a shield object corresponding to the tilt of the sword 11 is displayed on the television monitor 7.
  • the object to be displayed corresponding to the tilt of the sword 11 is not limited thereto, but any object having an arbitrary profile or configuration can be displayed.
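The angular classification and the shield-object selection described above can be sketched as follows. This is a hypothetical illustration, not code from the patent: the function names are invented, and the convention of measuring the line angle in degrees from the X-axis (with "a4" at 90 degrees on the Y-axis) and the 22.5-degree spacing of the class centers are inferences from the stated 11.25-degree half-width of each range.

```python
def classify_tilt(angle_deg):
    """Map a line angle in degrees (0 <= angle < 180, measured from the
    X-axis) to one of the eight tilt classes 0..7 ("a0" to "a7").

    Each class covers 11.25 degrees on either side of its center, so the
    centers are assumed to be spaced 22.5 degrees apart.
    """
    # Shift by half a bin so each 22.5-degree range is centered on its
    # representative angle, then wrap angles near 180 back onto "a0".
    return int((angle_deg + 11.25) // 22.5) % 8


def shield_image(tilt_class):
    """Select the shield object for a tilt class: "A0" to "A4" are used
    directly for "a0" to "a4"; "a5" to "a7" reuse "A3" to "A1" with a
    horizontal flip, as described for Fig. 23.
    """
    if tilt_class <= 4:
        return "A%d" % tilt_class, False       # displayed as it is
    return "A%d" % (8 - tilt_class), True      # horizontally flipped
```

For instance, a line at 90 degrees falls into class 4 ("a4", the Y-axis case), and class 7 ("a7") maps to the shield object "A1" displayed flipped, matching the "A1r" example of Fig. 24.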

Abstract

It is assumed that edge points 'up', 'bp', 'lp' and 'rp' are vertices of a quadrilateral. Among four sides of the quadrilateral, if the shortest side and the next shortest side share one of the four edge points, the tilt of the sword 11 is obtained on the basis of the straight line 'SL' passing through the coordinates of the shared edge point and the coordinates of the edge point opposed to the shared edge point. Contrary to this, if the shortest side and the next shortest side share no edge point, the tilt of the sword 11 is obtained on the basis of the straight line 'SL' passing through the coordinates of the center point of the shortest side and the coordinates of the center point of the next shortest side.

Description

DESCRIPTION
TILT DETECTION METHOD AND ENTERTAINMENT SYSTEM
Technical Field
The present invention relates to a tilt detection method, an entertainment system and the related techniques for detecting the tilt of an operation article by taking stroboscopic images of the operation article having a reflecting object.
Background Art
Japanese Patent Published Application No. 2004-85524 by the present applicant discloses a golf game system including a game apparatus and a golf-club-type input device (operation article), and the housing of the game apparatus houses an imaging unit which comprises an image sensor, infrared light emitting diodes and so forth. The infrared light emitting diodes intermittently emit infrared light to a predetermined area above the imaging unit while the image sensor intermittently captures images of the reflecting object of the golf-club-type input device which is moving in the predetermined area. The location and velocity of the golf-club-type input device can be detected by processing the stroboscopic images of the reflecting object.
Summary of The Invention
Accordingly, it is an object of the present invention to provide a tilt detection method, an entertainment system and the related techniques for detecting the tilt of an operation article by processing stroboscopic images of a reflecting object provided on the operation article.
In accordance with an aspect of the present invention, a tilt detection method of detecting a tilt of an operation article which is held and given motion by an operator comprises: emitting light in a predetermined cycle to the operation article which has a reflecting object; imaging the operation article to which the light is emitted, and acquiring lighted image data including a plurality of pixel data items each of which comprises a luminance value; imaging the operation article to which the light is not emitted, and acquiring unlighted image data including a plurality of pixel data items each of which comprises a luminance value; generating differential image data by calculating the difference between the lighted image data and the unlighted image data; obtaining the coordinates of a pixel having the maximum horizontal coordinate among the pixels of which the image of the operation article is made up in a differential image on the basis of the differential image data; obtaining the coordinates of a pixel having the minimum horizontal coordinate among the pixels of which the image of the operation article is made up; obtaining the coordinates of a pixel having the minimum vertical coordinate among the pixels of which the image of the operation article is made up; obtaining the coordinates of a pixel having the maximum vertical coordinate among the pixels of which the image of the operation article is made up; and calculating the tilt of the operation article on the basis of a first reference point and a second reference point which are selected on the basis of a first edge point located in the pixel having the maximum horizontal coordinate, a second edge point located in the pixel having the minimum horizontal coordinate, a third edge point located in the pixel having the maximum vertical coordinate and a fourth edge point located in the pixel having the minimum vertical coordinate, wherein, in the step of calculating the tilt of the operation article, in the case where
the first edge point, the second edge point, the third edge point and the fourth edge point are vertices of a quadrilateral having four sides among which the shortest side and the next shortest side share one of the first edge point, the second edge point, the third edge point and the fourth edge point, the first reference point and the second reference point are set respectively to the shared edge point and the edge point opposed to the shared edge point, and in the case where the first edge point, the second edge point, the third edge point and the fourth edge point are vertices of a quadrilateral having four sides among which the shortest side and the next shortest side share none of the first edge point, the second edge point, the third edge point and the fourth edge point, the first reference point and the second reference point are set respectively to the center point of the shortest side and the center point of the next shortest side.
In accordance with this configuration, it is possible to easily and precisely detect the tilt of the operation article .
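The reference-point selection described in this aspect can be sketched as follows. This is only an illustrative Python sketch: the patent prescribes no implementation, and the vertex ordering of the quadrilateral, the use of Euclidean distance, and the function name are assumptions.

```python
import math


def tilt_line(up, bp, lp, rp):
    """Return the two reference points that define the straight line
    "SL" from the four edge points, given as (x, y) tuples.

    up/bp are the max/min-vertical edge points, lp/rp the min/max-
    horizontal ones; they are treated as vertices of a quadrilateral in
    the order up -> rp -> bp -> lp (an assumed ordering).
    """
    verts = [up, rp, bp, lp]
    sides = [(verts[i], verts[(i + 1) % 4]) for i in range(4)]
    sides.sort(key=lambda s: math.dist(s[0], s[1]))
    shortest, next_shortest = sides[0], sides[1]
    shared = set(shortest) & set(next_shortest)
    if shared:
        # The two shortest sides meet at one vertex: SL runs from that
        # shared vertex to the vertex opposed to it.
        p1 = shared.pop()
        p2 = verts[(verts.index(p1) + 2) % 4]
    else:
        # No shared vertex: SL joins the midpoints of the two sides.
        p1 = ((shortest[0][0] + shortest[1][0]) / 2,
              (shortest[0][1] + shortest[1][1]) / 2)
        p2 = ((next_shortest[0][0] + next_shortest[1][0]) / 2,
              (next_shortest[0][1] + next_shortest[1][1]) / 2)
    return p1, p2
```

For a thin diagonal band, the two short ends of the quadrilateral do not meet, so the line through their midpoints recovers the band's axis; for an L-shaped arrangement of edge points, the shared-vertex branch applies.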
The above tilt detection method operates , in the case where there are a plurality of the pixels which have the maximum horizontal coordinate, such that the step of obtaining the coordinates of a pixel having the maximum horizontal coordinate sets the vertical coordinate of the first edge point to the average value of the vertical coordinates of the plurality of the pixels having the maximum horizontal coordinate; wherein in the case where there are a plurality of the pixels which have the minimum horizontal coordinate, the step of obtaining the coordinates of a pixel having the minimum horizontal coordinate sets the vertical coordinate of the second edge point to the average value of the vertical coordinates of the plurality of the pixels having the minimum horizontal coordinate; wherein in the case where there are a plurality of the pixels which have the maximum vertical coordinate, the step of obtaining the coordinates of a pixel having the maximum vertical coordinate sets the horizontal coordinate of the third edge point to the average value of the horizontal coordinates of the plurality of the pixels having the maximum vertical coordinate; and wherein in the case where there are a plurality of the pixels which have the minimum vertical coordinate, the step of obtaining the coordinates of a pixel having the minimum vertical coordinate sets the horizontal coordinate of the fourth edge point to the average value of the horizontal coordinates of the plurality of the pixels having the minimum vertical coordinate .
In accordance with this configuration, even in the case where there are a plurality of pixels having the maximum horizontal coordinate, a plurality of pixels having the minimum horizontal coordinate, a plurality of pixels having the maximum vertical coordinate and/or a plurality of pixels having the minimum vertical coordinate, it is possible to determine the first edge point, the second edge point, the third edge point and the fourth edge point .
The above tilt detection method further comprises, in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the first edge point and the second edge point; in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the third edge point and the fourth edge point; and in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with which is greater, the distance between the first edge point and the second edge point or the distance between the third edge point and the fourth edge point.
In accordance with this configuration, even in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, it is possible to precisely detect the tilt of the operation article . The above tilt detection method further comprises, in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with a ratio of the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point to the distance 
between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point .
In accordance with this configuration, even in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, it is possible to more precisely detect the tilt of the operation article.
The above tilt detection method further comprises , in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point if a ratio of the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point to the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point is greater than a first constant which exceeds "1" ; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the 
operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the third edge point and the fourth edge point if said ratio is smaller than a second constant which is smaller than "1" .
In accordance with this configuration, even in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, it is possible to furthermore precisely detect the tilt of the operation article .
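The ratio-based fallback for nearly axis-aligned cases can be sketched as follows. The patent only requires the first constant to exceed "1" and the second to be smaller than "1"; the specific values 1.25 and 0.8 below, like the function name, are illustrative assumptions, and returning None stands in for falling back to the quadrilateral-based selection.

```python
def choose_reference_pair(right, left, top, bottom, k1=1.25, k2=0.8):
    """Decide which edge-point pair defines the tilt line when the
    image of the article is nearly axis-aligned.

    right/left are the max/min-horizontal edge points, top/bottom the
    max/min-vertical ones, each an (x, y) tuple.
    """
    dx = abs(right[0] - left[0])   # horizontal extent
    dy = abs(top[1] - bottom[1])   # vertical extent
    if dy == 0:
        # Degenerate horizontal band: only the horizontal pair can
        # carry the tilt (an assumption for this sketch).
        return right, left
    ratio = dx / dy
    if ratio > k1:                 # clearly wider than tall
        return right, left
    if ratio < k2:                 # clearly taller than wide
        return top, bottom
    return None                    # use the quadrilateral-based rule
```

A band ten pixels wide but only two pixels tall thus uses the left and right edge points, while a tall narrow band uses the top and bottom ones.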
In accordance with another aspect of the present invention, an entertainment system comprises: an operation article that is operated by a user when the user is enjoying said entertainment system; an imaging device operable to capture an image of said operation article; and an information processing apparatus connected to said imaging device, and operable to receive the images of said operation article from said imaging device and determine tilts of said operation article on the basis of the images of said operation article, wherein said information processing apparatus at least includes: a unit operable to obtain four representative points for representing a profile of said operation article in the image of said operation article; a unit operable to determine whether or not there is a representative point, among the four representative points, which is shared by the shortest side and the next shortest side of a quadrilateral which is defined by the four representative points as its vertices; and a unit operable to calculate the tilt of said operation article on the basis of the tilt of a straight line passing through the shared representative point and the representative point opposed to the shared representative point when there is the shared representative point, and to calculate the tilt of said operation article on the basis of the tilt of a straight line passing through the center point of the shortest side of the quadrilateral and the center point of the next shortest side of the quadrilateral when there is not the shared representative point.
In accordance with this configuration, it is possible to easily and precisely detect the tilt of the operation article in the entertainment system.
Brief Description Of The Drawings
The novel features of the invention are set forth in the appended claims . The invention itself, however, as well as other features and advantages thereof, will be best understood by reading the detailed description of specific embodiments in conjunction with the accompanying drawings, wherein,
Fig. 1 is a block diagram showing the entire configuration of a game system in accordance with an embodiment of the present invention.
Fig. 2 is a schematic diagram showing the electric configuration of the game apparatus 1 of Fig. 1.
Fig. 3 is a flow chart for showing an example of the overall process flow of the game apparatus 1 of Fig. 1.
Fig. 4 is a flow chart for showing an example of the imaging process in step S2 of Fig. 3.
Fig. 5 is a schematic representation of a binarized image which is formed by the threshold value "ThB" calculated in step S4 of Fig. 3.
Fig. 6 is a view for explaining the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3.
Fig. 7 is a flow chart for showing an example of the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3.
Fig. 8 is a flow chart for showing an example of the process of calculating the coordinates of the left edge point in step S34 of Fig. 7.
Fig. 9 is a flow chart for showing an example of the process of calculating the coordinates of the right edge point in step S35 of Fig. 7.
Fig. 10 is a flow chart for showing an example of the process of calculating the coordinates of the upper edge point in step S36 of Fig. 7.
Fig. 11 is a flow chart for showing an example of the process of calculating the coordinates of the lower edge point in step S37 of Fig. 7.
Fig. 12 is a flow chart for showing an example of the process of determining whether or not the sword trajectory object is to appear in step S6 of Fig. 3.
Fig. 13 is a view for explaining the orientation of a sword 11 which is determined by the high speed processor 21 of Fig. 2.
Fig. 14 is a flow chart for showing an example of the process of determining whether or not the shield object is to appear in step S7 of Fig. 3.
Fig. 15 is a view for explaining a third rule of the tilt determination in accordance with the present embodiment.
Fig. 16 is a view for explaining a first rule of the tilt determination in accordance with the present embodiment.
Fig. 17 is a view for explaining a second rule of the tilt determination in accordance with the present embodiment.
Fig. 18 is a flow chart for showing an example of the preprocessing in accordance with the first rule in step S8 of Fig. 3.
Fig. 19 is a flow chart for showing an example of the preprocessing in accordance with the second rule in step S9 of Fig. 3.
Fig. 20 is a flow chart for showing an example of the preprocessing in accordance with the third rule in step S10 of Fig. 3.
Fig. 21 is a view for explaining the tilt of the sword 11 which is determined by the high speed processor 21 of Fig. 2.
Fig. 22 is an explanatory view for showing the animation of the sword trajectory object which is displayed in accordance with the direction in which the sword 11 of Fig. 1 is swung.
Fig. 23 is a view showing examples of the shield objects "A0" to "A4" which are displayed in correspondence with the tilts "a0" to "a4" of Fig. 21.
Fig. 24 is a view showing an example of the shield object "A1r" which is displayed on the television monitor 7 of Fig. 1.
Best Mode for Carrying Out the Invention
In what follows, several embodiments of the present invention will be explained in conjunction with the accompanying drawings .
Meanwhile, like references indicate the same or functionally similar elements throughout the respective drawings , and therefore redundant explanation is not repeated.
Fig. 1 is a block diagram showing the entire configuration of a game system in accordance with an embodiment of the present invention. As shown in Fig. 1, this game system comprises a game apparatus 1, an operation article 11 and a television monitor 7.
Meanwhile, the present embodiment is directed to an example of a game in which a player 17 operates an operation article 11 in order to cut down enemy objects which are displayed on the television monitor 7, and thereby the operation article 11 is referred to as the "sword 11" in the following explanation.
The sword 11 is formed in a columnar shape which is tapered down from the base end to the tip . Also, the sword 11 comprises a grip section 13 which is gripped by the player 17, and a blade portion on which a retroreflective sheet 15 is attached . The retroreflective sheet 15 is provided in order to generally cover the surface of the blade portion of the sword 11.
The game apparatus 1 is connected to a television monitor 7 by an AV cable 9. Furthermore, although not shown in the figure, the game apparatus 1 is supplied with a power supply voltage from an AC adapter or a battery.
The game apparatus 1 is provided with an infrared filter 5 which is located in the front side of the game apparatus 1 and serves to transmit only infrared light, and furthermore there are four infrared light emitting diodes 3 which are located around the infrared filter 5 and serve to emit infrared light. An image sensor 19 to be described below is located behind the infrared filter 5.
The four infrared light emitting diodes 3 intermittently emit infrared light . Then, the infrared light emitted from the infrared light emitting diodes 3 is reflected by the retroreflective sheet 15 attached to the sword 11, and input to the image sensor 19 located behind the infrared filter 5. An image of the sword 11 can be captured by the image sensor 19 in this way. While infrared light is intermittently emitted, the image sensor 19 performs the imaging process even in non-emission periods . The position, area, tilt and the like of the sword 11 can be detected in the game apparatus 1 by calculating the differential image between the image with infrared light and the image without infrared light when a player 17 swings the sword 11.
Fig . 2 is a schematic diagram showing the electric configuration of the game apparatus 1 of Fig . 1. As shown in Fig . 2 , the game apparatus 1 includes the image sensor 19, the infrared light emitting diodes 3, a high speed processor 21 , a ROM (read only memory) 23 and a bus 25.
The sword 11 is illuminated with the infrared light which is emitted from the infrared light emitting diodes 3 and thereby the retroreflective sheet 15 reflects the infrared light. The image sensor 19 receives the reflected light from this retroreflective sheet 15 for capturing an image, and outputs an image signal of the retroreflective sheet 15. This analog image signal from the image sensor 19 is converted into digital data by an A/D converter (not shown in the figure) implemented within the high speed processor 21. This process is performed also in the periods without infrared light. The high speed processor 21 lets the infrared light emitting diodes 3 intermittently flash for performing such stroboscopic imaging. Although not shown in the figure, the processor 21 includes various functional blocks such as a CPU (central processing unit), a graphics processor, a sound processor and a DMA controller, and in addition to this, includes the A/D converter for accepting analog signals and an input/output control circuit for receiving input signals from external electronic circuits and electronic elements and outputting output signals to them. The image sensor 19 and the infrared light emitting diodes 3 are controlled by the CPU through the input/output control circuit. The CPU runs a game program stored in the ROM 23, and performs the various types of arithmetic operations. Accordingly, the graphics processor and the sound processor read image data and sound data stored in the ROM 23 in accordance with the results of the operations performed by the CPU, generate a video signal and an audio signal, and output them through the AV cable 9.
Furthermore, the high speed processor 21 is provided with an internal memory, which is not shown in the figure but is for example a RAM (random access memory). The internal memory is used to provide a working area, a counter area, a register area, a temporary data area, a flag area and/or the like.
The high speed processor 21 processes the digital image signal input from the image sensor 19 through the A/D converter, detects the position, area, tilt and the like of the sword 11 , and generates a video signal and an audio signal by performing a graphics process , a sound process and other processes and computations . The video signal and the audio signal are supplied to the television monitor 7 through the AV cable 9 and thereby the television monitor 7 displays an image corresponding to the video signal while the speaker thereof (not shown in the figure) outputs sound corresponding to the audio signal .
Fig. 3 is a flow chart for showing an example of the overall process flow of the game apparatus 1 of Fig. 1. As shown in Fig. 3, the high speed processor 21 performs the initial settings of the system in step S1. In step S2, the high speed processor 21 performs the process of imaging the sword 11 by driving the infrared light emitting diodes 3.
Fig. 4 is a flow chart for showing an example of the imaging process in step S2 of Fig. 3. As shown in Fig. 4, the high speed processor 21 turns on the infrared light emitting diodes 3 in step S20. In step S21, the high speed processor 21 acquires, from the image sensor 19, image data with infrared light, and stores the image data in the internal memory. In this case, for example, a CMOS image sensor of 32 pixels x 32 pixels is used as the image sensor 19 of the present embodiment. Accordingly, pixel data of 32 pixels x 32 pixels is output as image data from the image sensor 19. This pixel data is converted into digital data by the A/D converter and stored in the internal memory as a two-dimensional array element "P1[X][Y]".
In step S22, the high speed processor 21 turns off the infrared light emitting diodes 3. In step S23, the high speed processor 21 acquires, from the image sensor 19, image data (pixel data of 32 pixels x 32 pixels) without infrared light, and stores the image data in the internal memory. In this case, this pixel data is stored in the internal memory as a two-dimensional array element "P2[X][Y]".
The stroboscope imaging is performed in this way. In the two- dimensional coordinate system for identifying the positions of the respective pixels constituting the image captured by the image sensor 19, it is assumed that the horizontal axis is X-axis and the vertical axis is Y-axis . Also, it is assumed that one pixel corresponds to one unit of the dimension in the coordinates . Since the image sensor 19 of 32 pixels x 32 pixels is used in the case of the present embodiment, X = 0 to 31 and Y = 0 to 31. Incidentally, the pixel data comprises a luminance value .
Returning to Fig. 3, in step S3, the high speed processor 21 calculates the differential data between the pixel data acquired when the infrared light emitting diodes 3 are turned on (i.e., the respective array elements "P1[X][Y]") and the pixel data acquired when the infrared light emitting diodes 3 are turned off (i.e., the corresponding array elements "P2[X][Y]"), and the differential data is assigned to an array element "Dif[X][Y]".
As thus described, by calculating the differential data (i.e., the differential image), it is possible to eliminate, as much as possible, noise of light other than the light reflected from the sword 11 (the retroreflective sheet 15), and to detect the sword 11 with a high degree of accuracy. The coordinate system in which the respective pixels of the differential image are located is the same coordinate system in which the respective pixels of the images captured by the image sensor 19 are located.
In this description, the array element "P1[X][Y]" (i.e., the pixel data with illumination), the array element "P2[X][Y]" (i.e., the pixel data without illumination) and the array element "Dif[X][Y]" (i.e., the differential data) are also referred to respectively as the pixel data "P1[X][Y]", the pixel data "P2[X][Y]", and the differential data "Dif[X][Y]".
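The differencing in step S3 can be sketched as follows. This is an illustrative Python sketch, not part of the disclosed embodiment; clamping negative differences to zero is an assumption, since the text does not specify how negative values are handled.

```python
# Sketch (assumption-laden): computing the differential image
# Dif[X][Y] = P1[X][Y] - P2[X][Y] for a 32 x 32 sensor, clamped at zero.
SIZE = 32

def differential_image(p1, p2):
    """Subtract the lights-off frame from the lights-on frame.

    Pixels lit only by ambient light cancel out; pixels lit by the
    retroreflective sheet keep a large positive luminance.
    """
    return [[max(0, p1[x][y] - p2[x][y]) for y in range(SIZE)]
            for x in range(SIZE)]

# Toy frames: uniform ambient light plus one bright pixel under IR light.
p1 = [[10] * SIZE for _ in range(SIZE)]
p2 = [[10] * SIZE for _ in range(SIZE)]
p1[5][7] = 200  # reflection from the sheet
dif = differential_image(p1, p2)
print(dif[5][7])  # 190
print(dif[0][0])  # 0
```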
In step S4, the high speed processor 21 calculates a threshold value "ThB" which is used to binarize each array element "Dif[X][Y]" obtained in step S3. More specifically, the high speed processor 21 extracts the element (the differential data) having the maximum luminance value from among all the array elements "Dif[X][Y]", multiplies the maximum luminance value by a predetermined value (for example, 0.6), and sets the current threshold value "ThB" to the result. As apparent from Fig. 3, the above calculation of the threshold value "ThB" is performed every time the display screen of the television monitor 7 is updated.
Fig. 5 is a schematic representation of a binarized image which is formed by the threshold value "ThB" calculated in step S4 of Fig. 3. As shown in Fig. 5, the binarized image 27 (32 pixels x 32 pixels) is obtained by binarizing all the array elements "Dif[X][Y]" on the basis of the threshold value "ThB". The binarized image 27 contains an image "IM" of the retroreflective sheet 15 attached to the sword 11. However, as apparent from the flow charts to be described below, the respective processes are performed while comparing each array element "Dif[X][Y]" with the threshold value "ThB", and thereby the binarized image 27 as illustrated in Fig. 5 is not actually generated in the processes. Incidentally, the coordinate system in which the respective pixels of the binarized image are located is the same coordinate system in which the respective pixels of the image captured by the image sensor 19 are located. The origin "O" of the coordinate system is as illustrated in the figure.
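The adaptive threshold of step S4 can be sketched as follows (illustrative Python; the 0.6 factor is the example value given in the text, and the toy 3 x 3 input is hypothetical):

```python
# Sketch: deriving the binarization threshold "ThB" as 0.6 times the
# maximum luminance in the differential image, as described for step S4.
def threshold_thb(dif, factor=0.6):
    peak = max(max(column) for column in dif)
    return peak * factor

# Toy 3 x 3 differential image with peak luminance 100.
dif = [[0, 12, 40], [5, 100, 3], [0, 0, 7]]
thb = threshold_thb(dif)
print(thb)  # 60.0
```

Because "ThB" is recomputed on every display update, the binarization adapts to the overall brightness of each captured frame.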
Returning to Fig. 3, in advance of explaining the details of step S5, the general outline of the process will be explained. In step S5, the coordinates (XU, YU) of the upper edge point, the coordinates (XB, YB) of the lower edge point, the coordinates (XL, YL) of the left edge point and the coordinates (XR, YR) of the right edge point of the image "IM" of the retroreflective sheet 15 are obtained. The upper edge point, the lower edge point, the left edge point and the right edge point are each also referred to as an edge point.
The upper edge point represents a single pixel or a plurality of pixels having the minimum Y-coordinate of the image "IM". The lower edge point represents a single pixel or a plurality of pixels having the maximum Y-coordinate of the image "IM". The left edge point represents a single pixel or a plurality of pixels having the minimum X-coordinate of the image "IM". The right edge point represents a single pixel or a plurality of pixels having the maximum X-coordinate of the image "IM".
In the case where there is only a single point which corresponds to the upper edge point, the coordinates (XU, YU) of the upper edge point are set to the coordinates of this single point, and in the case where there are a plurality of points which correspond to the upper edge point, the coordinates (XU, YU) of the upper edge point are set to the arithmetic mean values of the respective coordinates of the plurality of points. The same applies to the lower edge point (XB, YB), the left edge point (XL, YL) and the right edge point (XR, YR): each is set to the coordinates of the single corresponding point, or to the arithmetic mean values of the respective coordinates of the plurality of corresponding points. These points will be explained with reference to the drawings.
Fig. 6 is a view for explaining the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3. In Fig. 6, each rectangle corresponds to one pixel in the image "IM" of the retroreflective sheet 15. For example, as shown in Fig. 6A, the high speed processor 21 sets the coordinates (XU, YU) of the upper edge point to the coordinates of the pixel indicated by an arrow "UA" in the image "IM", the coordinates (XB, YB) of the lower edge point to the coordinates of the pixel indicated by an arrow "BA", the coordinates (XL, YL) of the left edge point to the coordinates of the pixel indicated by an arrow "LA", and the coordinates (XR, YR) of the right edge point to the coordinates of the pixel indicated by an arrow "RA".
Incidentally, as shown in Fig. 6B for example, in the case where there are a plurality of pixels which correspond to the upper edge point of the image "IM", the high speed processor 21 sets the coordinates (XU, YU) of the upper edge point to the coordinates of the point indicated by an arrow "UA" corresponding to the arithmetic mean values of the respective coordinates of the plurality of pixels, and in the case where there are a plurality of pixels which correspond to the left edge point, the high speed processor 21 sets the coordinates (XL, YL) of the left edge point to the coordinates of the point indicated by an arrow "LA" corresponding to the arithmetic mean values of the respective coordinates of the plurality of pixels.
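The edge-point definitions above, including the arithmetic-mean handling of ties illustrated in Figs. 6A and 6B, can be sketched as follows (illustrative Python, not part of the disclosed embodiment; the sample pixel set is hypothetical):

```python
# Sketch: edge points of a set of lit pixels. When several pixels share the
# extreme coordinate, the tied coordinates are averaged (arithmetic mean).
# Pixels are (X, Y) pairs with Y increasing downward, as in the text.
def edge_points(pixels):
    min_x = min(x for x, _ in pixels)
    max_x = max(x for x, _ in pixels)
    min_y = min(y for _, y in pixels)
    max_y = max(y for _, y in pixels)

    def mean(values):
        return sum(values) / len(values)

    left  = (min_x, mean([y for x, y in pixels if x == min_x]))
    right = (max_x, mean([y for x, y in pixels if x == max_x]))
    upper = (mean([x for x, y in pixels if y == min_y]), min_y)
    lower = (mean([x for x, y in pixels if y == max_y]), max_y)
    return upper, lower, left, right

# Two pixels tie for the upper edge (Y = 1) at X = 3 and X = 4,
# so the upper edge point is their mean, (3.5, 1), as in Fig. 6B.
pixels = [(3, 1), (4, 1), (2, 2), (5, 2), (3, 3)]
up, bp, lp, rp = edge_points(pixels)
print(up)  # (3.5, 1)
print(lp)  # (2, 2.0)
```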
Fig. 7 is a flow chart showing an example of the process of calculating the coordinates of the upper, lower, left and right edge points in step S5 of Fig. 3. As shown in Fig. 7, the high speed processor 21 assigns "0" to the variables "X", "Y", "maxX", "maxY", "XL", "YL", "XR", "YR", "XU", "YU", "XB", "YB", "Ca", "Cl", "Cr", "Cu" and "Cb" respectively. Also, the high speed processor 21 assigns "31" to the variables "minX" and "minY".
In step S31, the high speed processor 21 compares the array element "Dif[X][Y]" with the predetermined threshold value "ThB". In step S32, when the array element "Dif[X][Y]" is larger than the predetermined threshold value "ThB", the high speed processor 21 proceeds to step S33; conversely, when it is no larger than the predetermined threshold value "ThB", the high speed processor 21 proceeds to step S38.
That is to say, the process in steps S31 and S32 is the process of detecting whether or not the retroreflective sheet 15 is captured. When the retroreflective sheet 15 is captured, the pixels corresponding to the retroreflective sheet 15 have large luminance values in the differential image based on the array elements "Dif[X][Y]". Accordingly, by determining whether or not a luminance value is greater than the threshold value "ThB", it is possible to recognize each pixel having a luminance value larger than the threshold value "ThB" as part of the retroreflective sheet 15 as captured.
In step S33, the high speed processor 21 increments the counter value "Ca" by one in order to count the array elements "Dif[X][Y]" having luminance values larger than the threshold value "ThB". The counter value "Ca" at the time when it is determined that X = 32 in step S42 to be described below, i.e., at the time when the processes in steps S31 to S42 are finished for all the pixels of the differential image, is the number of pixels having luminance values larger than the threshold value "ThB", and corresponds to the area of the retroreflective sheet 15 in the differential image.
In step S34, the high speed processor 21 calculates the coordinates (XL, YL) of the left edge point of the image "IM" of the retroreflective sheet 15.
Fig. 8 is a flow chart showing an example of the process of calculating the coordinates of the left edge point in step S34 of Fig. 7. As shown in Fig. 8, the high speed processor 21 compares the minimum X-coordinate "minX" with the coordinate "X" in step S50. If "X" is less than or equal to "minX" in step S51, the high speed processor 21 proceeds to step S52; otherwise it proceeds to step S35 of Fig. 7.
In step S52, the high speed processor 21 assigns "X" to "minX" to update "minX". In step S53, the new value of "minX" is compared with the previous value of "minX" (i.e., "minX" in advance of performing the process in step S52) in order to determine whether or not they are equal to each other; if they are equal (minX = X in step S51) the process proceeds to step S56, otherwise (minX > X in step S51) it proceeds to step S54. In step S54, the high speed processor 21 assigns "0" to the variable "YL", and in step S55 the high speed processor 21 assigns "0" to the counter value "Cl" indicative of the number of pixels having the same X-coordinate equal to "minX", and then proceeds to step S56.
In step S56, the high speed processor 21 increments the counter value "Cl" by one. In step S57, the high speed processor 21 assigns "minX" to the variable "XL" to obtain a new value of the variable "XL", and adds "Y" to the variable "YL" to obtain a new value of the variable "YL". The value of the variable "YL" is the total value of the Y-coordinates of the pixels having the same X-coordinate equal to "minX".
Returning to Fig. 7, in step S35, the high speed processor 21 calculates the coordinates (XR, YR) of the right edge point of the image "IM" of the retroreflective sheet 15.
Fig. 9 is a flow chart showing an example of the process of calculating the coordinates of the right edge point in step S35 of Fig. 7. As shown in Fig. 9, the high speed processor 21 compares the maximum X-coordinate "maxX" with the coordinate "X" in step S60. If "X" is larger than or equal to "maxX" in step S61, the high speed processor 21 proceeds to step S62; otherwise it proceeds to step S36 of Fig. 7.
In step S62, the high speed processor 21 assigns "X" to "maxX" in order to obtain a new value of "maxX". In step S63, the new value of "maxX" is compared with the previous value of "maxX" (i.e., "maxX" in advance of performing the process in step S62) in order to determine whether or not they are equal to each other; if they are equal (maxX = X in step S61) the process proceeds to step S66, otherwise (maxX < X in step S61) it proceeds to step S64.
In step S64, the high speed processor 21 assigns "0" to the variable "YR", and in step S65 the high speed processor 21 assigns "0" to the counter value "Cr" indicative of the number of pixels having the same X-coordinate equal to "maxX", and then proceeds to step S66. In step S66, the high speed processor 21 increments the counter value "Cr" by one. In step S67, the high speed processor 21 assigns "maxX" to the variable "XR" to obtain a new value of the variable "XR", and adds "Y" to the variable "YR" to obtain a new value of the variable "YR". The value of the variable "YR" is the total value of the Y-coordinates of the pixels having the same X-coordinate equal to "maxX".
Returning to Fig. 7, in step S36, the high speed processor 21 calculates the coordinates (XU, YU) of the upper edge point of the image "IM" of the retroreflective sheet 15. Fig. 10 is a flow chart showing an example of the process of calculating the coordinates of the upper edge point in step S36 of Fig. 7. As shown in Fig. 10, the high speed processor 21 compares the minimum Y-coordinate "minY" with the coordinate "Y" in step S70. If "Y" is smaller than or equal to "minY" in step S71, the high speed processor 21 proceeds to step S72; otherwise it proceeds to step S37 of Fig. 7.
In step S72, the high speed processor 21 assigns "Y" to "minY" in order to obtain a new value of "minY". In step S73, the new value of "minY" is compared with the previous value of "minY" (i.e., "minY" in advance of performing the process in step S72) in order to determine whether or not they are equal to each other; if they are equal (minY = Y in step S71) the process proceeds to step S76, otherwise (minY > Y in step S71) it proceeds to step S74.
In step S74, the high speed processor 21 assigns "0" to the variable "XU", and in step S75 the high speed processor 21 assigns "0" to the counter value "Cu" indicative of the number of pixels having the same Y-coordinate equal to "minY", and then proceeds to step S76.
In step S76, the high speed processor 21 increments the counter value "Cu" by one. In step S77, the high speed processor 21 adds "X" to the variable "XU" to obtain a new value of the variable "XU", and assigns "minY" to the variable "YU" to obtain a new value of the variable "YU". The value of the variable "XU" is the total value of the X-coordinates of the pixels having the same Y-coordinate equal to "minY". Returning to Fig. 7, in step S37, the high speed processor 21 calculates the coordinates (XB, YB) of the lower edge point of the image "IM" of the retroreflective sheet 15.
Fig. 11 is a flow chart showing an example of the process of calculating the coordinates of the lower edge point in step S37 of Fig. 7. As shown in Fig. 11, in step S80, the high speed processor 21 compares the maximum Y-coordinate "maxY" with the coordinate "Y". If "Y" is larger than or equal to "maxY" in step S81, the high speed processor 21 proceeds to step S82; otherwise it proceeds to step S38 of Fig. 7. In step S82, the high speed processor 21 assigns "Y" to "maxY" in order to obtain a new value of "maxY". In step S83, the new value of "maxY" is compared with the previous value of "maxY" (i.e., "maxY" in advance of performing the process in step S82) in order to determine whether or not they are equal to each other; if they are equal (maxY = Y in step S81) the process proceeds to step S86, otherwise (maxY < Y in step S81) it proceeds to step S84.
In step S84, the high speed processor 21 assigns "0" to the variable "XB", and in step S85 the high speed processor 21 assigns "0" to the counter value "Cb" indicative of the number of pixels having the same Y-coordinate equal to "maxY", and then proceeds to step S86.
In step S86, the high speed processor 21 increments the counter value "Cb" by one. In step S87, the high speed processor 21 adds "X" to the variable "XB" to obtain a new value of the variable "XB", and assigns "maxY" to the variable "YB" to obtain a new value of the variable "YB". The value of the variable "XB" is the total value of the X-coordinates of the pixels having the same Y-coordinate equal to "maxY".
Returning to Fig. 7, in step S38, the high speed processor 21 increments the variable "Y" indicative of the Y-coordinate of the pixels being processed in the differential image. In step S39, the high speed processor 21 determines whether or not the value of the variable "Y" has reached "32"; if "YES" the process proceeds to step S40, conversely if "NO" the process proceeds to step S31. Y = 32 in step S39 means that the process in steps S31 to S37 is finished for all the pixels located on one column of the differential image.
The high speed processor 21 assigns "0" to the variable "Y" in step S40, and increments the variable "X" indicative of the X-coordinate of the pixels in the differential image in step S41. After these processes, the process in steps S31 to S37 is performed for the pixels located on the next column.
In step S42, the high speed processor 21 determines whether or not the value of the variable "X" has reached "32"; if "YES" the process proceeds to step S43, conversely if "NO" the process proceeds to step S31. X = 32 in step S42 means that the process in steps S31 to S37 is finished for all the pixels of the differential image (32 x 32 pixels).
In step S43, the high speed processor 21 assigns "YL/Cl" to the variable "YL" in order to obtain a new value of the variable "YL" (equation (1)), "YR/Cr" to the variable "YR" in order to obtain a new value of the variable "YR" (equation (2)), "XU/Cu" to the variable "XU" in order to obtain a new value of the variable "XU" (equation (3)), and "XB/Cb" to the variable "XB" in order to obtain a new value of the variable "XB" (equation (4)).
YL = YL/Cl ... (1)
YR = YR/Cr ... (2)
XU = XU/Cu ... (3)
XB = XB/Cb ... (4)
As has been discussed above, the high speed processor 21 acquires the coordinates (XL, YL) of the left edge point, the coordinates (XR, YR) of the right edge point, the coordinates (XU, YU) of the upper edge point, and the coordinates (XB, YB) of the lower edge point.
In step S44, the high speed processor 21 obtains the coordinates (Xc, Yc) of the center point among the upper, lower, left and right edge points of the image "IM" (hereinafter referred to as the "representative point"), and returns to the main routine.
Xc = (XL + XR)/2 ... (5)
Yc = (YU + YB)/2 ... (6)
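The single-pass scan of steps S31 to S43 can be sketched as follows, using the same variable names as the flow charts. This is an illustrative Python sketch, not the disclosed implementation; only the left edge point is shown, the right, upper and lower edge points following the same pattern, and the toy input is hypothetical.

```python
# Sketch: one pass over the 32 x 32 differential image, accumulating the
# area counter "Ca" and the left edge point (XL, YL) with running counters,
# as in steps S31-S43. Assumes at least one pixel exceeds "ThB".
SIZE = 32

def left_edge_point(dif, thb):
    ca = 0                        # area counter (step S33)
    min_x, cl, yl = SIZE - 1, 0, 0
    for x in range(SIZE):         # column-by-column scan (steps S38-S42)
        for y in range(SIZE):
            if dif[x][y] <= thb:  # steps S31-S32
                continue
            ca += 1               # step S33
            if x < min_x:         # new minimum: restart the tally (S54-S55)
                min_x, cl, yl = x, 0, 0
            if x == min_x:        # count and sum Y-coordinates (S56-S57)
                cl += 1
                yl += y
    xl = min_x
    yl = yl / cl                  # step S43, equation (1)
    return ca, (xl, yl)

dif = [[0] * SIZE for _ in range(SIZE)]
dif[4][10] = 90; dif[4][12] = 90; dif[5][11] = 90  # three lit pixels
ca, (xl, yl) = left_edge_point(dif, thb=50)
print(ca)      # 3
print(xl, yl)  # 4 11.0
```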
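The representative point of step S44, given by equations (5) and (6), can be sketched as follows (illustrative Python with hypothetical coordinate values):

```python
# Sketch: the representative point (Xc, Yc) from equations (5) and (6),
# i.e., the midpoints of the horizontal and vertical extents of "IM".
def representative_point(xl, xr, yu, yb):
    return (xl + xr) / 2, (yu + yb) / 2

xc, yc = representative_point(2, 10, 4, 12)
print(xc, yc)  # 6.0 8.0
```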
Returning to Fig. 3, in step S6, the high speed processor 21 determines whether or not the condition for displaying a sword trajectory object on the television monitor 7 is satisfied. The sword trajectory object is a belt-like object (refer to Fig. 22 to be described below) which is used to represent the swing trajectory of the sword 11 (slash mark) in the real space on the television monitor 7. The above condition for displaying the sword trajectory object on the television monitor 7 is that the sword 11 is swung at a speed higher than a predetermined speed.
Fig. 12 is a flow chart showing an example of the process of determining whether or not the sword trajectory object is to be displayed in step S6 of Fig. 3. As shown in Fig. 12, in step S90, the high speed processor 21 checks a sword flag indicative of whether or not the condition for displaying the sword trajectory object on the television monitor 7 is satisfied; if the sword flag is turned on (the condition is satisfied) the process proceeds to step S91, conversely if turned off the process proceeds to step S98.
The high speed processor 21 checks in step S98 whether or not the current and previous representative points (Xc, Yc) are present. If both representative points are present, a velocity vector can be calculated, so the process proceeds to step S99; otherwise the process proceeds to step S103, in which the sword flag is turned off, and returns to the main routine.
In step S99, the velocity vector "v" (the X-coordinate of the current representative point minus the X-coordinate of the previous representative point, the Y-coordinate of the current representative point minus the Y-coordinate of the previous representative point) is calculated by setting the end point thereof to the representative point calculated in step S44 in the current cycle and the start point thereof to the representative point calculated in step S44 in the previous cycle. In step S100, the high speed processor 21 calculates the absolute value |v| of the velocity vector "v" of the sword 11, i.e., the speed |v| of the sword 11.
The high speed processor 21 determines whether or not the speed |v| of the sword 11 exceeds the threshold value "ThV" (the condition for displaying the sword trajectory object on the television monitor 7) in step S101; if it exceeds the threshold value the process proceeds to step S102, otherwise it proceeds to step S103, in which the sword flag is turned off, and returns to the main routine. On the other hand, in step S102, the high speed processor 21 turns on the sword flag, and returns to the main routine. However, the sword trajectory object is not necessarily displayed in the next video frame just after the sword flag is turned on, but the process in steps S91 to S97 is performed in advance. If the sword flag is turned on, the high speed processor 21 proceeds from step S90 to step S91 to check whether or not the current representative point is present; if it is present the process proceeds to step S92, otherwise it proceeds to step S94.
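The velocity vector of step S99 and the speed test of steps S100 and S101 can be sketched as follows (illustrative Python; the threshold value passed as "thv" and the sample points are hypothetical):

```python
import math

# Sketch: velocity vector "v" from the previous and current representative
# points, its magnitude |v|, and the comparison against "ThV".
def sword_swung(prev_pt, curr_pt, thv):
    vx = curr_pt[0] - prev_pt[0]
    vy = curr_pt[1] - prev_pt[1]
    speed = math.hypot(vx, vy)   # |v| = sqrt(vx^2 + vy^2)
    return (vx, vy), speed, speed > thv

v, speed, swung = sword_swung((3, 4), (9, 12), thv=5.0)
print(v)      # (6, 8)
print(speed)  # 10.0
print(swung)  # True
```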
In step S92, the velocity vector "v" is calculated anew by setting the end point thereof to the representative point calculated in step S44 in the current cycle and the start point thereof to the start point of the velocity vector "v" which was calculated in step S99 in the previous cycle. In step S93, the high speed processor 21 determines whether or not the video frame being displayed at present on the television monitor 7 is the fourth video frame as counted from the video frame displayed when the sword flag is turned on; if it is the fourth video frame the process proceeds to step S95, otherwise it returns to the main routine. In this case, the video frame displayed when the sword flag is turned on is counted as the zeroth video frame.
On the other hand, in step S94, the high speed processor 21 determines whether or not the video frame being displayed at present on the television monitor 7 is the first video frame as counted from the video frame displayed when the sword flag is turned on; if it is the first video frame the process returns to the main routine, otherwise it proceeds to step S95.
In step S95, the high speed processor 21 turns on a trajectory flag which is used to indicate that the sword trajectory object corresponding to the latest velocity vector "v" is to be displayed in the video frame next to the video frame being currently displayed on the television monitor 7.
In this case, the velocity vector "v" is calculated from the representative points of six video frames at a maximum by performing the process in step S93 before step S95. This is because the velocity vector "v" has already been calculated from the representative points of two video frames when the sword flag is turned on (refer to step S99), and the video frame when "YES" is first determined in step S90 is the first video frame as counted from the video frame displayed when the sword flag is turned on.
On the other hand, by performing the process in step S94 before step S95, even if the current video frame is not the fourth video frame (refer to step S93) but is a video frame after the first video frame as counted from the video frame displayed when the sword flag is turned on (i.e., "NO" in step S94), the trajectory flag is turned on in the case where the current representative point is not present. In the first video frame as counted from the video frame displayed when the sword flag is turned on (i.e., "YES" in step S94), the trajectory flag is not turned on in the case where the current representative point is not present, and the process returns to the main routine.
By the way, in step S96, the high speed processor 21 turns off the sword flag. In step S97, the high speed processor 21 classifies the orientation of the velocity vector "v" into one of eight orientations "d0" to "d7", sets a sword orientation flag to a value in accordance with the classification result, and the process returns to the main routine. The classification of the orientation of the velocity vector "v" in step S97 will be explained in detail. Fig. 13 is a view for explaining the orientation of the sword 11 which is determined by the high speed processor 21 of Fig. 2. In Fig. 13, the orientations "d0" and "d4" are aligned with the X-axis of the differential image, and the orientations "d2" and "d6" are aligned with the Y-axis of the differential image. As shown in Fig. 13, 360 degrees are equally divided by eight to define eight angular ranges of 45 degrees, such that each angular range is represented by one of the orientations "d0" to "d7". Accordingly, the sword orientation flag is set to the value corresponding to the angular range in which the orientation of the velocity vector "v" of the sword 11 is located. For example, if the orientation of the velocity vector "v" is located in the angular range corresponding to the orientation "d5", the sword orientation flag is set to the value corresponding to the orientation "d5".
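The eight-way classification of step S97 can be sketched as follows. This is an illustrative Python sketch; the text only fixes "d0"/"d4" to the X-axis and "d2"/"d6" to the Y-axis, so the assumption here is that "d0" points along +X and the indices advance in order of increasing angle, with each 45-degree sector centered on its orientation.

```python
import math

# Sketch (assumed index order): classify the velocity vector into one of
# eight 45-degree angular ranges "d0" to "d7".
def classify_orientation(vx, vy):
    angle = math.degrees(math.atan2(vy, vx)) % 360.0
    # Shift by half a sector so each range is centered on its orientation.
    return int(((angle + 22.5) % 360.0) // 45.0)

print(classify_orientation(1, 0))   # 0  (along the X-axis: "d0")
print(classify_orientation(0, 1))   # 2  (along the Y-axis: "d2")
print(classify_orientation(-1, 0))  # 4  (opposite X direction: "d4")
print(classify_orientation(1, 1))   # 1  (diagonal sector: "d1")
```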
Returning to Fig. 3, in step S7, the high speed processor 21 determines whether or not a condition for displaying a belt-like shield object on the television monitor 7 is satisfied. The above condition for displaying the shield object on the television monitor 7 is that the representative point of the sword 11 stays in the same area for a predetermined period after the area of the sword 11 in the differential image exceeds a predetermined value.
Fig. 14 is a flow chart showing an example of the process of determining whether or not the shield object is to be displayed in step S7 of Fig. 3. As shown in Fig. 14, in step S110, the high speed processor 21 compares the threshold value "ThA" with the counter value "Ca" corresponding to the area of the retroreflective sheet 15 in the differential image. When the counter value "Ca" (i.e., the area of the retroreflective sheet 15 in the differential image) is larger than the threshold value "ThA" in step S111, the high speed processor 21 proceeds to step S114; otherwise it proceeds to step S112. In step S114, the high speed processor 21 checks a shield flag indicative of whether or not the condition for displaying the shield object on the television monitor 7 is satisfied; if the shield flag is turned off the process proceeds to step S115, conversely if turned on the process returns to the main routine. In step S115, the high speed processor 21 checks a counter value "Cs" indicative of the period, in the number of video frames, in which the representative point of the sword 11 stays within a rectangular area "Ar" to be described below; if the counter value "Cs" is "0" the process proceeds to step S116, otherwise it proceeds to step S118. In step S116, the high speed processor 21 sets the rectangular area "Ar" having a center positioned at the representative point (Xc, Yc) of the sword 11. This rectangular area "Ar" is updated every time the counter value "Cs" becomes "0" (steps S115 and S116). In step S117, the high speed processor 21 increments the counter value "Cs" by one, and the process returns to the main routine.
On the other hand, in step S118, the high speed processor 21 determines whether or not the representative point is located in the rectangular area "Ar"; if it is located therein the process proceeds to step S119, otherwise it proceeds to step S123. In step S119, the high speed processor 21 increments the counter value "Cs" by one. In step S120, the high speed processor 21 determines whether or not the counter value "Cs" is equal to a predetermined value (for example, "3"); if it is equal the process proceeds to step S121, otherwise the process returns to the main routine.
The high speed processor 21 turns on the shield flag in step S121, assigns "0" to the counter value "Cs" in step S122, and the process returns to the main routine.
On the other hand, in step S112, since the counter value "Ca" indicative of the area does not exceed the threshold value "ThA", "0" is assigned to the counter value "Cs", and in step S113 the shield flag is turned off and the process returns to the main routine.
Also, in step S123, since the representative point does not stay in the rectangular area "Ar", "0" is assigned to the counter value "Cs", and in step S124 the shield flag is turned off and the process returns to the main routine.
In this case, once the shield flag is turned on (in step S121), the shield flag is maintained in the "ON" state in step S114 until the area becomes smaller than the threshold value "ThA", irrespective of the position of the representative point.
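The shield condition described above can be sketched as a small per-frame state machine (illustrative Python, not the disclosed implementation; the threshold "ThA", the dwell period of three frames, and the half-size of the rectangular area "Ar" are hypothetical values chosen for the example):

```python
# Sketch: the shield flag turns on when the lit area "Ca" exceeds "ThA"
# and the representative point stays inside the current rectangular area
# "Ar" for a predetermined number of frames (steps S110-S124).
class ShieldDetector:
    def __init__(self, tha=20, frames=3, half=2):
        self.tha, self.frames, self.half = tha, frames, half
        self.cs = 0          # dwell counter "Cs"
        self.ar = None       # center of the rectangular area "Ar"
        self.flag = False    # shield flag

    def update(self, ca, pt):
        if ca <= self.tha:           # area too small: reset (S112-S113)
            self.cs, self.flag = 0, False
            return self.flag
        if self.flag:                # stays on while the area is large (S114)
            return self.flag
        if self.cs == 0:             # anchor a new area "Ar" (S116-S117)
            self.ar, self.cs = pt, 1
            return self.flag
        inside = (abs(pt[0] - self.ar[0]) <= self.half and
                  abs(pt[1] - self.ar[1]) <= self.half)
        if not inside:               # left the area: reset (S123-S124)
            self.cs, self.flag = 0, False
            return self.flag
        self.cs += 1                 # step S119
        if self.cs >= self.frames:   # held still long enough (S120-S122)
            self.flag, self.cs = True, 0
        return self.flag

d = ShieldDetector()
print(d.update(30, (10, 10)))  # False (area "Ar" anchored)
print(d.update(30, (10, 11)))  # False (counting)
print(d.update(30, (11, 10)))  # True  (held for 3 frames)
print(d.update(5, (11, 10)))   # False (area dropped below ThA)
```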
Returning to Fig. 3, preprocessing for determining the tilt of the sword 11 is performed in steps S8 to S10. At first, the third rule used in step S10 will be explained. Namely, it is assumed that the upper edge point, the lower edge point, the left edge point and the right edge point are the vertices of a quadrilateral. Among the four sides of the quadrilateral, if the shortest side and the next shortest side share one of the four edge points, the coordinates for determining the tilt of the sword 11 are set to the coordinates of the shared edge point and the coordinates of the edge point opposed to the shared edge point. On the other hand, if the shortest side and the next shortest side share none of the four edge points, the coordinates for determining the tilt of the sword 11 are set to the coordinates of the center point of the shortest side and the coordinates of the center point of the next shortest side. This is the third rule. The third rule as described above is explained with an example.
Fig. 15 is a view for explaining the third rule of the tilt determination in accordance with the present embodiment. In the case of the example shown in Fig. 15A, the upper edge point "up", the lower edge point "bp", the left edge point "lp" and the right edge point "rp" are the vertices of a quadrilateral. Among the four sides "s1" to "s4" of the quadrilateral, the shortest side "s1" and the next shortest side "s2" share the edge point "lp", and thereby the coordinates for determining the tilt of the sword 11 are set to the coordinates of the shared edge point "lp" and the coordinates of the edge point "rp" opposed to the shared edge point "lp". Accordingly, in step S11 of Fig. 3 to be described below, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through these edge points "lp" and "rp". In the case of the example shown in Fig. 15B, the shortest side "s1" and the next shortest side "s3" share none of the edge points "up", "bp", "lp" and "rp", and thereby the coordinates for determining the tilt of the sword 11 are set to the coordinates of the center point "p1" of the shortest side "s1" and the coordinates of the center point "p2" of the next shortest side "s3". Accordingly, in step S11 of Fig. 3 to be described below, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through these center points "p1" and "p2".
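The third rule can be sketched as follows (illustrative Python, not the disclosed implementation; the vertex ordering up, rp, bp, lp around the quadrilateral and the sample coordinates are assumptions made for the example):

```python
import math

# Sketch of the third rule: treat the four edge points as vertices of a
# quadrilateral, find the shortest and next shortest sides, and pick the
# two points that define the tilt line "SL".
def tilt_points_third_rule(up, rp, bp, lp):
    verts = [up, rp, bp, lp]            # assumed order around the hull
    sides = []                          # (length, index of first vertex)
    for i in range(4):
        a, b = verts[i], verts[(i + 1) % 4]
        sides.append((math.dist(a, b), i))
    sides.sort()
    (_, i1), (_, i2) = sides[0], sides[1]
    shared = {i1, (i1 + 1) % 4} & {i2, (i2 + 1) % 4}
    if shared:  # Fig. 15A: shared vertex and the vertex opposed to it
        s = shared.pop()
        return verts[s], verts[(s + 2) % 4]
    # Fig. 15B: the midpoints of the two shortest sides
    def mid(i):
        a, b = verts[i], verts[(i + 1) % 4]
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return mid(i1), mid(i2)

# Here the two shortest sides (bp-lp and lp-up) share the left edge point
# "lp", so the tilt line runs from "lp" to the opposite vertex "rp".
p, q = tilt_points_third_rule(up=(2, 4), rp=(10, 5), bp=(3, 7), lp=(0, 5))
print(p, q)  # (0, 5) (10, 5)
```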
However, since applying only the third rule in all cases has shortcomings, the first rule and the second rule are provided. Namely, it is first determined whether or not the first rule is applicable, and if so the tilt determination process is performed on the basis of the first rule; if the first rule is not applicable, it is next determined whether or not the second rule is applicable, and if so the tilt determination process is performed on the basis of the second rule; if the second rule is also not applicable, the tilt determination process is performed on the basis of the third rule.
Fig. 16 is a view for explaining the first rule of the tilt determination in accordance with the present embodiment. As illustrated in Fig. 16, the first rule is applicable in the case where the X-coordinate of the upper edge point "up" is equal to the X-coordinate of the lower edge point "bp" and the Y-coordinate of the left edge point "lp" is equal to the Y-coordinate of the right edge point "rp". Then, the coordinates for determining the tilt of the sword 11 are set in accordance with whether or not the ratio of the length "H" of the straight line "SH" connecting the left edge point "lp" and the right edge point "rp" to the length "V" of the straight line "SV" connecting the upper edge point "up" and the lower edge point "bp", i.e., the ratio "H/V", is larger than "1". Namely, if "H/V" is larger than "1", the coordinates for determining the tilt of the sword 11 are set to the coordinates of the left edge point "lp" and the right edge point "rp". Accordingly, in such a case, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the left edge point "lp" and the right edge point "rp" in step S11 of Fig. 3 to be described below. On the other hand, if "H/V" is no larger than "1" (as illustrated in Fig. 16), the coordinates for determining the tilt of the sword 11 are set to the coordinates of the upper edge point "up" and the lower edge point "bp". Accordingly, in such a case, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the upper edge point "up" and the lower edge point "bp" in step S11 of Fig. 3.
If the third rule were applied in the case where the edge points "up", "bp", "lp" and "rp" had been detected as illustrated in Fig. 16, the shortest side "s1" and the next shortest side "s4" would share the edge point "rp", so that the tilt of the sword 11 would be recognized as the tilt of the straight line "SL" (the straight line "SH" shown in Fig. 16) passing through the edge point "rp" and the edge point "lp" opposed to it. In such a case, as is apparent from the figure, there is a difference from the actual tilt of the sword 11.
Fig. 17 is a view for explaining the second rule of the tilt determination in accordance with the present embodiment. As illustrated in Fig. 17A, the second rule is applicable in the case where the ratio of the absolute value "H" of the difference between the X-coordinates of the left edge point "lp" and the right edge point "rp" to the absolute value "V" of the difference between the Y-coordinates of the upper edge point "up" and the lower edge point "bp", i.e., "H/V", is larger than a predetermined value "HV1" (for example, "3/2") or smaller than a predetermined value "HV2" (for example, "2/3"). In this case, HV1 = 1/HV2.
If the ratio "H/V" is larger than the predetermined value "HV1", the coordinates for determining the tilt of the sword 11 are set to the coordinates of the left edge point "lp" and the right edge point "rp". Accordingly, in step S11 of Fig. 3 to be described below, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the left edge point "lp" and the right edge point "rp". On the other hand, if the ratio "H/V" is smaller than the predetermined value "HV2", the coordinates for determining the tilt of the sword 11 are set to the coordinates of the upper edge point "up" and the lower edge point "bp". Accordingly, in step S11 of Fig. 3, the tilt of the sword 11 is determined on the basis of the tilt of the straight line "SL" passing through the upper edge point "up" and the lower edge point "bp".
If the third rule were applied in the case where the edge points "up", "bp", "lp" and "rp" are detected as illustrated in Fig. 17A, then, as shown in Fig. 17B, the shortest side "s1" and the next shortest side "s3" would share none of the edge points "up", "bp", "lp" and "rp", so that the tilt of the sword 11 would be recognized as the tilt of the straight line "SL" passing through the center point of the shortest side "s1" and the center point of the next shortest side "s3". In such a case, as is apparent from Fig. 17A, there is a difference from the actual tilt of the sword 11. Incidentally, it is obvious that the first rule cannot be applied to the example of Fig. 17.
Fig. 18 is a flow chart showing an example of the preprocessing in accordance with the first rule in step S8 of Fig. 3. As shown in Fig. 18, the high speed processor 21 checks the state of the shield flag in step S130; if it is turned on the process proceeds to step S131, and conversely if it is turned off the process proceeds to step S11 of Fig. 3. In step S131, the high speed processor 21 determines whether or not the X-coordinate "XU" of the upper edge point is equal to the X-coordinate "XB" of the lower edge point; if they are not equal the process returns to the main routine, and if they are equal the process proceeds to step S132. In step S132, the high speed processor 21 determines whether or not the Y-coordinate "YL" of the left edge point is equal to the Y-coordinate "YR" of the right edge point; if they are not equal the process returns to the main routine, and if they are equal the process proceeds to step S133. As has been discussed above, whether or not the first rule is applicable is determined in steps S131 and S132.
In step S133, the high speed processor 21 obtains the distance between the X-coordinate "XL" of the left edge point and the X-coordinate "XR" of the right edge point (i.e., the width "H"), and the distance between the Y-coordinate "YU" of the upper edge point and the Y-coordinate "YB" of the lower edge point (i.e., the height "V"), in accordance with the following equations:

H = XR - XL ... (7)

V = YB - YU ... (8)

In step S134, the high speed processor 21 calculates "H/V". In step S135, the high speed processor 21 determines whether or not "H/V" is larger than "1"; if it is larger the process proceeds to step S136, and otherwise it proceeds to step S137.
In step S136, the high speed processor 21 sets the coordinates of the left edge point and the right edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3. On the other hand, in step S137, the high speed processor 21 sets the coordinates of the upper edge point and the lower edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3.
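The first-rule preprocessing of Fig. 18 can be sketched as follows. This is an illustrative sketch, not the actual program of the high speed processor 21; the function name and the (x, y) tuple representation of the edge points are assumptions. Since "V" is positive under the step S133 definitions, the comparison "H/V > 1" in step S135 is equivalent to "H > V", which avoids a division:

```python
def first_rule(up, bp, lp, rp):
    """Preprocessing by the first rule (steps S130 to S137).
    Each point is an (x, y) tuple; returns the two points used for
    the tilt determination, or None when the rule is not applicable."""
    xu, yu = up
    xb, yb = bp
    xl, yl = lp
    xr, yr = rp
    # Steps S131-S132: the rule applies only when the upper and lower
    # edge points share an X-coordinate and the left and right edge
    # points share a Y-coordinate.
    if xu != xb or yl != yr:
        return None
    # Step S133: width H (equation 7) and height V (equation 8).
    h = xr - xl
    v = yb - yu
    # Steps S135-S137: pick the longer span (H/V > 1 iff H > V).
    if h > v:
        return (lp, rp)
    return (up, bp)
```

For example, with edge points that form a wide, flat cross, the left and right edge points are selected; with a tall, narrow cross, the upper and lower edge points are selected.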
Fig. 19 is a flow chart showing an example of the preprocessing in accordance with the second rule in step S9 of Fig. 3. As shown in Fig. 19, in step S140, the high speed processor 21 obtains the distance between the X-coordinate "XL" of the left edge point and the X-coordinate "XR" of the right edge point (i.e., the width "H"), and the distance between the Y-coordinate "YU" of the upper edge point and the Y-coordinate "YB" of the lower edge point (i.e., the height "V"), in accordance with the equations (7) and (8). In step S141, the high speed processor 21 calculates "H/V". In step S142, the high speed processor 21 determines whether or not "H/V" is larger than the predetermined value "HV1"; if it is larger the process proceeds to step S143, and otherwise it proceeds to step S144. In step S144, the high speed processor 21 determines whether or not "H/V" is smaller than the predetermined value "HV2"; if it is smaller the process proceeds to step S145, and otherwise the process returns to the main routine. As has been discussed above, whether or not the second rule is applicable is determined in steps S142 and S144.
In step S143, the high speed processor 21 sets the coordinates of the left edge point and the right edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3. On the other hand, in step S145, the high speed processor 21 sets the coordinates of the upper edge point and the lower edge point in the inner memory as the coordinates for determining the tilt of the sword 11, and the process proceeds to step S11 of Fig. 3.
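A corresponding sketch of the second-rule preprocessing of Fig. 19 (steps S140 to S145), under the same assumptions (illustrative names; the thresholds HV1 = 3/2 and HV2 = 2/3 are the example values given in the text, and the guard for V = 0 is an added assumption to keep the sketch total):

```python
HV1 = 3 / 2   # example value from the text
HV2 = 2 / 3   # the text states HV1 = 1/HV2

def second_rule(up, bp, lp, rp):
    """Return the two points for the tilt determination, or None
    when the second rule is not applicable (in which case the
    process falls through to the third rule)."""
    h = abs(rp[0] - lp[0])   # width H: |XR - XL|
    v = abs(bp[1] - up[1])   # height V: |YB - YU|
    if v == 0:               # degenerate case: treat as very wide
        return (lp, rp)
    ratio = h / v
    if ratio > HV1:          # step S142: clearly wider than tall
        return (lp, rp)
    if ratio < HV2:          # step S144: clearly taller than wide
        return (up, bp)
    return None              # neither: second rule not applicable
```

The band HV2 <= H/V <= HV1 is exactly the region where the image is too close to square for the horizontal/vertical choice to be reliable, which is why those cases fall through to the third rule.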
Fig. 20 is a flow chart showing an example of the preprocessing in accordance with the third rule in step S10 of Fig. 3. As shown in Fig. 20, in step S150, the high speed processor 21 calculates the lengths of the four sides "s1" to "s4" of the quadrilateral defined by the upper edge point "up", the lower edge point "bp", the left edge point "lp" and the right edge point "rp" as its vertices. In step S151, the high speed processor 21 determines whether or not the shortest side and the next shortest side among the four sides share one edge point.
In step S152, if one edge point is shared, the high speed processor 21 proceeds to step S153, and otherwise it proceeds to step S154. In step S153, the coordinates for determining the tilt of the sword 11 are set in the inner memory to the coordinates of the shared edge point and of the edge point opposed to it, and the process returns to the main routine. On the other hand, in step S154, since the shortest side and the next shortest side share none of the edge points, the coordinates for determining the tilt of the sword 11 are set in the inner memory to the coordinates of the center point of the shortest side and the coordinates of the center point of the next shortest side, and the process returns to the main routine.
Returning to Fig. 3, in step S11 the high speed processor 21 calculates the tilt of the straight line "SL" passing through the two points which are set in the internal memory in steps S8 to S10 (steps S136, S137, S143, S145, S153 and S154). Then, the high speed processor 21 classifies the calculated tilt of the straight line "SL" into one of eight tilts "a0" to "a7" and sets a shield tilt flag to a value in accordance with the classification result.
Fig. 21 is a view for explaining the tilt of the sword 11 which is determined by the high speed processor 21 of Fig. 2. In Fig. 21, the tilt "a0" is aligned with the X-axis of the differential image, and the tilt "a4" is aligned with the Y-axis of the differential image. As shown in Fig. 21, in the present embodiment the tilt of the straight line "SL" indicative of the tilt of the sword 11 is classified into one of the tilts "a0" to "a7". More specifically, angular ranges are defined respectively corresponding to the tilts "a0" to "a7", each of which indicates the center angular position of the corresponding angular range. Each of the angular ranges extends 11.25 degrees in the clockwise direction and 11.25 degrees in the counterclockwise direction from the corresponding center angular position. The high speed processor 21 determines which of the angular ranges the tilt of the straight line "SL" belongs to, in order to represent the tilt of the sword 11 by the one of the tilts "a0" to "a7" corresponding to that angular range. For example, in the case where the tilt of the straight line "SL" belongs to the angular range corresponding to the tilt "a0", the tilt of the sword 11 is classified into the tilt "a0".
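The classification in step S11 can be sketched as follows. The function name and the use of atan2 are assumptions, and whether "clockwise" corresponds to increasing or decreasing angle depends on whether the Y-axis of the differential image points up or down on the screen:

```python
import math

def classify_tilt(p1, p2):
    """Classify the line through p1 and p2 into one of the eight
    tilts a0..a7 of Fig. 21 (returned as an index 0..7).  a0 lies
    along the X-axis and a4 along the Y-axis; each tilt covers a
    22.5-degree range (11.25 degrees on either side of its center)."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    # A line has no direction, so fold its angle into [0, 180).
    angle = math.degrees(math.atan2(dy, dx)) % 180.0
    # Centers of a0..a7 lie at 0, 22.5, 45, ..., 157.5 degrees;
    # angles near 180 wrap around to a0.
    return round(angle / 22.5) % 8
```

Note that the result is independent of the order of the two points, as required for a line tilt: swapping p1 and p2 flips the angle by 180 degrees, which the fold into [0, 180) removes.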
Returning to Fig. 3, in step S12 the high speed processor 21 performs information processing by the use of the processing results of steps S3 to S11. Of the information processing in this case, the process of displaying images will be explained.
If the trajectory flag is turned on (refer to step S95 of Fig. 12), the high speed processor 21 stores in the inner memory the storage location information of the sword trajectory object in accordance with the value which is set in the sword orientation flag. Furthermore, in this case, the high speed processor 21 calculates the coordinates of the sword trajectory object in the screen coordinate system so that the sword trajectory object in accordance with the value set in the sword orientation flag contains the coordinates (xc, yc) obtained by converting the coordinates (Xc, Yc) of the latest representative point into the screen coordinate system. In this description, the two-dimensional coordinate system actually used in displaying images on the television monitor 7 is called the screen coordinate system.
Also, in the case where the trajectory flag is turned off and the shield flag is turned on, the high speed processor 21 stores the storage location information of the shield object in the inner memory in accordance with the value which is set in the shield tilt flag. Furthermore, in this case, the high speed processor 21 calculates the coordinates of the shield object in the screen coordinate system so that the shield object in accordance with the value set in the shield tilt flag contains the coordinates (xc, yc) obtained by converting the coordinates (Xc, Yc) of the latest representative point into the screen coordinate system.
In addition, the high speed processor 21 stores in the inner memory the storage location information of the background and other objects (for example, an enemy object and so forth) to be displayed on the television monitor 7, and calculates the coordinates of the background and other objects in the screen coordinate system.
Furthermore, the high speed processor 21 repeats step S13 while the determination in step S13 is "YES", i.e., while waiting for a video system synchronous interrupt (while there is no video system synchronous interrupt). Conversely, if "NO" is determined in step S13, i.e., if the CPU leaves the state of waiting for a video system synchronous interrupt (if the CPU is given a video system synchronous interrupt), the process proceeds to step S14.
In step S14, the high speed processor 21 performs the process of updating the screen (video frame) displayed on the television monitor 7 in accordance with the processing result of step S12, and the process proceeds to step S2. That is to say, the high speed processor 21 reads the image data from the ROM 23 on the basis of the storage location information of the background and each object stored in step S12 and the coordinates in the screen coordinate system, and performs the necessary processing in order to generate the video signal of the background and the respective objects. By this process, the sword trajectory object, the shield object and so forth are displayed on the television monitor 7 in accordance with the processing result of step S12. The sound process in step S15 is performed when a sound interrupt is issued, and the high speed processor 21 outputs music and other sound effects.
Fig. 22 is an explanatory view showing the animation of the sword trajectory object which is displayed in accordance with the orientation in which the sword 11 of Fig. 1 is swung. As shown in Fig. 22, a belt-like image (the sword trajectory object) has a smaller width "w" at first, gradually increases in width "w" as the animation advances (as time "t" passes), and thereafter decreases in width "w" as the animation further advances. This is an example of the sword trajectory object which is displayed when the sword orientation flag is set to the value indicative of the orientation "d4" shown in Fig. 13.
In this example, one animation picture is displayed per video frame, so that the 12 animation pictures are displayed in 12 video frames. Meanwhile, the video frame is updated at 1/60-second intervals. As has been discussed above, it is possible to display a sword trajectory like a sharp flash in response to the swinging motion of the sword 11 by changing the width "w" of the sword trajectory object as narrow -> wide -> narrow as the animation advances (as time "t" passes).
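The narrow -> wide -> narrow width profile can be sketched as follows. The frame count of 12 (one picture per 1/60-second video frame) comes from the text, while the maximum width and the linear rise and fall are assumptions chosen only for illustration:

```python
def trajectory_widths(frames=12, w_max=40):
    """Width "w" of the sword trajectory object for each animation
    picture: it rises over the first half of the frames and falls
    symmetrically over the second half."""
    half = frames // 2
    widths = []
    for t in range(frames):
        step = t + 1 if t < half else frames - t   # rise, then fall
        widths.append(max(1, round(w_max * step / half)))
    return widths
```

At 1/60 second per frame, the whole 12-frame flash lasts 0.2 seconds, which is short enough to read as a single sharp stroke.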
Meanwhile, in Fig. 22, the blacked-out portions represent transparent portions. In addition, there are portions which are shaded and colored white for illustration in the figure but are actually displayed with predetermined colors (which can include white) on the screen.
In this case, if the sword orientation flag is set to the value indicative of the orientation "d0" of Fig. 13, the sword trajectory objects of Fig. 22 are displayed after being horizontally flipped. Also, for the orientation "d2", there are images similar to those shown in Fig. 22 (but rotated by 90 degrees), and these images are used for the orientation "d6" after being vertically flipped. In addition, for the orientation "d1", there are images similar to those shown in Fig. 22 (but rotated by 45 degrees), and these images are used for the orientations "d3", "d5" and "d7" after being horizontally and vertically flipped.
Fig. 23 is a view showing examples of the shield objects "A0" to "A4" which are displayed in correspondence with the tilts "a0" to "a4" of Fig. 21. The shield objects "A0" to "A4" of Fig. 23 correspond respectively to the tilts "a0" to "a4" of Fig. 21. Accordingly, in the case where the shield tilt flag indicates one of the tilts "a0" to "a4", the corresponding one of the shield objects "A0" to "A4" is used and displayed as it is. In the case where the shield tilt flag indicates one of the tilts "a5" to "a7", the corresponding one of the shield objects "A3" to "A1" is used and displayed after being horizontally flipped.
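The selection described above amounts to a small lookup. A sketch, assuming the object names "A0" to "A4" of Fig. 23, a tilt index 0..7 for "a0" to "a7" of Fig. 21, and a hypothetical function name:

```python
def shield_object_for_tilt(tilt):
    """Return (shield object name, horizontally flipped?) for a
    tilt index 0..7 corresponding to "a0".."a7" of Fig. 21."""
    if tilt <= 4:
        # a0..a4 use A0..A4 as they are.
        return ("A%d" % tilt, False)
    # a5..a7 reuse A3..A1, horizontally flipped (e.g. a7 -> A1 flipped).
    return ("A%d" % (8 - tilt), True)
```

Only five shield images need to be stored because the tilts a5 to a7 are mirror images of a3 to a1 about the vertical axis.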
Fig. 24 is a view showing an example of the shield object which is displayed on the television monitor 7 of Fig. 1. As shown in Fig. 24, a shield object "A1r" is displayed on the television monitor 7 by horizontally flipping the shield object "A1" of Fig. 23. Accordingly, in this example, the tilt of the sword 11 is the tilt "a7" of Fig. 21. As has been discussed above, the shield object is displayed on the television monitor 7 in accordance with the tilt indicated by the shield tilt flag (i.e., the tilt of the sword 11). Meanwhile, as is apparent from Fig. 24, the shield object is displayed so as to extend from one side of the screen to the other. This is true also for the sword trajectory object (except for (k) and (l) of Fig. 22).
In this case, the coloration of the shield object is preferably transparent or semitransparent. This is because the objects located behind the shield object (for example, an enemy object and the like) can then be seen through it, so that the player 17 can manipulate the sword 11 while viewing them.
As has been discussed above, in the present embodiment the coordinates of two points are determined in accordance with the first to third rules, which are empirically derived, in order to calculate the tilt of the sword 11. Accordingly, it is possible to detect the tilt of the sword 11 easily and precisely.
Meanwhile, the present invention is not limited to the above embodiments, and a variety of variations and modifications may be effected without departing from the spirit and scope thereof, as described in the following exemplary modifications.
(1) Although the operation article 11 is sword-like in the above explanation, the shape of the operation article is not limited thereto. Also, the profile of the retroreflective sheet to be attached to the operation article is not limited to the profile of the retroreflective sheet 15.
(2) In the above example, a shield object corresponding to the tilt of the sword 11 is displayed on the television monitor 7. However, the object to be displayed corresponding to the tilt of the sword 11 is not limited thereto; any object having an arbitrary profile or configuration can be displayed.
While the present invention has been described in terms of embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described. The present invention can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting the present invention in any way.

Claims

1. A tilt detection method of detecting a tilt of an operation article which is held and given motion by an operator, comprising: emitting light in a predetermined cycle to the operation article, which has a reflecting object; imaging the operation article to which the light is emitted, and acquiring lighted image data including a plurality of pixel data items each of which comprises a luminance value; imaging the operation article to which the light is not emitted, and acquiring unlighted image data including a plurality of pixel data items each of which comprises a luminance value; generating differential image data by calculating the difference between the lighted image data and the unlighted image data; obtaining the coordinates of a pixel having the maximum horizontal coordinate among the pixels of which the image of the operation article is made up in a differential image on the basis of the differential image data; obtaining the coordinates of a pixel having the minimum horizontal coordinate among the pixels of which the image of the operation article is made up; obtaining the coordinates of a pixel having the minimum vertical coordinate among the pixels of which the image of the operation article is made up; obtaining the coordinates of a pixel having the maximum vertical coordinate among the pixels of which the image of the operation article is made up; and calculating the tilt of the operation article on the basis of a first reference point and a second reference point which are selected on the basis of a first edge point located in the pixel having the maximum horizontal coordinate, a second edge point located in the pixel having the minimum horizontal coordinate, a third edge point located in the pixel having the maximum vertical coordinate and a fourth edge point located in the pixel having the minimum vertical coordinate, wherein, in the step of calculating the tilt of the operation article, in the case where the first edge point, the second edge point, the third edge point and the fourth edge point are vertices of a quadrilateral having four sides among which the shortest side and the next shortest side share one of the first edge point, the second edge point, the third edge point and the fourth edge point, the first reference point and the second reference point are set respectively to the shared edge point and the edge point opposed to the shared edge point, and in the case where the first edge point, the second edge point, the third edge point and the fourth edge point are vertices of a quadrilateral having four sides among which the shortest side and the next shortest side share none of the first edge point, the second edge point, the third edge point and the fourth edge point, the first reference point and the second reference point are set respectively to the center point of the shortest side and the center point of the next shortest side.
2. The tilt detection method as claimed in claim 1 wherein in the case where there are a plurality of the pixels which have the maximum horizontal coordinate, the step of obtaining the coordinates of a pixel having the maximum horizontal coordinate sets the vertical coordinate of the first edge point to the average value of the vertical coordinates of the plurality of the pixels having the maximum horizontal coordinate; wherein in the case where there are a plurality of the pixels which have the minimum horizontal coordinate, the step of obtaining the coordinates of a pixel having the minimum horizontal coordinate sets the vertical coordinate of the second edge point to the average value of the vertical coordinates of the plurality of the pixels having the minimum horizontal coordinate; wherein in the case where there are a plurality of the pixels which have the maximum vertical coordinate, the step of obtaining the coordinates of a pixel having the maximum vertical coordinate sets the horizontal coordinate of the third edge point to the average value of the horizontal coordinates of the plurality of the pixels having the maximum vertical coordinate ; and wherein in the case where there are a plurality of the pixels which have the minimum vertical coordinate, the step of obtaining the coordinates of a pixel having the minimum vertical coordinate sets the horizontal coordinate of the fourth edge point to the average value of the horizontal coordinates of the plurality of the pixels having the minimum vertical coordinate .
3. The tilt detection method as claimed in claim 1 further comprising : in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the first edge point and the second edge point; in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the third edge point and the fourth edge point; and in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with which is greater the distance between the first edge point and the second edge point or the distance between the third edge point and the fourth edge point .
4. The tilt detection method as claimed in claim 2 further comprising: in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the first edge point and the second edge point; in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the third edge point and the fourth edge point; and in the case where the vertical coordinate of the first edge point is equal to the vertical coordinate of the second edge point and the horizontal coordinate of the third edge point is equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with which is greater, the distance between the first edge point and the second edge point or the distance between the third edge point and the fourth edge point.
5. The tilt detection method as claimed in claim 1 further comprising : in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with a ratio of the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point to the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point .
6. The tilt detection method as claimed in claim 2 further comprising : in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point, or on the basis of the coordinates of the third edge point and the fourth edge point, in accordance with a ratio of the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point to the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point .
7. The tilt detection method as claimed in claim 1, further comprising: in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point if a ratio of the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point to the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point is greater than a first constant which exceeds "1"; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the third edge point and the fourth edge point if said ratio is smaller than a second constant which is smaller than "1".
8. The tilt detection method as claimed in claim 2, further comprising: in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, then obtaining the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point; in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the first edge point and the second edge point if a ratio of the distance between the horizontal coordinate of the first edge point and the horizontal coordinate of the second edge point to the distance between the vertical coordinate of the third edge point and the vertical coordinate of the fourth edge point is greater than a first constant which exceeds "1"; and in the case where the vertical coordinate of the first edge point is not equal to the vertical coordinate of the second edge point and/or the horizontal coordinate of the third edge point is not equal to the horizontal coordinate of the fourth edge point, instead of the step of calculating the tilt of the operation article on the basis of the first reference point and the second reference point, calculating the tilt of the operation article on the basis of the coordinates of the third edge point and the fourth edge point if said ratio is smaller than a second constant which is smaller than "1".
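Claims 6-8 select between two candidate edge-point pairs by comparing the ratio of the horizontal spread of the first pair to the vertical spread of the second pair against two constants. The following Python sketch illustrates that selection logic only; the edge-point ordering and the constant values k1 = 2.0 and k2 = 0.5 are assumptions for illustration, since the claims require merely that the first constant exceed "1" and the second be smaller than "1".

```python
import math

def tilt_from_edge_points(p1, p2, p3, p4, k1=2.0, k2=0.5):
    """Select an edge-point pair and compute a tilt angle in radians.

    p1, p2: first and second edge points (assumed horizontal extremes).
    p3, p4: third and fourth edge points (assumed vertical extremes).
    k1 (> 1) and k2 (< 1) stand in for the first and second constants of
    claims 7 and 8; their actual values are not given in the claims.
    """
    dx = abs(p1[0] - p2[0])  # distance between horizontal coordinates of p1, p2
    dy = abs(p3[1] - p4[1])  # distance between vertical coordinates of p3, p4
    ratio = dx / dy if dy else float("inf")
    if ratio > k1:
        # The article lies closer to horizontal: p1-p2 span the longer axis.
        a, b = p1, p2
    elif ratio < k2:
        # The article lies closer to vertical: p3-p4 span the longer axis.
        a, b = p3, p4
    else:
        # Neither pair dominates: fall back to the reference-point method.
        return None
    return math.atan2(b[1] - a[1], b[0] - a[0])
```

For example, edge points (0, 0) and (10, 1) with (5, -1) and (5, 2) give a ratio of 10/3, so the first pair is used and the tilt is atan2(1, 10).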
9. An entertainment system comprising: an operation article that is operated by a user when the user is enjoying said entertainment system; an imaging device operable to capture an image of said operation article; and an information processing apparatus connected to said imaging device, and operable to receive the images of said operation article from said imaging device and determine tilts of said operation article on the basis of the images of said operation article, wherein said information processing apparatus at least includes: a unit operable to obtain four representative points for representing a profile of said operation article in the image of said operation article; a unit operable to determine whether or not there is a representative point, among the four representative points, which is shared by the shortest side and the next shortest side of a quadrilateral which is defined by the four representative points as its vertices; and a unit operable to calculate the tilt of said operation article on the basis of the tilt of a straight line passing through the shared representative point and the representative point opposed to the shared representative point when there is the shared representative point, and calculate the tilt of said operation article on the basis of the tilt of a straight line passing through the center point of the shortest side of the quadrilateral and the center point of the next shortest side of the quadrilateral when there is not the shared representative point.
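Claim 9 can be read as a small geometric procedure: rank the quadrilateral's sides by length, test whether the shortest and next-shortest sides share a vertex, and take the tilt from either the diagonal through that shared vertex or the line through the two sides' center points. A minimal Python sketch follows, under the assumption (not fixed by the claim) that the four representative points are given in perimeter order:

```python
import math

def tilt_from_quadrilateral(points):
    """Estimate a tilt angle (radians) from four representative points.

    `points` lists the quadrilateral's vertices in perimeter order; this
    ordering, and the function itself, are illustrative assumptions
    rather than the patented implementation.
    """
    n = 4

    def side_len(i):
        # Side i joins vertex i and vertex (i + 1) % n.
        (x1, y1), (x2, y2) = points[i], points[(i + 1) % n]
        return math.hypot(x2 - x1, y2 - y1)

    shortest, next_shortest = sorted(range(n), key=side_len)[:2]
    shared = ({shortest, (shortest + 1) % n}
              & {next_shortest, (next_shortest + 1) % n})

    if shared:
        # The shortest and next-shortest sides meet at a vertex: use the
        # line through that vertex and the vertex opposed to it.
        i = shared.pop()
        p, q = points[i], points[(i + 2) % n]
    else:
        # Otherwise use the line through the two sides' center points.
        def midpoint(i):
            (x1, y1), (x2, y2) = points[i], points[(i + 1) % n]
            return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

        p, q = midpoint(shortest), midpoint(next_shortest)

    return math.atan2(q[1] - p[1], q[0] - p[0])
```

For a long thin rectangle such as (0, 0), (10, 0), (10, 1), (0, 1), the two short ends do not share a vertex, so the line through their center points is used and the computed tilt is horizontal.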
PCT/JP2006/301610 2005-01-27 2006-01-25 Tilt detection method and entertainment system WO2006080546A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-019064 2005-01-27
JP2005019064 2005-01-27

Publications (1)

Publication Number Publication Date
WO2006080546A1 (en) 2006-08-03

Family

ID=36740560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/301610 WO2006080546A1 (en) 2005-01-27 2006-01-25 Tilt detection method and entertainment system

Country Status (1)

Country Link
WO (1) WO2006080546A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5919464B2 * 1980-06-06 1984-05-07 Fujitsu Limited Pattern recognition device
WO2004002593A1 (en) * 2002-06-27 2004-01-08 Ssd Company Limited Information processor having input system using stroboscope


Similar Documents

Publication Publication Date Title
EP2492844B1 (en) Image recognition program, image recognition apparatus, image recognition system, and image recognition method
EP2492869B1 (en) Image processing program, image processing apparatus, image processing system, and image processing method
US9683943B2 (en) Inspection apparatus, inspection method, and program
EP1324269B1 (en) Image processing apparatus, image processing method, record medium, computer program, and semiconductor device
US8625898B2 (en) Computer-readable storage medium, image recognition apparatus, image recognition system, and image recognition method
US8571266B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
JP2002298145A (en) Position detector and attitude detector
JP2011152297A (en) Program, information storage medium, game system
US8718325B2 (en) Computer-readable storage medium, image processing apparatus, image processing system, and image processing method
JP5756322B2 (en) Information processing program, information processing method, information processing apparatus, and information processing system
US20080031544A1 (en) Tilt Detection Method and Entertainment System
JP2008225767A (en) Image processing program and image processor
JP2011154574A (en) Program, information storage medium, and game system
TW201705088A (en) Generating a disparity map based on stereo images of a scene
KR20210105302A (en) Information processing apparatus, information processing method, and computer program
JP2002008041A (en) Action detecting device, action detecting method, and information storage medium
JP4635164B2 (en) Tilt detection method, computer program, and entertainment system
JP2005011097A (en) Face existence determining device and face existence determining program
WO2006080546A1 (en) Tilt detection method and entertainment system
US8345001B2 (en) Information processing system, entertainment system, and information processing system input accepting method
US8705869B2 (en) Computer-readable storage medium, image recognition apparatus, image recognition system, and image recognition method
JP2007295990A (en) Moving direction calculation device and moving direction calculation program
JP2667885B2 (en) Automatic tracking device for moving objects
JP2009044631A (en) Object detection device
JP2006134339A (en) Identification method, identification device and traffic control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase

Ref document number: 06712753

Country of ref document: EP

Kind code of ref document: A1

WWW Wipo information: withdrawn in national office

Ref document number: 6712753

Country of ref document: EP