WO2021090592A1 - Input system, input method, and program - Google Patents

Input system, input method, and program

Info

Publication number
WO2021090592A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
input device
area
orientation
acquired
Prior art date
Application number
PCT/JP2020/036076
Other languages
French (fr)
Japanese (ja)
Inventor
哲法 中山
Original Assignee
Sony Interactive Entertainment Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc.
Priority to JP2021554835A (JP7340031B2)
Priority to CN202080075076.4A (CN114600069B)
Publication of WO2021090592A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H18/00: Highways or trackways for toys; Propulsion by special interaction between vehicle and track
    • A63H18/02: Construction or arrangement of the trackway
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63H: TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H5/00: Musical or noise-producing devices for additional toy effects other than acoustical
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K: SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00: Acoustics not otherwise provided for
    • G10K15/04: Sound-producing devices

Definitions

  • the present invention relates to an input system, an input method and a program.
  • the pattern in which the position information is encoded is printed on the sheet
  • the pattern taken by the camera provided on the tip of the pen or the like is decoded to acquire the position information.
  • the position information indicates the coordinates in the sheet
  • the acquired position information indicates the position on the sheet pointed by the tip of the pen or the like.
  • the input system determines which of the plurality of predetermined areas the acquired position information is included in, and inputs information according to the determined area. The entered information will be used for subsequent processing.
  • Patent Document 1 discloses that a paper on which a game selection area is printed is read by a camera built in an input device, and a game is started according to the reading result.
  • the amount of information that can be input by one operation is small.
  • the present invention has been made in view of the above problems, and an object of the present invention is to provide a technique for improving an input interface for a sheet on which a pattern in which position information is encoded is printed.
  • An input system according to the present invention includes a sheet on which a position-encoding pattern is printed; an input device including a camera that captures the pattern; acquisition means for acquiring the position and orientation of the input device based on the pattern in the image captured by the camera; and execution means for executing processing based on the position and orientation acquired by the acquisition means.
  • An input method according to the present invention includes a step of acquiring the position and orientation of the input device recognized from an image, captured by the input device, of a sheet on which a position-encoding pattern is printed, and a step of executing processing based on the acquired position and orientation.
  • A program according to the present invention causes a computer to function as acquisition means for acquiring the position and orientation of the input device recognized from an image, captured by the input device, of a sheet on which a position-encoding pattern is printed, and as execution means for executing processing based on the acquired position and orientation.
  • According to the present invention, the input interface for a sheet on which a position-encoding pattern is printed is improved.
  • the executing means may output sound and determine the parameters of the output sound according to the user's operation based on the acquired position and orientation.
  • the sheet comprises a setting area, and when the position of the input device is included in the setting area, the executing means may determine the type, pitch, and volume of the output sound according to the user's operation based on the acquired orientation.
  • when the position of the input device is included in the setting area, the executing means may determine any of the type, pitch, and volume of the sound according to the user's operation based on the acquired orientation, and output the determined sound.
  • the sheet further includes a plurality of playing areas. When the position of the input device is included in any of the plurality of playing areas, the execution means outputs a sound having a pitch corresponding to the playing area including that position; when the position of the input device is included in the setting area, the execution means may change the type, pitch, volume, or musical scale of the output sound based on the acquired orientation.
  • the executing means may determine the parameters of the output sound according to the amount of change in the acquired orientation while the acquired position is within a predetermined region.
  • when the acquired position is within a predetermined range, the executing means may determine the parameters of the sound output by the input device based on the orientation of the input device with respect to the sheet.
  • the executing means may select information based on the acquired position and orientation, and execute processing based on the selected information.
  • FIG. 1 shows an example of the operation system according to the first embodiment. FIG. 2 shows the hardware configuration of the operation system. FIG. 3 shows the bottom surface of the input device. FIG. 4 shows an example of a sheet and the input device. FIG. 5 shows an example of a sheet. FIG. 6 schematically shows the pattern on the sheet. FIG. 7 is a block diagram showing the functions realized by the operation system. FIG. 8 is a flow chart outlining the processing of the operation system. FIGS. 9 and 10 show other examples of the sheet. FIG. 11 is a flow chart showing an example of the processing for the setting area.
  • the user holds a self-propelled input device and inputs information by pointing to an area.
  • the process is executed according to the input information.
  • FIG. 1 is a diagram showing an example of an operation system according to the first embodiment.
  • the operation system according to the present invention includes a device control device 10, position input devices 20a and 20b, a controller 17, and a cartridge 18.
  • the position input devices 20a and 20b are self-propelled devices having a camera 24, and both have the same function. In the following, these position input devices 20a and 20b will be referred to as position input devices 20 unless otherwise specified.
  • the device control device 10 controls the position input device 20 via radio.
  • the device control device 10 has a recess 32, and when the position input device 20 is fitted in the recess 32, the device control device 10 charges the position input device 20.
  • the controller 17 is an input device that acquires an operation by the user, and is connected to the device control device 10 by a cable.
  • the cartridge 18 has a built-in non-volatile memory.
  • FIG. 2 is a diagram showing an example of the hardware configuration of the operation system according to the embodiment of the present invention.
  • the device control device 10 includes a processor 11, a storage unit 12, a communication unit 13, and an input / output unit 14.
  • the position input device 20 includes a processor 21, a storage unit 22, a communication unit 23, a camera 24, and two motors 25.
  • the device control device 10 may be a dedicated device optimized for controlling the position input device 20, or may be a general-purpose computer.
  • the processor 11 operates according to the program stored in the storage unit 12, and controls the communication unit 13, the input / output unit 14, and the like.
  • the processor 21 operates according to a program stored in the storage unit 22, and controls the communication unit 23, the camera 24, the motor 25, and the like.
  • the program is stored and provided in a computer-readable storage medium such as a flash memory in the cartridge 18, but may be provided via a network such as the Internet.
  • the storage unit 12 is composed of a DRAM and a non-volatile memory built in the device control device 10, a non-volatile memory in the cartridge 18, and the like.
  • the storage unit 22 is composed of a DRAM, a non-volatile memory, and the like.
  • the storage units 12 and 22 store the above program. Further, the storage units 12 and 22 store information and calculation results input from the processors 11, 21 and the communication units 13, 23 and the like.
  • Communication units 13 and 23 are composed of integrated circuits, antennas, etc. for communicating with other devices.
  • the communication units 13 and 23 have a function of communicating with each other according to, for example, a Bluetooth (registered trademark) protocol.
  • Based on the control of the processors 11 and 21, the communication units 13 and 23 input the information received from other devices to the processors 11 and 21 and the storage units 12 and 22, and transmit information to other devices.
  • the communication unit 13 may have a function of communicating with another device via a network such as a LAN.
  • the input / output unit 14 includes a circuit for acquiring information from an input device such as a controller 17 and a circuit for controlling an output device such as an audio output device or an image display device.
  • the input / output unit 14 acquires an input signal from the input device, and inputs the converted information of the input signal to the processor 11 and the storage unit 12. Further, the input / output unit 14 causes the speaker to output the sound and outputs the image to the display device based on the control of the processor 11 or the like.
  • the motor 25 is a so-called servomotor in which the rotation direction, the amount of rotation, and the rotation speed are controlled by the processor 21.
  • the camera 24 is arranged so as to photograph the area below the position input device 20, and photographs the pattern 71 (see FIG. 6) printed on the sheet 31 (see FIGS. 4 and 5) on which the position input device 20 is placed.
  • the pattern 71 is printed on the sheet 31 so as to be recognizable in the infrared frequency domain, and the camera 24 captures infrared images.
  • FIG. 3 is a diagram showing an example of the position input device 20.
  • FIG. 3 is a view of the position input device 20 as viewed from below.
  • the position input device 20 further includes a power switch 250, a switch 222, and two wheels 254.
  • One motor 25 is assigned to each of the two wheels 254, and the motor 25 drives the assigned wheel 254.
  • FIG. 4 is a diagram showing an example of the sheet 31 and the position input devices 20a and 20b.
  • a cover 75 is attached to the position input device 20.
  • the cover 75 has a shape that does not interfere with the camera 24's photographing of the sheet 31 and that lets the user easily recognize the orientation of the position input device 20.
  • the cover 75 is placed over the position input device 20 and covers everything except its lower surface.
  • FIG. 18 is a view showing an example of the cover 75, and is a view showing the front surface and the side surface of the cover 75.
  • the cover 75 shown in FIG. 18 is covered from above the position input device 20a.
  • the cover 75 makes it easier for the user to recognize the front of the position input device 20.
  • the user holds the position input device 20a with his right hand and holds the position input device 20b with his left hand. Then, when the user places the position input device 20 in the area on the sheet 31, information corresponding to the area is input, and the sound corresponding to the input information is output from the device control device 10.
  • FIG. 5 is a diagram showing an example of the sheet 31.
  • An image that can be visually recognized by the user is printed on the sheet 31.
  • a plurality of setting areas 41a to 41i, a plurality of keyboard areas 43, a plurality of rhythm areas 44, and a plurality of chord areas 45 are provided on the sheet 31a shown in FIG. 5, and the user can visually recognize these areas.
  • a pattern 71 that can be read by the camera 24 is further printed on the sheet 31.
  • FIG. 6 is a diagram schematically showing the pattern 71 on the sheet 31.
  • Patterns 71 of a predetermined size (for example, 0.2 mm square) are printed on the sheet 31.
  • Each of the patterns 71 is an image in which the coordinates of the position where the pattern 71 is arranged are encoded.
  • the sheet 31 is assigned an area corresponding to the size of the sheet 31 in the coordinate space that can be represented by the encoded coordinates.
  • the camera 24 of the position input device 20 photographs the pattern 71 printed on the sheet 31 or the like, and the position input device 20 or the device control device 10 decodes the pattern 71 to acquire the coordinates.
  • the position input device 20 or the device control device 10 detects the orientation of the position input device 20 (for example, the angle A from a reference direction) by detecting the inclination of the pattern 71 in the image captured by the camera 24.
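The patent does not give the decoding algorithm, but the orientation step can be sketched as follows, assuming a hypothetical decoder that reports the direction of pattern 71's grid axis in camera-image coordinates; the angle A is then the inclination of that axis from the reference direction.

```python
import math

def device_orientation_deg(grid_axis_x: float, grid_axis_y: float) -> float:
    """Angle A of the position input device 20 from the sheet's reference
    direction, in degrees.

    grid_axis_x, grid_axis_y: direction of the pattern's horizontal grid
    axis as seen in the camera image (hypothetical decoder output; the
    patent only states that the inclination of pattern 71 is used).
    Returns an angle in [0, 360).
    """
    angle = math.degrees(math.atan2(grid_axis_y, grid_axis_x))
    return angle % 360.0
```

A grid axis pointing along the image's +x direction yields 0 degrees, i.e. the device faces the sheet's reference direction.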
  • the plurality of keyboard areas 43 are arranged side by side in the horizontal direction.
  • the keyboard area 43 is an area for instructing the output of the performance sound.
  • a sound is output.
  • the further to the right the keyboard area 43 on which the position input device 20 is placed, the higher the output sound.
  • the plurality of rhythm regions 44 are arranged above the keyboard region 43 in FIG.
  • the rhythm area 44 is an area for instructing the output of the performance sound.
  • the sound corresponding to the arranged rhythm area 44 is output from the device control device 10.
  • the plurality of chord regions 45 are arranged on the right side of the rhythm region 44 in FIG.
  • the chord area 45 is an area for instructing the output of the performance sound.
  • the plurality of setting areas 41a to 41i are areas for acquiring instructions for setting sound parameters output when the position input device 20 is arranged in the keyboard area 43 or the rhythm area 44. In the following, when distinction is unnecessary, these are referred to as setting areas 41.
  • the setting area 41 corresponds to the parameter of the output sound.
  • the setting area 41a is an area for designating the parameters of the volume of the output sound
  • the setting area 41b is an area for adjusting the parameters of the pitch of the sound output when the position input device 20 is arranged in the keyboard area 43.
  • the setting area 41d is an area for setting the musical scale (for example, major, minor, pentatonic, or Ryukyu scale) of the sounds associated with the plurality of keyboard areas 43.
  • the setting areas 41h, 41i, and 41g are areas for setting parameters of the type of sound output when the position input device 20 is arranged in the keyboard area 43, the rhythm area 44, and the chord area 45, respectively.
  • FIG. 7 is a block diagram showing the functions realized by the operation system.
  • the operation system functionally includes a position acquisition unit 51, an information input unit 52, an application execution unit 58, and a voice output unit 59.
  • the information input unit 52 functionally includes an area determination unit 53 and an input information determination unit 55.
  • These functions are mainly realized by the processor 11 included in the device control device 10 executing a program stored in the storage unit 12 and controlling the position input device 20 via the communication unit 13. Some functions, such as part of the position acquisition unit 51, may instead be realized by the processor 21 included in the position input device 20 executing a program stored in the storage unit 22, exchanging data with the device control device 10 via the communication unit 23, and controlling the camera 24 and the motor 25.
  • the position acquisition unit 51 recognizes the coordinate-encoding pattern 71 in the image captured by the camera 24, and acquires the position coordinates and the orientation of the position input device 20 from the coordinates indicated by the pattern 71.
  • the information input unit 52 acquires input information input by the user based on the position and orientation acquired by the position acquisition unit 51.
  • the area determination unit 53 included in the information input unit 52 determines in which area the position of the position input device 20 is included.
  • the input information determination unit 55 included in the information input unit 52 determines the input information based on the area including the position of the position input device 20. Further, when the area including the position of the position input device 20 is a predetermined area, the input information determination unit 55 determines the input information based on that area and the orientation of the position input device 20.
  • the input information determination unit 55 also determines the input information based on whether the position of one position input device 20 is included in a certain area and whether the position of the other position input device 20 is included in another area.
  • the application execution unit 58 executes the process based on the acquired input information.
  • the application execution unit 58 sets sound parameters based on the acquired input information.
  • the audio output unit 59 included in the application execution unit 58 outputs a sound according to the set parameters.
  • FIG. 8 is a flow chart showing an outline of processing of the operation system.
  • the position input device 20a will be referred to as a first device
  • the position input device 20b will be referred to as a second device.
  • the process shown in FIG. 8 is repeatedly executed.
  • the position acquisition unit 51 determines whether the first device has photographed the surface of the sheet 31 (step S101). The position acquisition unit 51 may make this determination based on whether or not an image element peculiar to the pattern 71 is present in the image captured by the camera 24 of the first device.
  • the position acquisition unit 51 acquires the first position which is the position on the sheet 31 of the first device and the direction of the first device.
  • the area determination unit 53 detects the area including the acquired first position (step S103).
  • In step S102, the position acquisition unit 51 acquires the orientation of the first device based on the inclination, in the image, of the pattern 71 captured by the camera 24 of the first device.
  • In step S103, the area determination unit 53 selects the area including the first position based on the coordinates of the first position and on information relating each of the plurality of areas to a coordinate range on the sheet 31.
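The lookup in step S103 can be sketched as a simple table of coordinate ranges. The area names and rectangles below are illustrative placeholders, not values from the patent.

```python
# Hypothetical area table: name -> (x0, y0, x1, y1) in sheet coordinates.
AREAS = {
    "setting_41a": (0, 0, 40, 40),
    "keyboard_43_0": (0, 100, 30, 140),
    "keyboard_43_1": (30, 100, 60, 140),
}

def area_containing(x: float, y: float):
    """Return the name of the area whose coordinate range contains (x, y),
    or None when the position lies outside every registered area."""
    for name, (x0, y0, x1, y1) in AREAS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

Because each sheet's pattern encodes a disjoint coordinate range, the same table can also hold areas of several sheets at once, which is how the multi-sheet determination described below can work from coordinates alone.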
  • the processing corresponding to steps S101 to S103 is also performed on the second device.
  • the position acquisition unit 51 acquires the second position which is the position on the sheet 31 of the second device and the direction of the second device.
  • the area determination unit 53 detects the area including the acquired second position (step S106).
  • the area determined by the area determination unit 53 does not have to be limited to the setting areas 41a to 41i, the keyboard area 43, the rhythm area 44, and the chord area 45 on a single sheet 31.
  • the area determination unit 53 may determine whether the first position or the second position is included in an area on another sheet 31. Patterns 71 encoding coordinates in different ranges are printed on different sheets 31. As a result, even when areas are arranged across a plurality of sheets 31, an area is determined simply from the coordinates of the first position or the second position.
  • FIG. 9 is a diagram showing another example of the sheet 31.
  • a plurality of character areas 46, a plurality of switching areas 47, and a free area 42 are provided on the sheet 31b shown in FIG. 9, and the user can visually recognize these areas.
  • the user arranges the position input device 20 on the sheet 31b shown in FIG. 9 and points to the character area 46 and the switching area 47, so that the operation system outputs the voice of the character corresponding to the pointed area.
  • the plurality of character areas 46 are areas for selecting characters.
  • the number of character areas 46 is smaller than the number of character types that can be input.
  • the character area 46 is an area in which characters mainly indicating seion (unvoiced syllables) in Japanese are displayed.
  • the character area 46 does not itself indicate voiced sounds (dakuon), semi-voiced sounds (handakuon), or contracted sounds (yoon); combining the switching area 47, described later, with the character area 46 makes it possible to input dakuon, handakuon, yoon, or combinations thereof.
  • the plurality of switching areas 47 are areas for inputting more types of characters than the number of character areas 46.
  • Each of the plurality of character areas 46 is associated with one of a plurality of candidate values included in a certain candidate set, and is also associated with one of the candidate values included in each of the other candidate sets.
  • the candidate value corresponding to the indicated character area 46 is selected as input information.
  • the plurality of switching regions 47 specifically include a region indicating dakuon, a region indicating handakuon, a region indicating yoon, a region indicating sokuon (geminate consonants), a region indicating a combination of dakuon and yoon, and a region indicating a combination of handakuon and yoon. The processing related to these areas will be described later.
  • the free area 42 included in the sheet 31b is an area for inputting a set value according to the planar position of the position input device 20 when the position input device 20 is arranged on the sheet 31b.
  • Although the x-axis Lx and the y-axis Ly are drawn with broken lines in the free region 42 of FIG. 9, they need not be printed on the sheet.
  • a plurality of items may be set based on the movement of one position input device 20 in the x direction and in the y direction. For example, the pitch of the output sound may be determined according to the y coordinate of the position of the position input device 20, and the mixing volume for the L/R speakers may be determined according to the x coordinate.
  • Since the orientation of the position input device 20 is not used here, the free area 42 can receive, with a single position input device 20, input corresponding to two sliders: one for movement in the x-axis direction and one for movement in the y-axis direction.
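The two-slider behaviour of the free area 42 can be sketched as below. The pitch range, the free-area dimensions, and the MIDI-like note numbering are assumptions for illustration; the patent specifies none of them.

```python
def free_area_params(x: float, y: float, width: float = 100.0, height: float = 100.0):
    """Map a position in the free area 42 to two independent settings:
    the y coordinate to a pitch in a hypothetical MIDI-like range, and
    the x coordinate to an L/R mixing balance in [0.0, 1.0]
    (0.0 = full left, 1.0 = full right)."""
    pan = min(max(x / width, 0.0), 1.0)
    # Span two octaves upward from an assumed base note of 48.
    pitch = 48 + round(min(max(y / height, 0.0), 1.0) * 24)
    return pitch, pan
```

Moving the device along y changes only the pitch and moving it along x changes only the mix, so one device provides both inputs at once.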
  • FIG. 10 is a diagram showing another example of the sheet 31.
  • a faucet region 48 and a glass region 49 are provided on the sheet 31c shown in FIG. 10, and the user can visually recognize these regions.
  • the faucet area 48 is an area for acquiring, from the amount of rotation of the position input device 20 arranged on it, an instruction as to whether or not to pour water into the glass in a simulated manner.
  • the glass region 49 is a region for outputting the sound of a glass containing water in a pseudo manner. The processing related to these areas will be described later.
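The faucet area is an instance of determining a parameter from the amount of change in the acquired orientation while the position stays in one region. A toy model, with an assumed degrees-per-unit-of-water constant and the assumption that clockwise rotation opens the tap:

```python
class Faucet:
    """Toy model of the faucet area 48: accumulated clockwise rotation of
    the device while it sits on the area fills a virtual glass."""

    DEG_PER_UNIT = 90.0  # assumed: a quarter turn adds one unit of water

    def __init__(self):
        self.water = 0.0
        self._last_angle = None

    def update(self, angle_deg: float) -> float:
        """Feed the latest acquired orientation; returns the water level."""
        if self._last_angle is not None:
            # Signed smallest difference between successive angles, in (-180, 180].
            delta = (angle_deg - self._last_angle + 180.0) % 360.0 - 180.0
            if delta > 0:  # clockwise turn opens the tap
                self.water += delta / self.DEG_PER_UNIT
        self._last_angle = angle_deg
        return self.water
```

The glass region 49 could then pick the pitch of the simulated glass sound from the accumulated water level.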
  • the information input unit 52 acquires input information by processing according to the type of the region including the first position, and the voice output unit 59 outputs sound (step S107).
  • the information input unit 52 acquires input information by processing according to the type of the region including the second position, and the voice output unit 59 outputs sound (step S108).
  • Hereinafter, the details of the process of step S107, that is, the process according to the type of the region, will be described.
  • the process of step S108 is the same as that of step S107 with the first position and orientation of the first device exchanged for the second position and orientation of the second device, so its detailed description is omitted.
  • FIG. 11 is a flow chart showing an example of processing for the setting area 41.
  • FIG. 11 shows a process of acquiring input information and outputting a sound when the first position is included in the setting area 41 in step S107.
  • the input information determination unit 55 acquires correspondence information indicating the correspondence between the direction of the position input device 20 and the input value according to the setting area 41 including the first position (step S301).
  • Correspondence information may include a plurality of items, and each item may associate a range of directions with an input value.
  • the input value is a set value of the parameter corresponding to the setting area 41.
  • Correspondence information is prepared in advance for each of the setting areas 41a to 41i.
  • the input information determination unit 55 acquires an input value based on the acquired correspondence information and the direction of the first device (step S302).
  • the input information determination unit 55 acquires, for example, an input value associated with an item having a range including the direction of the first device among the items included in the corresponding information.
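Steps S301 and S302 can be sketched as a table lookup. The angle ranges and input values below are invented for illustration; the patent only says that each item of the correspondence information associates a range of orientations with an input value.

```python
# Hypothetical correspondence information for the volume setting area 41a:
# each item pairs a half-open range of orientations (degrees) with an
# input value. Ranges may wrap past 360 (e.g. 315..45).
VOLUME_ITEMS = [
    ((315.0, 45.0), 0),   # pointing "up": mute
    ((45.0, 135.0), 1),   # low
    ((135.0, 225.0), 2),  # medium
    ((225.0, 315.0), 3),  # high
]

def input_value(items, direction_deg: float):
    """Step S302: return the input value of the item whose range contains
    the acquired orientation, or None if no item matches."""
    d = direction_deg % 360.0
    for (lo, hi), value in items:
        inside = lo <= d < hi if lo < hi else d >= lo or d < hi
        if inside:
            return value
    return None
```

Step S303 would then store the returned value in the parameter tied to the setting area 41 containing the first position.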
  • the application execution unit 58 sets the acquired input value in the parameter corresponding to the setting area 41 to which the first position belongs (step S303).
  • the volume is set according to the direction.
  • the pitch of the sound (the pitch of the reference sound) output when any one of the keyboard areas 43 is pointed to changes according to the direction thereof.
  • the heights of the keyboard regions 43 are set according to the direction while maintaining the relative difference in pitch between the keyboard regions 43.
  • the application execution unit 58 determines the scale of the scale (for example, major, etc.) of the sound associated with the plurality of keyboard areas 43 according to the direction in which the first device is arranged. Minor, pentatonic, or Ryukyu scale) may be set.
  • the application execution unit 58 is located in the keyboard area 43, the rhythm area 44, and the chord area 45, respectively, according to the direction in which the first device is arranged.
  • the type of sound output when the input device 20 is arranged may be set.
  • the area required for inputting information can be reduced. Further, since the rotation of the position input device 20 is associated with the operation of the knob, the user can input the value more intuitively.
  • When an input value is set in the parameter, the audio output unit 59 outputs a confirmation sound according to the parameter (step S304). By outputting the confirmation sound, the user can easily recognize the current setting, which makes setting easier.
  • FIG. 12 is a flow chart showing an example of processing for the keyboard area 43.
  • FIG. 12 shows a process of acquiring input information and outputting a sound when the first position is included in the keyboard area 43 in step S107.
  • the input information determination unit 55 acquires identification information indicating which keyboard area 43 includes the first position (step S401).
  • the input information determination unit 55 acquires, based on the orientation of the first device, whether or not the pitch is to be changed (step S402). More specifically, the input information determination unit 55 acquires, as input information, an instruction to raise the pitch by a semitone when the orientation of the first device is tilted to the right by a predetermined angle or more with respect to the reference direction of the sheet 31. Further, the input information determination unit 55 may acquire, as input information, an instruction to lower the pitch by a semitone when the orientation of the first device is tilted to the left by a predetermined angle or more with respect to the reference direction of the sheet 31.
  • the application execution unit 58 determines the pitch of the output sound based on the scale and the pitch of the reference sound, the acquired identification information, and whether or not the pitch of the sound is changed (step S403). More specifically, the application execution unit 58 obtains a relative pitch based on the scale and the identification information of the keyboard area 43, and determines the obtained relative pitch and the reference pitch. Based on this, the absolute pitch of the output sound is obtained, and if there is a change in the pitch, the pitch is changed by a semitone.
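Steps S402 and S403 can be sketched as follows. The scale table, the MIDI-like note numbers, and the 30-degree threshold are assumptions; the patent only speaks of relative pitches per keyboard area, a reference pitch, and "a predetermined angle".

```python
# Hypothetical scale tables: semitone offsets of each keyboard area 43
# from the reference (base) note. Only "major" is shown; other scales
# (minor, pentatonic, Ryukyu) would be defined analogously.
SCALES = {"major": [0, 2, 4, 5, 7, 9, 11, 12]}

def output_pitch(scale: str, base_note: int, key_index: int,
                 tilt_deg: float, threshold: float = 30.0) -> int:
    """Relative pitch from the pointed keyboard area plus the reference
    pitch, shifted a semitone up (tilt right) or down (tilt left) when
    the device is tilted past the threshold angle."""
    pitch = base_note + SCALES[scale][key_index]
    if tilt_deg >= threshold:
        pitch += 1   # tilted right: raise by a semitone
    elif tilt_deg <= -threshold:
        pitch -= 1   # tilted left: lower by a semitone
    return pitch
```

With base note 60 (middle C), pointing at the third keyboard area while tilting right would select E raised to F, matching the semitone behaviour described above.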
  • the audio output unit 59 included in the application execution unit 58 outputs a sound of the determined pitch, with the sound type and volume set as parameters (step S404).
  • the pitch of the output sound can be raised or lowered by a semitone using the direction of the position input device 20 when pointing to the keyboard area 43.
  • the application execution unit 58 may determine the length of the output sound (the type of musical note corresponding to the output sound) according to the orientation of the position input device 20. For example, it is difficult for a child user to accurately adjust how long the position input device 20 stays in the keyboard area 43, so the child user can more easily indicate the length of the sound by using the orientation.
  • FIG. 13 is a flow chart showing an example of processing for the rhythm region 44.
  • FIG. 13 shows a process of acquiring input information and outputting a sound when the first position is included in the rhythm region 44 in step S107.
  • The input information determination unit 55 acquires information (a rhythm set) indicating the correspondence between each of the plurality of rhythm areas 44 and a type of rhythm sound, based on the parameter previously set via the setting area 41i (step S501).
  • There are a plurality of rhythm sets, and each of the plurality of rhythm regions 44 is associated with one of the rhythm sound types included in a given rhythm set. Each rhythm region 44 is likewise associated with one of the rhythm sound types included in each of the other rhythm sets.
  • The types of rhythm sound are, for example, drum, percussion, Japanese drum, and animal cries, and the parameter used to select the rhythm set is set according to the orientation of the position input device 20 when the setting area 41i is pointed to.
  • The input information determination unit 55 acquires, as input information, the type of rhythm sound corresponding to the rhythm region 44 that includes the first position, based on the acquired rhythm set (step S502). Then, the audio output unit 59 outputs the acquired type of rhythm sound at the set volume (step S503).
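The rhythm-set lookup of steps S501 and S502 amounts to a two-level table: the parameter set via the setting area 41i selects a set, and the rhythm region index selects a sound within it. A minimal sketch follows; the set names and sounds are illustrative, not taken from the document.

```python
# Hypothetical rhythm sets: each rhythm region index maps to one sound per set.
RHYTHM_SETS = {
    "drum":   {0: "kick", 1: "snare", 2: "hi-hat"},
    "animal": {0: "dog",  1: "cat",   2: "bird"},
}

def rhythm_sound(set_name: str, region_index: int) -> str:
    """Step S502: the sound for the rhythm region containing the first position."""
    return RHYTHM_SETS[set_name][region_index]
```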
  • FIG. 14 is a flow chart showing an example of processing for the character area 46.
  • FIG. 14 shows a process of acquiring input information and outputting voice when the first position is included in the character area 46 in step S107.
  • The input information determination unit 55 determines whether the second position is included in any of the plurality of switching areas 47 (step S601). If the second position is not included in any switching area 47 (N in step S601), the input information determination unit 55 selects the default candidate set from the plurality of candidate sets (step S602). Each of the plurality of character areas 46 is associated with one of the candidate values included in a given candidate set, and is likewise associated with one of the candidate values included in each of the other candidate sets.
  • If the second position is included in one of the switching areas 47, the input information determination unit 55 selects, from the plurality of candidate sets, the candidate set corresponding to the switching area 47 that includes the second position (step S603). When a candidate set has been selected, the input information determination unit 55 acquires, as the input value, the candidate value corresponding to the character area 46 that includes the first position from the selected candidate set (step S604). Further, the input information determination unit 55 acquires an instruction for the type and pitch of the output voice based on the orientation of the first device (step S605).
  • For example, the input information determination unit 55 divides the 360-degree range of the angle A into two ranges, and may acquire a male voice as the voice-type instruction when the angle A of the first device falls in one range, and a female voice when it falls in the other. Further, the input information determination unit 55 may acquire an instruction for the pitch of the voice based on the difference between a reference angle and the angle A within each range.
  • The audio output unit 59 outputs the voice corresponding to the acquired input value, with the acquired voice type and pitch (step S606).
  • In this way, the switching areas 47 can be used to switch the candidate set of input values, so that many types of information can be input even with a small number of character areas 46. Further, in the process shown in FIG. 14, the candidate set is switched by leaving the position input device 20 in a switching area 47. Since the user can always see, from the switching area 47 in which the position input device 20 is placed, which candidate set (for example, voiced or semi-voiced characters) is being input, information can be input intuitively by pointing to the printed areas. Further, by using the orientation of the position input device 20, the pitch of the voice can be adjusted easily.
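The candidate-set switching of steps S601 to S604 can be modelled as indexing the same character area into different tables, depending on which switching area holds the second device. A sketch under assumed set names and values:

```python
# Hypothetical candidate sets: the switching area occupied by the second device
# (or None if no switching area is occupied) selects the table; the character
# area containing the first device selects the value within it.
CANDIDATE_SETS = {
    None:          ["ka", "ki", "ku"],   # default set (step S602)
    "voiced":      ["ga", "gi", "gu"],   # e.g. voiced (dakuten) characters
    "semi_voiced": ["pa", "pi", "pu"],   # e.g. semi-voiced (handakuten) characters
}

def input_value(char_index, switching_area=None):
    """Steps S602-S604: pick the candidate set, then the candidate value."""
    selected = CANDIDATE_SETS.get(switching_area, CANDIDATE_SETS[None])
    return selected[char_index]
```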
  • For the free area 42, the input information determination unit 55 acquires position correspondence information indicating the relationship between positions and the parameter item set via the free area 42, and acquires the input value based on this correspondence information and the acquired position (x-coordinate and y-coordinate) of the first device. The application execution unit 58 then sets the acquired input value in the parameter item.
  • FIG. 15 is a flow chart showing an example of processing for the faucet region 48.
  • FIG. 15 shows a process of acquiring input information and outputting sound when the first position is included in the faucet region 48 in step S107. This process is executed according to the amount of change in the orientation of the first device.
  • The input information determination unit 55 determines whether the previously acquired first position is included in the faucet area 48 (step S701). If the previous first position is not included in the faucet area 48 (N in step S701), the input information determination unit 55 stores the current orientation of the first device as the initial orientation (step S702).
  • The input information determination unit 55 then acquires an instruction to turn the running-water mode on or off based on the difference (amount of orientation change) between the initial orientation and the current orientation of the first device (step S703). More specifically, the input information determination unit 55 acquires an instruction to turn the running-water mode on when the amount of change is equal to or greater than a threshold value, and an instruction to turn it off when the amount of change is less than the threshold value.
  • The application execution unit 58 turns on the running-water mode, and the audio output unit 59 outputs the sound of flowing water (step S704). Further, the application execution unit 58 determines the pitch of the sound to be output when the glass region 49 is pointed to, according to the period during which the running-water mode has been on (step S705).
  • The sound output when the glass area 49 is pointed to corresponds to the sound produced by striking a glass containing water.
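The on/off decision of step S703 reduces to thresholding the orientation change since the device entered the faucet area. A sketch with an assumed threshold:

```python
# Hypothetical sketch of step S703: the running-water mode turns on when the
# device has rotated at least a threshold amount since entering the faucet area.
CHANGE_THRESHOLD_DEG = 30.0  # assumed threshold; not specified in the text

def running_water_on(initial_deg: float, current_deg: float) -> bool:
    return abs(current_deg - initial_deg) >= CHANGE_THRESHOLD_DEG
```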
  • In other areas as well, a set value may be increased or decreased according to the amount of change in the orientation of the position input device 20.
  • For example, the input information determination unit 55 may determine the amount of change in the volume, or the raising or lowering of the key (for example, the reference sound) of the keyboard area 43, according to the amount of rotation of the position input device 20 on the setting areas 41a and 41b, respectively.
  • The application execution unit 58 may then set the volume, or the key of the keyboard area 43, based on the determined increase or decrease and the previously set volume or key. Changing a set value such as the volume according to the amount of rotation of the position input device 20 enables intuitive operation. This effect can also be obtained by the process shown in FIG. 15.
  • FIG. 16 is a diagram showing an example of the sheet 31d according to the second embodiment.
  • A 9 × 9 grid of squares 80 is printed on the sheet 31d shown in FIG. 16. One of the four corners is the first determination area 81, and another is the second determination area 82.
  • The application execution unit 58 executes the processing of a game in which the position input device 20 travels on the sheet 31d.
  • The application execution unit 58 starts the game from one of a plurality of initial states stored in advance in the storage unit 12, and changes the values of internal variables and the position of the position input device 20 according to the user's operations.
  • The initial position of the position input device 20 in each initial state is also predetermined, and the position input device 20 propels itself toward the initial position at the start of the game.
  • FIG. 17 is a diagram showing an example of a process of acquiring input information and controlling the position input device 20, corresponding to steps S107 and S108 of FIG. 8.
  • When the first position is included in the first determination area 81 (Y in step S901) and the second position is included in the second determination area 82 (Y in step S902), the input information determination unit 55 acquires a restart instruction as the input value (step S903). When the first position is not included in the first determination area 81 (N in step S901), or the second position is not included in the second determination area 82 (N in step S902), the input information determination unit 55 acquires an input value according to the areas that include the first position and the second position (step S904). The restart instruction may also be acquired as the input value when the first position is included in the second determination area 82 and the second position is included in the first determination area 81.
  • When the restart instruction is acquired as the input value (Y in step S906), the application execution unit 58 initializes the variables of the game currently being executed and controls the position input device 20 so that it moves to the initial position (step S907). On the other hand, if the restart instruction is not acquired as the input value (N in step S906), the application execution unit 58 continues the game processing (step S908).
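The restart test of steps S901 to S903, including the swapped-device variant, can be sketched as an order-insensitive check that the two device positions cover both determination areas. The area names here are illustrative:

```python
# Hypothetical sketch of steps S901-S903: restart when the two devices occupy
# the first and second determination areas, in either assignment.
def is_restart(area_of_first: str, area_of_second: str) -> bool:
    return {area_of_first, area_of_second} == {"determination_1", "determination_2"}
```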
  • In this way, information can be input by arranging the two position input devices 20 at special positions. As a result, the area on the sheet 31 can be used more effectively.
  • In the embodiments above, the position input device 20 can propel itself, but it does not have to be self-propelled. Further, the present invention may be used not only for sound output and game restart but also for general-purpose information input.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This input system offers an improved input interface for a sheet which has printed thereon a pattern having positional information encoded therein. The input system comprises: a sheet (31) on which a pattern (71) having a position encoded therein is to be printed; an input device (20) which includes a camera (24) for photographing said pattern; an acquisition means for acquiring a position and an orientation of the input device on the basis of the pattern contained in an image captured by the camera; and an execution means for executing a process on the basis of the position and orientation acquired by the acquisition means.

Description

Input system, input method, and program
The present invention relates to an input system, an input method, and a program.
In an existing input system in which a pattern encoding position information is printed on a sheet, the pattern photographed by a camera provided at the tip of a pen or the like is decoded to acquire the position information. The position information indicates coordinates within the sheet, and the acquired position information indicates the position on the sheet pointed to by the pen tip or the like. The input system determines which of a plurality of predetermined areas contains the acquired position, and inputs information according to the determined area. The input information is used in subsequent processing.
Patent Document 1 discloses reading a paper on which a game selection area is printed with a camera built into an input device, and starting a game according to the reading result.
International Publication No. 2018/025467
With the conventional input method, the amount of information that can be input in a single operation is small. To increase the amount of information, it becomes necessary, for example, to increase the number of areas.
The present invention has been made in view of the above problem, and an object thereof is to provide a technique for improving the input interface for a sheet on which a pattern encoding position information is printed.
To solve the above problem, an input system according to the present invention includes: a sheet on which a position-encoded pattern is printed; an input device including a camera that photographs the pattern; acquisition means for acquiring the position and orientation of the input device based on the pattern contained in an image captured by the camera; and execution means for executing processing based on the position and orientation acquired by the acquisition means.
An input method according to the present invention includes: a step of acquiring the position and orientation of an input device, recognized from an image of a sheet on which a position-encoded pattern is printed, the image being captured by the input device; and a step of executing processing based on the acquired position and orientation.
A program according to the present invention causes a computer to function as: acquisition means for acquiring the position and orientation of an input device, recognized from an image of a sheet on which a position-encoded pattern is printed, the image being captured by the input device; and execution means for executing processing based on the acquired position and orientation.
According to the present invention, the input interface for a sheet on which a pattern encoding position information is printed is improved.
In one embodiment of the present invention, the execution means may output a sound and determine, based on the acquired position and orientation, a parameter of the sound output in response to the user's operation.
In one embodiment of the present invention, the sheet includes a setting area, and the execution means may determine, when the position of the input device is included in the setting area, one of the type, pitch, and volume of the sound output in response to the user's operation, based on the acquired orientation.
In one embodiment of the present invention, the execution means may determine, when the position of the input device is included in the setting area, one of the type, pitch, and volume of the sound output in response to the user's operation, based on the acquired orientation, and output the determined sound.
In one embodiment of the present invention, the sheet further includes a plurality of playing areas; the execution means outputs, when the position of the input device is in one of the plurality of playing areas, a sound whose pitch corresponds to the playing area that includes the position of the input device; and, when the position of the input device is included in the setting area, the execution means may change one of the type, pitch, volume, and musical scale of the output sound based on the acquired orientation.
In one embodiment of the present invention, the execution means may determine a parameter of the output sound according to the amount of change in the acquired orientation while the acquired position remains within a predetermined area.
In one embodiment of the present invention, the execution means may determine, when the acquired position is within a predetermined range, a parameter of the output sound based on the orientation of the input device with respect to the sheet, as acquired by the input device.
In one embodiment of the present invention, the execution means may select information based on the acquired position and orientation, and execute processing based on the selected information.
FIG. 1 is a diagram showing an example of an operation system according to the first embodiment.
FIG. 2 is a diagram showing the hardware configuration of the operation system.
FIG. 3 is a diagram showing the bottom surface of an input device.
FIG. 4 is a diagram showing an example of a sheet and input devices.
FIG. 5 is a diagram showing an example of a sheet.
FIG. 6 is a diagram schematically showing the pattern on a sheet.
FIG. 7 is a block diagram showing the functions realized by the operation system.
FIG. 8 is a flow chart showing an outline of the processing of the operation system.
FIG. 9 is a diagram showing another example of a sheet.
FIG. 10 is a diagram showing another example of a sheet.
FIG. 11 is a flow chart showing an example of processing for the setting areas.
FIG. 12 is a flow chart showing an example of processing for the keyboard areas.
FIG. 13 is a flow chart showing an example of processing for the rhythm areas.
FIG. 14 is a flow chart showing an example of processing for acquiring input information and outputting sound for the character areas.
FIG. 15 is a flow chart showing an example of processing for acquiring input information for the faucet area.
FIG. 16 is a diagram showing an example of a sheet according to the second embodiment.
FIG. 17 is a diagram showing an example of processing for acquiring input information and controlling an input device.
FIG. 18 is a diagram showing an example of a cover.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. Components having the same function are given the same reference numerals, and duplicate description is omitted. In the embodiments of the present invention, the user holds a self-propelled input device and inputs information by pointing to areas, and processing is executed according to the input information.
[First Embodiment]
FIG. 1 is a diagram showing an example of the operation system according to the first embodiment. The operation system according to the present invention includes a device control device 10, position input devices 20a and 20b, a controller 17, and a cartridge 18. The position input devices 20a and 20b are self-propelled devices each having a camera 24, and both have the same functions. Hereinafter, these position input devices 20a and 20b are referred to as position input devices 20 unless they need to be distinguished. The device control device 10 controls the position input devices 20 wirelessly. The device control device 10 has a recess 32, and when a position input device 20 is fitted into the recess 32, the device control device 10 charges it. The controller 17 is an input device that acquires the user's operations and is connected to the device control device 10 by a cable. The cartridge 18 has built-in non-volatile memory.
FIG. 2 is a diagram showing an example of the hardware configuration of the operation system according to the embodiment of the present invention. The device control device 10 includes a processor 11, a storage unit 12, a communication unit 13, and an input/output unit 14. The position input device 20 includes a processor 21, a storage unit 22, a communication unit 23, a camera 24, and two motors 25. The device control device 10 may be a dedicated device optimized for controlling the position input device 20, or may be a general-purpose computer.
The processor 11 operates according to a program stored in the storage unit 12 and controls the communication unit 13, the input/output unit 14, and the like. The processor 21 operates according to a program stored in the storage unit 22 and controls the communication unit 23, the camera 24, the motors 25, and the like. The programs are provided stored in a computer-readable storage medium such as the flash memory in the cartridge 18, but may also be provided via a network such as the Internet.
The storage unit 12 is composed of DRAM and non-volatile memory built into the device control device 10, the non-volatile memory in the cartridge 18, and the like. The storage unit 22 is composed of DRAM, non-volatile memory, and the like. The storage units 12 and 22 store the above programs. The storage units 12 and 22 also store information and computation results input from the processors 11 and 21, the communication units 13 and 23, and the like.
The communication units 13 and 23 are composed of integrated circuits, antennas, and the like for communicating with other devices. The communication units 13 and 23 have a function of communicating with each other according to, for example, the Bluetooth (registered trademark) protocol. Under the control of the processors 11 and 21, the communication units 13 and 23 input information received from other devices to the processors 11 and 21 and the storage units 12 and 22, and transmit information to other devices. The communication unit 13 may also have a function of communicating with other devices via a network such as a LAN.
The input/output unit 14 includes a circuit for acquiring information from input devices such as the controller 17, and a circuit for controlling output devices such as an audio output device and an image display device. The input/output unit 14 acquires an input signal from an input device and inputs the information converted from that signal to the processor 11 and the storage unit 12. The input/output unit 14 also outputs audio to a speaker and images to a display device under the control of the processor 11 and the like.
The motors 25 are so-called servomotors whose rotation direction, rotation amount, and rotation speed are controlled by the processor 21.
The camera 24 is arranged so as to photograph the area below the position input device 20, and photographs the pattern 71 (see FIG. 6) printed on the sheet 31 (see FIGS. 4 and 5) on which the position input device 20 is placed. In the present embodiment, a pattern 71 recognizable in the infrared frequency domain is printed on the sheet 31, and the camera 24 captures an infrared image.
FIG. 3 is a diagram showing an example of the position input device 20, viewed from below. The position input device 20 further includes a power switch 250, a switch 222, and two wheels 254. One motor 25 is assigned to each of the two wheels 254, and each motor 25 drives its assigned wheel 254.
FIG. 4 is a diagram showing an example of the sheet 31 and the position input devices 20a and 20b. As shown in FIG. 4, a cover 75 is attached to the position input device 20. The cover 75 has a shape that does not obstruct the camera 24 when it photographs the sheet 31, and that lets the user easily recognize the orientation of the position input device 20. In the example of FIG. 4, the cover 75 is placed over the position input device 20 and covers everything except its lower surface. FIG. 18 is a diagram showing an example of the cover 75, showing its front and side. The cover 75 shown in FIG. 18 is placed over the position input device 20a from above. The cover 75 makes it easy for the user to recognize the front of the position input device 20.
In the example of FIG. 4, the user holds the position input device 20a with the right hand and the position input device 20b with the left hand. When the user places a position input device 20 on an area of the sheet 31, information corresponding to that area is input, and a sound corresponding to the input information is output from the device control device 10.
FIG. 5 is a diagram showing an example of the sheet 31. An image that the user can see is printed on the sheet 31. On the sheet 31a shown in FIG. 5, a plurality of setting areas 41a to 41i, a plurality of keyboard areas 43, a plurality of rhythm areas 44, and a plurality of chord areas 45 are provided, and the user can see these areas. A pattern 71 readable by the camera 24 is also printed on the sheet 31.
FIG. 6 is a diagram schematically showing the pattern 71 on the sheet 31. Patterns 71 of a predetermined size (for example, 0.2 mm square) are arranged in a matrix on the sheet 31. Each pattern 71 is an image encoding the coordinates of the position where that pattern 71 is placed. The sheet 31 is assigned an area corresponding to its size within the coordinate space that the encoded coordinates can represent. In the operation system according to the present embodiment, the camera 24 of the position input device 20 photographs the pattern 71 printed on the sheet 31 or the like, and the position input device 20 or the device control device 10 decodes the pattern 71 to acquire the coordinates. The position of the position input device 20 on the sheet 31 or the like is thereby recognized. Further, the position input device 20 or the device control device 10 detects the orientation of the position input device 20 (for example, the angle A from a reference direction) by detecting the inclination of the pattern 71 in the image captured by the camera 24.
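The document does not detail how the angle A is computed beyond "the inclination of the pattern in the image"; one plausible sketch, assuming the dot grid's axis has already been detected as a 2-D vector in image coordinates, is:

```python
# Hypothetical sketch: the device's heading relative to the sheet's reference
# direction is the rotation of the printed grid axis as seen by the camera.
import math

def device_angle_deg(grid_axis_x: float, grid_axis_y: float) -> float:
    """Angle A, in degrees, of the detected grid axis in the camera image."""
    return math.degrees(math.atan2(grid_axis_y, grid_axis_x))
```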
The plurality of keyboard areas 43 are arranged side by side in the horizontal direction. The keyboard areas 43 are areas for instructing the output of performance sounds. When a position input device 20 is placed on a keyboard area 43, a sound is output; the further to the right the keyboard area 43 on which the position input device 20 is placed, the higher the output sound.
The plurality of rhythm areas 44 are arranged above the keyboard areas 43 in FIG. 5. The rhythm areas 44 are areas for instructing the output of performance sounds. When a position input device 20 is placed on one of the rhythm areas 44, the sound corresponding to that rhythm area 44 is output from the device control device 10.
The plurality of chord areas 45 are arranged to the right of the rhythm areas 44 in FIG. 5. The chord areas 45 are areas for instructing the output of performance sounds. When a position input device 20 is placed on one of the chord areas 45, a chord whose pitch corresponds to that chord area 45 is output from the device control device 10.
 複数の設定領域41a~41iは、鍵盤領域43またはリズム領域44に位置入力デバイス20が配置された際に出力される音のパラメータを設定する指示を取得するための領域である。以下では、区別が不要な場合にはこれらを設定領域41と呼ぶ。設定領域41は出力される音のパラメータに対応する。複数の設定領域41のいずれかの上に位置入力デバイス20が配置されると、その配置された設定領域41に対応するパラメータに、その位置入力デバイス20の方向に応じた値が設定される。 The plurality of setting areas 41a to 41i are areas for acquiring instructions for setting sound parameters output when the position input device 20 is arranged in the keyboard area 43 or the rhythm area 44. In the following, when distinction is unnecessary, these are referred to as setting areas 41. The setting area 41 corresponds to the parameter of the output sound. When the position input device 20 is arranged on any one of the plurality of setting areas 41, a value corresponding to the direction of the position input device 20 is set in the parameter corresponding to the arranged setting area 41.
 設定領域41aは出力される音のボリュームのパラメータを指定する領域であり、設定領域41bは鍵盤領域43に位置入力デバイス20が配置された際に出力される音の高さのパラメータを調整する領域である。設定領域41dは複数の鍵盤領域43に対応付けられる音についての音階のスケール(例えばメジャー、マイナー、ペンタトニック、琉球音階のいずれか)のパラメータを設定する領域である。設定領域41h,41i,41gは、それぞれ、鍵盤領域43、リズム領域44、コード領域45に位置入力デバイス20が配置された際に出力される音の種類のパラメータを設定する領域である。 The setting area 41a is an area for designating the volume parameter of the output sound, and the setting area 41b is an area for adjusting the pitch parameter of the sound output when the position input device 20 is arranged in the keyboard area 43. The setting area 41d is an area for setting the scale parameter (for example, major, minor, pentatonic, or Ryukyu scale) for the sounds associated with the plurality of keyboard areas 43. The setting areas 41h, 41i, and 41g are areas for setting the parameter of the type of sound output when the position input device 20 is arranged in the keyboard area 43, the rhythm area 44, and the chord area 45, respectively.
 本実施形態にかかる操作システムでは、ユーザが把持する位置入力デバイス20をシート31上の領域に配置すると、その配置された領域および位置入力デバイス20の向きに応じた情報が入力され、その入力された情報に応じて音が出力される。以下では、この操作システムの動作について説明する。 In the operation system according to the present embodiment, when the position input device 20 held by the user is placed in an area on the sheet 31, information corresponding to that area and the orientation of the position input device 20 is input, and a sound is output according to the input information. The operation of this operation system will be described below.
 図7は、操作システムが実現する機能を示すブロック図である。操作システムは、機能的に、位置取得部51、情報入力部52、アプリケーション実行部58、音声出力部59を含む。情報入力部52は機能的に領域判定部53、入力情報決定部55を含む。これらの機能は、主に、デバイス制御装置10に含まれるプロセッサ11が記憶部12に格納されるプログラムを実行し、通信部13を介して位置入力デバイス20を制御することにより実現される。また、位置取得部51などの機能の一部は、位置入力デバイス20に含まれるプロセッサ21が記憶部22に格納されるプログラムを実行し、通信部23を介してデバイス制御装置10とデータをやり取りし、カメラ24やモータ25を制御することにより実現されてよい。 FIG. 7 is a block diagram showing the functions realized by the operation system. The operation system functionally includes a position acquisition unit 51, an information input unit 52, an application execution unit 58, and an audio output unit 59. The information input unit 52 functionally includes an area determination unit 53 and an input information determination unit 55. These functions are realized mainly by the processor 11 included in the device control device 10 executing a program stored in the storage unit 12 and controlling the position input device 20 via the communication unit 13. Further, some functions, such as part of the position acquisition unit 51, may be realized by the processor 21 included in the position input device 20 executing a program stored in the storage unit 22, exchanging data with the device control device 10 via the communication unit 23, and controlling the camera 24 and the motor 25.
 位置取得部51は、カメラ24により撮影された画像から、座標が符号化されたパターン71を認識し、そのパターン71が示す座標から、位置入力デバイス20の位置する座標と位置入力デバイス20の向きとを取得する。 The position acquisition unit 51 recognizes the coordinate-encoded pattern 71 in the image captured by the camera 24, and acquires, from the coordinates indicated by the pattern 71, the coordinates at which the position input device 20 is located and the orientation of the position input device 20.
 情報入力部52は、位置取得部51により取得された位置および向きに基づいて、ユーザにより入力される入力情報を取得する。情報入力部52に含まれる領域判定部53は、位置入力デバイス20の位置がどの領域に含まれるかを判定する。情報入力部52に含まれる入力情報決定部55は、位置入力デバイス20の位置を含む領域に基づいて入力情報を決定する。また、入力情報決定部55は、位置入力デバイス20の位置を含む領域が予め定められた領域である場合には、その領域と、その位置入力デバイス20の向きとに基づいて、入力情報を決定する。入力情報決定部55は、位置入力デバイス20のうち一方の位置がある領域に含まれるか否かと、位置入力デバイス20のうち他方の位置が他の1つの領域に含まれるか否かとに基づいて、入力情報を決定する。 The information input unit 52 acquires input information input by the user based on the position and orientation acquired by the position acquisition unit 51. The area determination unit 53 included in the information input unit 52 determines which area the position of the position input device 20 is included in. The input information determination unit 55 included in the information input unit 52 determines the input information based on the area including the position of the position input device 20. Further, when the area including the position of the position input device 20 is a predetermined area, the input information determination unit 55 determines the input information based on that area and the orientation of that position input device 20. The input information determination unit 55 also determines the input information based on whether the position of one of the position input devices 20 is included in a certain area and whether the position of the other position input device 20 is included in another area.
 アプリケーション実行部58は、取得された入力情報に基づいて処理を実行する。アプリケーション実行部58は、取得された入力情報に基づいて音のパラメータを設定する。アプリケーション実行部58に含まれる音声出力部59は、設定されたパラメータに応じた音を出力する。 The application execution unit 58 executes the process based on the acquired input information. The application execution unit 58 sets sound parameters based on the acquired input information. The audio output unit 59 included in the application execution unit 58 outputs a sound according to the set parameters.
 以下ではこの操作システムが実行する処理についてより詳細に説明する。図8は、操作システムの処理の概要を示すフロー図である。以下では、説明の容易のため、位置入力デバイス20aを第1デバイス、位置入力デバイス20bを第2デバイスと記載する。図8に示される処理は繰り返し実行される。 The processing executed by this operation system will be explained in more detail below. FIG. 8 is a flow chart showing an outline of processing of the operation system. Hereinafter, for the sake of simplicity, the position input device 20a will be referred to as a first device, and the position input device 20b will be referred to as a second device. The process shown in FIG. 8 is repeatedly executed.
 はじめに、位置取得部51は、第1デバイスがシート31の表面を撮影したか判定する(ステップS101)。位置取得部51は、第1デバイスのカメラ24が撮影された画像にパターン71に特有の画像要素が存在するか否かに基づいて、第1デバイスがシート31の表面を撮影したか否か判定してよい。第1デバイスがシート31の表面を撮影した場合には(ステップS101のY)、位置取得部51は、第1デバイスのシート31上の位置である第1の位置および第1デバイスの方向を取得し(ステップS102)、領域判定部53は取得された第1の位置を含む領域を検出する(ステップS103)。位置取得部51はステップS102において、第1デバイスのカメラ24により撮影されたパターン71の画像内の傾きに基づいて、第1デバイスの向きを取得する。領域判定部53はステップS103において、複数の領域のそれぞれと、シート31上の座標範囲とを関連付ける情報と、第1の位置の座標とに基づいて、第1の位置が含まれる領域を選択する。 First, the position acquisition unit 51 determines whether the first device has photographed the surface of the sheet 31 (step S101). The position acquisition unit 51 may determine whether the first device has photographed the surface of the sheet 31 based on whether an image element peculiar to the pattern 71 is present in the image captured by the camera 24 of the first device. When the first device has photographed the surface of the sheet 31 (Y in step S101), the position acquisition unit 51 acquires the first position, which is the position of the first device on the sheet 31, and the orientation of the first device (step S102), and the area determination unit 53 detects the area including the acquired first position (step S103). In step S102, the position acquisition unit 51 acquires the orientation of the first device based on the inclination, within the image, of the pattern 71 captured by the camera 24 of the first device. In step S103, the area determination unit 53 selects the area including the first position based on the coordinates of the first position and on information associating each of the plurality of areas with a coordinate range on the sheet 31.
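The lookup of step S103 can be sketched as follows; the area names and coordinate ranges are assumptions for illustration, not values from the publication:

```python
# Sketch of step S103: each named area is associated with a rectangular
# coordinate range on the sheet, and the area containing the decoded first
# position is found by lookup.
AREAS = {
    "keyboard_1": (0, 120, 40, 160),    # (x_min, x_max, y_min, y_max)
    "keyboard_2": (120, 240, 40, 160),
    "rhythm_1": (0, 80, 160, 240),
}

def find_area(position):
    """Return the name of the area containing the position, or None."""
    x, y = position
    for name, (x0, x1, y0, y1) in AREAS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None   # the device is outside every defined area
```

Because patterns on different sheets encode disjoint coordinate ranges, the same lookup can also cover areas spread over multiple sheets.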
 第2デバイスに対してもステップS101~S103に相当する処理が行われる。第2デバイスがシート31の表面を撮影した場合には(ステップS104のY)、位置取得部51は、第2デバイスのシート31上の位置である第2の位置および第2デバイスの方向を取得し(ステップS105)、領域判定部53は取得された第2の位置を含む領域を検出する(ステップS106)。 The processing corresponding to steps S101 to S103 is also performed for the second device. When the second device has photographed the surface of the sheet 31 (Y in step S104), the position acquisition unit 51 acquires the second position, which is the position of the second device on the sheet 31, and the orientation of the second device (step S105), and the area determination unit 53 detects the area including the acquired second position (step S106).
 領域判定部53が判定する領域は、単一のシート31上にある設定領域41a~41i、鍵盤領域43、リズム領域44、コード領域45だけでなくてよい。領域判定部53は第1の位置または第2の位置が他のシート31上の領域に含まれるか否かを判定してもよい。互いに異なるシート31には、互いに異なる範囲の座標が符号化されたパターン71が印刷されている。これにより、複数のシート31上に領域が配置されていても、単に第1の位置または第2の位置の座標により領域が判定される。 The area determined by the area determination unit 53 does not have to be limited to the setting areas 41a to 41i, the keyboard area 43, the rhythm area 44, and the chord area 45 on a single sheet 31. The area determination unit 53 may determine whether or not the first position or the second position is included in the area on the other sheet 31. Patterns 71 in which coordinates in different ranges are encoded are printed on sheets 31 that are different from each other. As a result, even if the area is arranged on the plurality of sheets 31, the area is determined simply by the coordinates of the first position or the second position.
 図9は、シート31の他の一例を示す図である。図9に示されるシート31bの上には、複数の文字領域46と、複数の切替領域47と、フリー領域42とが設けられ、またユーザはそれらの領域を視認できる。ユーザが図9に示すシート31bの上に位置入力デバイス20を配置して文字領域46および切替領域47を指し示すと、操作システムは指し示された領域に対応する文字の音声を出力する。 FIG. 9 is a diagram showing another example of the sheet 31. A plurality of character areas 46, a plurality of switching areas 47, and a free area 42 are provided on the sheet 31b shown in FIG. 9, and the user can visually recognize these areas. When the user places the position input device 20 on the sheet 31b shown in FIG. 9 and points to a character area 46 and a switching area 47, the operation system outputs the voice of the character corresponding to the pointed-to areas.
 複数の文字領域46は、文字を選択するための領域である。文字領域46の数は、入力が可能な文字の種類の数より少ない。具体的には、文字領域46は、日本語の主に清音を示す文字が表示される領域である。文字領域46は、濁音、半濁音、拗音そのものを示す領域ではなく、後述の切替領域47と文字領域46との組み合わせにより濁音、半濁音、拗音、またそれらの組み合わせを入力することが可能となる。 The plurality of character areas 46 are areas for selecting characters. The number of character areas 46 is smaller than the number of character types that can be input. Specifically, the character areas 46 are areas displaying characters mainly for the unvoiced sounds (seion) of Japanese. The character areas 46 are not areas that directly indicate voiced sounds, semi-voiced sounds, or contracted sounds (yoon) themselves; rather, combining a character area 46 with a switching area 47, described later, makes it possible to input voiced sounds, semi-voiced sounds, contracted sounds, and their combinations.
 複数の切替領域47は、文字領域46の数より多くの種類の文字を入力するための領域である。入力される情報(例えば文字)の候補値のセット(候補セット)が複数存在し、候補セットの数は切替領域47の数に1を加えた数である。複数の文字領域46のそれぞれは、ある候補セットに含まれる複数の候補値のいずれかに対応づけられている。また他の候補セットに含まれる複数の候補値のいずれかにも対応づけられている。切替領域47に一つの位置入力デバイス20が配置された状態で、複数の文字領域46のいずれかが他の位置入力デバイス20により指し示されると、切替領域47に対応する候補セットに含まれる複数の候補値のうち、指し示された文字領域46に対応する候補値が入力情報として選択される。ここで、複数の切替領域47は、具体的には、濁音を示す領域と、半濁音を示す領域と、拗音を示す領域と、促音を示す領域と、濁音と拗音との組み合わせを示す領域と、半濁音と拗音を示す領域とを含む。これらの領域に関する処理については後述する。 The plurality of switching areas 47 are areas for inputting more types of characters than the number of character areas 46. There are a plurality of sets (candidate sets) of candidate values of the input information (for example, characters), and the number of candidate sets is the number of switching areas 47 plus one. Each of the plurality of character areas 46 is associated with one of the plurality of candidate values included in a given candidate set, and likewise with one of the candidate values included in each of the other candidate sets. When one position input device 20 is placed in a switching area 47 and one of the plurality of character areas 46 is pointed to by another position input device 20, the candidate value corresponding to the pointed-to character area 46 is selected as the input information from among the plurality of candidate values included in the candidate set corresponding to that switching area 47. Here, the plurality of switching areas 47 specifically include an area indicating voiced sounds, an area indicating semi-voiced sounds, an area indicating contracted sounds (yoon), an area indicating geminate consonants (sokuon), an area indicating combinations of voiced and contracted sounds, and an area indicating combinations of semi-voiced and contracted sounds. The processing related to these areas will be described later.
 シート31bに含まれるフリー領域42は、その上に位置入力デバイス20が配置された際に、その平面的な位置に応じた設定値を入力するための領域である。図9のフリー領域42上には破線によりx軸Lxおよびy軸Lyが記載されているが、これらは記載されていなくてもよい。フリー領域42においては、一つの入力デバイス20のx方向の移動およびy方向の移動に基づいて、複数の項目の設定が設定可能であってよい。例えば、入力デバイス20の位置のy座標に応じて出力される音声のピッチが決定され、x座標に応じてL/Rスピーカーに対するミキシングのボリュームが決定されてもよい。このような例では位置入力デバイス20の向きは用いられておらず、フリー領域42は、x軸方向の動きに対応するスライダーとy軸方向に対応するスライダーとの2つに相当する入力を1つの位置入力デバイス20で可能にする。 The free area 42 included in the sheet 31b is an area for inputting a set value according to the planar position of the position input device 20 when the device is placed on it. Although the x-axis Lx and the y-axis Ly are drawn as broken lines on the free area 42 in FIG. 9, they need not be drawn. In the free area 42, settings for a plurality of items may be made based on the movement of one input device 20 in the x direction and in the y direction. For example, the pitch of the output sound may be determined according to the y coordinate of the position of the input device 20, and the mixing volume for the L/R speakers according to the x coordinate. In such an example, the orientation of the position input device 20 is not used, and the free area 42 allows a single position input device 20 to provide input equivalent to two sliders, one corresponding to movement in the x-axis direction and one corresponding to movement in the y-axis direction.
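The two-slider behavior described above can be sketched as follows; the coordinate ranges, the base pitch, and the panning law are assumptions for illustration:

```python
# Sketch of the free area 42 acting as two sliders at once: the y coordinate of
# the device sets the pitch and the x coordinate sets the L/R mixing balance.
FREE_X0, FREE_X1 = 0.0, 100.0   # assumed x extent of the free area
FREE_Y0, FREE_Y1 = 0.0, 100.0   # assumed y extent of the free area

def free_area_values(x, y):
    """Map a device position inside the free area to (pitch_hz, left, right)."""
    fx = (x - FREE_X0) / (FREE_X1 - FREE_X0)   # 0.0 at the left edge, 1.0 at the right
    fy = (y - FREE_Y0) / (FREE_Y1 - FREE_Y0)   # 0.0 at the bottom edge, 1.0 at the top
    pitch_hz = 220.0 * (2.0 ** fy)             # y sweeps one octave of pitch
    left, right = 1.0 - fx, fx                 # x pans between the L and R speakers
    return pitch_hz, left, right
```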
 図10は、シート31の他の一例を示す図である。図10に示されるシート31cの上には、蛇口領域48と、グラス領域49とが設けられ、またユーザはそれらの領域を視認できる。蛇口領域48は、その上に配置される位置入力デバイス20の回転量により、疑似的にグラスに水を入れるか否かの指示を取得するための領域である。グラス領域49は、疑似的に水の入ったグラスの音を出力させるための領域である。これらの領域に関する処理については後述する。 FIG. 10 is a diagram showing another example of the sheet 31. A faucet region 48 and a glass region 49 are provided on the sheet 31c shown in FIG. 10, and the user can visually recognize these regions. The faucet area 48 is an area for obtaining an instruction as to whether or not to put water in the glass in a pseudo manner by the amount of rotation of the position input device 20 arranged on the faucet area 48. The glass region 49 is a region for outputting the sound of a glass containing water in a pseudo manner. The processing related to these areas will be described later.
 ステップS101からS106により、第1の位置が含まれる領域および第2の位置が含まれる領域のうち少なくとも一方が検出されると、情報入力部52は、第1の位置が含まれる領域の種類に応じた処理により入力情報を取得し、音声出力部59は音を出力する(ステップS107)。情報入力部52は、第2の位置が含まれる領域の種類に応じた処理により入力情報を取得し、音声出力部59は音を出力する(ステップS108)。以下では、ステップS107の処理の詳細、つまり、領域の種類に応じた処理について説明する。ステップS108の処理は、ステップS107における第1デバイスの第1の位置及び方向および第2デバイスの第2の位置及び方向を、第2デバイスの第2の位置及び方向および第1デバイスの第1の位置及び方向に変えるだけでよいため、詳細の説明は省略する。 When at least one of the area including the first position and the area including the second position is detected in steps S101 to S106, the information input unit 52 acquires the input information by processing according to the type of the area including the first position, and the audio output unit 59 outputs a sound (step S107). The information input unit 52 likewise acquires input information by processing according to the type of the area including the second position, and the audio output unit 59 outputs a sound (step S108). The details of the process of step S107, that is, the processing according to the type of area, will be described below. The process of step S108 is obtained from step S107 simply by exchanging the first position and orientation of the first device with the second position and orientation of the second device, so a detailed description is omitted.
 図11は、設定領域41についての処理の一例を示すフロー図である。図11は、ステップS107において第1の位置が設定領域41に含まれる場合に、入力情報を取得し音を出力する処理を示す。 FIG. 11 is a flow chart showing an example of processing for the setting area 41. FIG. 11 shows a process of acquiring input information and outputting a sound when the first position is included in the setting area 41 in step S107.
 はじめに、入力情報決定部55は、第1の位置を含む設定領域41に応じて、位置入力デバイス20の方向と入力値との対応を示す対応情報を取得する(ステップS301)。対応情報は、複数のアイテムを含み、各アイテムは方向の範囲と入力値とを関連付けるものであってよい。入力値は設定領域41に対応するパラメータの設定値となる。対応情報は設定領域41a~41iごとに予め準備されている。次に入力情報決定部55は、取得された対応情報と第1デバイスの方向とに基づいて、入力値を取得する(ステップS302)。入力情報決定部55は、例えば、対応情報に含まれるアイテムのうち、第1デバイスの方向を含む範囲を有するアイテムにおいてその範囲と関連付けられている入力値を取得する。 First, the input information determination unit 55 acquires correspondence information indicating the correspondence between the direction of the position input device 20 and the input value according to the setting area 41 including the first position (step S301). Correspondence information may include a plurality of items, and each item may associate a range of directions with an input value. The input value is a set value of the parameter corresponding to the setting area 41. Correspondence information is prepared in advance for each of the setting areas 41a to 41i. Next, the input information determination unit 55 acquires an input value based on the acquired correspondence information and the direction of the first device (step S302). The input information determination unit 55 acquires, for example, an input value associated with an item having a range including the direction of the first device among the items included in the corresponding information.
 入力値が取得されると、アプリケーション実行部58は、第1の位置が属する設定領域41に対応するパラメータに、取得された入力値を設定する(ステップS303)。 When the input value is acquired, the application execution unit 58 sets the acquired input value in the parameter corresponding to the setting area 41 to which the first position belongs (step S303).
 例えば、図5の設定領域41aに第1デバイスが配置されると、その方向に応じてボリュームが設定される。その方向とシート31の基準方向との角度Aが小さいほどボリュームが大きくなってよい。設定領域41bに第1デバイスが配置されると、その方向に応じて鍵盤領域43のうちいずれかが指し示された際に出力される音の高さ(基準音の高さ)が変化する。また、鍵盤領域43どうしの相対的な音の高さの差が維持されつつ鍵盤領域43のそれぞれの高さが方向に応じて設定される。 For example, when the first device is placed in the setting area 41a of FIG. 5, the volume is set according to its orientation. The smaller the angle A between that orientation and the reference direction of the sheet 31, the larger the volume may be. When the first device is placed in the setting area 41b, the pitch of the sound output when one of the keyboard areas 43 is pointed to (the reference pitch) changes according to the orientation. The pitch of each keyboard area 43 is set according to the orientation while the relative pitch differences between the keyboard areas 43 are maintained.
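Steps S301 to S303 for the volume setting area 41a might be sketched as follows; the concrete angle ranges and volume values are assumptions, chosen so that a smaller angle A gives a larger volume as described above:

```python
# Sketch of steps S301-S303: the correspondence information is a list of items,
# each pairing a range of device angles A with an input value; the item whose
# range contains the measured angle supplies the value (step S302).
VOLUME_CORRESPONDENCE = [
    ((0.0, 90.0), 1.00),     # angle A in [0, 90)   -> full volume
    ((90.0, 180.0), 0.66),
    ((180.0, 270.0), 0.33),
    ((270.0, 360.0), 0.00),  # device turned almost fully -> muted
]

def input_value_for_angle(correspondence, angle_a):
    """Pick the input value whose angle range contains angle_a."""
    angle_a %= 360.0
    for (lo, hi), value in correspondence:
        if lo <= angle_a < hi:
            return value
    raise ValueError("angle not covered by the correspondence information")
```

A separate correspondence table of this form would be prepared for each of the setting areas 41a to 41i.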
 設定領域41dに第1デバイスが配置されると、アプリケーション実行部58は、その第1デバイスが配置された方向に応じて複数の鍵盤領域43に対応付けられる音についての音階のスケール(例えばメジャー、マイナー、ペンタトニック、琉球音階のうちいずれか)を設定してよい。設定領域41h,41i,41gに第1デバイスが配置されると、アプリケーション実行部58は、その第1デバイスが配置された方向に応じて、それぞれ鍵盤領域43、リズム領域44、コード領域45に位置入力デバイス20が配置された際に出力される音の種類を設定してよい。 When the first device is arranged in the setting area 41d, the application execution unit 58 determines the scale of the scale (for example, major, etc.) of the sound associated with the plurality of keyboard areas 43 according to the direction in which the first device is arranged. Minor, pentatonic, or Ryukyu scale) may be set. When the first device is arranged in the setting areas 41h, 41i, 41g, the application execution unit 58 is located in the keyboard area 43, the rhythm area 44, and the chord area 45, respectively, according to the direction in which the first device is arranged. The type of sound output when the input device 20 is arranged may be set.
 方向を用いて入力値を取得することにより、情報の入力に必要な領域を減らすことができる。また、位置入力デバイス20の回転はつまみの操作を連想させるため、ユーザがより直観的に値を入力することもできる。 By acquiring the input value using the direction, the area required for inputting information can be reduced. Further, since the rotation of the position input device 20 is associated with the operation of the knob, the user can input the value more intuitively.
 パラメータに入力値が設定されると、音声出力部59は、そのパラメータによる確認用の音を出力する(ステップS304)。確認用の音を出力することにより、ユーザが容易に現在の設定を認識することができ、設定がより容易になる。 When an input value is set in the parameter, the audio output unit 59 outputs a confirmation sound according to the parameter (step S304). By outputting the confirmation sound, the user can easily recognize the current setting, and the setting becomes easier.
 図12は、鍵盤領域43についての処理の一例を示すフロー図である。図12は、ステップS107において第1の位置が鍵盤領域43に含まれる場合に、入力情報を取得し音を出力する処理を示す。 FIG. 12 is a flow chart showing an example of processing for the keyboard area 43. FIG. 12 shows a process of acquiring input information and outputting a sound when the first position is included in the keyboard area 43 in step S107.
 はじめに、入力情報決定部55は、第1の位置を含む鍵盤領域43に応じて、いずれの鍵盤領域43であるか識別する情報を取得する(ステップS401)。入力情報決定部55は、第1デバイスの方向に基づいて、音の高さの変更の有無を取得する(ステップS402)。入力情報決定部55は、より具体的には、第1デバイスの方向がシートの基準方向に対して右側に所定の角度以上傾いている場合に、音の高さを半音上げる指示を入力情報として取得する。また入力情報決定部55は、第1デバイスの方向がシート31の基準方向に対して左側に所定の角度以上傾いている場合に、音の高さを半音下げる指示を入力情報として取得してもよい。 First, the input information determination unit 55 acquires information identifying which keyboard area 43 contains the first position (step S401). The input information determination unit 55 acquires whether the pitch is to be changed based on the orientation of the first device (step S402). More specifically, the input information determination unit 55 acquires, as input information, an instruction to raise the pitch by a semitone when the orientation of the first device is tilted to the right by a predetermined angle or more with respect to the reference direction of the sheet. The input information determination unit 55 may also acquire, as input information, an instruction to lower the pitch by a semitone when the orientation of the first device is tilted to the left by a predetermined angle or more with respect to the reference direction of the sheet 31.
 そして、アプリケーション実行部58は、スケールおよび基準音の高さと、取得された識別情報および音の高さの変更の有無とに基づいて、出力する音の高さを決定する(ステップS403)。より具体的には、アプリケーション実行部58は、スケールと鍵盤領域43の識別情報とに基づいて相対的な音の高さを求め、求められた相対的な音の高さと基準音の高さとに基づいて出力する音の絶対的な高さを求め、さらに音の高さの変更が有りの場合には音の高さを半音変化させる。アプリケーション実行部58に含まれる音声出力部59は、決定された高さを有し、パラメータとして設定された音の種類およびボリュームの音を出力する(ステップS404)。 Then, the application execution unit 58 determines the pitch of the sound to output based on the scale, the reference pitch, the acquired identification information, and whether a pitch change is indicated (step S403). More specifically, the application execution unit 58 obtains a relative pitch based on the scale and the identification information of the keyboard area 43, obtains the absolute pitch of the output sound based on the obtained relative pitch and the reference pitch, and, if a pitch change is indicated, further shifts the pitch by a semitone. The audio output unit 59 included in the application execution unit 58 outputs a sound with the determined pitch and with the type and volume set as parameters (step S404).
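The pitch determination of steps S402 and S403 can be sketched as follows; the major-scale intervals, the 15-degree tilt threshold, the sign convention for a rightward tilt, and the equal-tempered conversion are all assumptions for illustration:

```python
# Sketch of steps S402-S403: the keyboard area index and the scale give a
# relative pitch, the reference pitch anchors it, and a sufficient tilt of the
# device shifts the result one semitone up or down.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11, 12]   # semitone offsets for eight keyboard areas

def semitone_change(angle_a, threshold=15.0):
    """Step S402: +1 if tilted right past the threshold, -1 if left, else 0."""
    signed = (angle_a + 180.0) % 360.0 - 180.0   # fold the angle into (-180, 180]
    if signed >= threshold:
        return 1
    if signed <= -threshold:
        return -1
    return 0

def key_frequency(key_index, angle_a=0.0, reference_hz=261.63):
    """Step S403: absolute pitch from scale position, reference pitch, and tilt."""
    semitones = MAJOR_SCALE[key_index] + semitone_change(angle_a)
    return reference_hz * (2.0 ** (semitones / 12.0))
```

Changing the scale parameter (setting area 41d) or the reference pitch (setting area 41b) would amount to swapping the interval table or the `reference_hz` value.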
 これらの処理により、鍵盤領域43を指し示す際の位置入力デバイス20の方向を用いて出力される音の高さを半音上げる、または下げることができる。これにより鍵盤領域43のデザインをシンプルにすることが可能になる。なお、アプリケーション実行部58は、位置入力デバイス20の方向に応じて出力される音の長さ(出力される音に対応する音符の種類)を決定してもよい。例えば子どものユーザは位置入力デバイス20を鍵盤領域43に置く時間を正確に調整することは難しいので、方向を用いることで子どものユーザがより容易に音の長さを指示することが可能になる。 By these processes, the pitch of the output sound can be raised or lowered by a semitone using the orientation of the position input device 20 when pointing to the keyboard area 43. This makes it possible to simplify the design of the keyboard areas 43. The application execution unit 58 may also determine the length of the output sound (the type of note corresponding to the output sound) according to the orientation of the position input device 20. For example, since it is difficult for a child user to precisely control how long the position input device 20 rests on a keyboard area 43, using the orientation allows a child user to indicate the length of the sound more easily.
 図13は、リズム領域44についての処理の一例を示すフロー図である。図13は、ステップS107において第1の位置がリズム領域44に含まれる場合に、入力情報を取得し音を出力する処理を示す。 FIG. 13 is a flow chart showing an example of processing for the rhythm region 44. FIG. 13 shows a process of acquiring input information and outputting a sound when the first position is included in the rhythm region 44 in step S107.
 はじめに、入力情報決定部55は、以前に設定領域41iにより設定されたパラメータに基づいて、複数のリズム領域44のそれぞれとリズム音の種類との対応を示す情報(リズムセット)を取得する(ステップS501)。複数のリズムセットが存在し、複数のリズム領域44のそれぞれは、あるリズムセットに含まれる複数のリズム音の種類のいずれかに対応づけられている。また複数のリズム領域44のそれぞれは、他のリズムセットに含まれる複数のリズム音の種類のいずれかにも対応づけられている。リズム音の種類は、例えば、ドラム、パーカッション、和太鼓、動物の鳴き声などであり、リズムセットの選択に用いるパラメータは設定領域41iが指し示された際の位置入力デバイス20の方向に応じて設定されている。 First, the input information determination unit 55 acquires information (a rhythm set) indicating the correspondence between each of the plurality of rhythm areas 44 and a type of rhythm sound, based on the parameter previously set via the setting area 41i (step S501). There are a plurality of rhythm sets, and each of the plurality of rhythm areas 44 is associated with one of the plurality of rhythm sound types included in a given rhythm set, and likewise with one of the rhythm sound types included in each of the other rhythm sets. The types of rhythm sound are, for example, drums, percussion, Japanese drums, and animal calls, and the parameter used for selecting the rhythm set is set according to the orientation of the position input device 20 when the setting area 41i was pointed to.
 入力情報決定部55は、取得されたセットに基づいて、第1の位置を含むリズム領域44に対応するリズム音の種類を入力情報として取得する(ステップS502)。そして、音声出力部59は、設定された音量で、取得された種類のリズム音を出力する(ステップS503)。 The input information determination unit 55 acquires the type of rhythm sound corresponding to the rhythm region 44 including the first position as input information based on the acquired set (step S502). Then, the voice output unit 59 outputs the acquired type of rhythm sound at the set volume (step S503).
 図14は、文字領域46についての処理の一例を示すフロー図である。図14は、ステップS107において第1の位置が文字領域46に含まれる場合に、入力情報を取得し音声を出力する処理を示す。 FIG. 14 is a flow chart showing an example of processing for the character area 46. FIG. 14 shows a process of acquiring input information and outputting voice when the first position is included in the character area 46 in step S107.
 はじめに、入力情報決定部55は、第2の位置が複数の切替領域47のいずれかに含まれるか否か判定する(ステップS601)。第2の位置がどの切替領域47にも含まれない場合は(ステップS601のN)、入力情報決定部55は、複数の候補セットのうちデフォルトの候補セットを選択する(ステップS602)。複数の文字領域46のそれぞれは、ある候補セットに含まれる複数の候補値のいずれかに対応づけられている。また他の候補セットに含まれる複数の候補値のいずれかにも対応づけられている。 First, the input information determination unit 55 determines whether or not the second position is included in any of the plurality of switching areas 47 (step S601). If the second position is not included in any of the switching regions 47 (N in step S601), the input information determination unit 55 selects the default candidate set from the plurality of candidate sets (step S602). Each of the plurality of character areas 46 is associated with any of a plurality of candidate values included in a certain candidate set. It is also associated with any of a plurality of candidate values included in other candidate sets.
 一方、第2の位置が複数の切替領域47のいずれかに含まれる場合には(ステップS601のY)、入力情報決定部55は、複数の候補セットのうち第2の位置を含む切替領域47に応じた候補セットを選択する(ステップS603)。何らかの候補セットが選択されると、入力情報決定部55は、選択された候補セットから、第1の位置を含む文字領域46に対応する候補値を入力値として取得する(ステップS604)。また、入力情報決定部55は、第1デバイスの方向に基づいて、出力される音声の種類およびピッチの指示を取得する(ステップS605)。より具体的には、入力情報決定部55は、角度Aの360度の範囲を2つの範囲に分割し、第1デバイスの角度Aが片方の範囲にある場合には音声の種類として男性の声を指示として取得し、角度Aが他方の範囲にある場合には音声の種類として女性の声を指示として取得してよい。また、入力情報決定部55は、それぞれの範囲における基準角度と角度Aとの差に基づいて音のピッチの指示を取得してよい。 On the other hand, when the second position is included in one of the plurality of switching areas 47 (Y in step S601), the input information determination unit 55 selects, from among the plurality of candidate sets, the candidate set corresponding to the switching area 47 including the second position (step S603). When a candidate set has been selected, the input information determination unit 55 acquires, as the input value, the candidate value corresponding to the character area 46 including the first position from the selected candidate set (step S604). Further, the input information determination unit 55 acquires instructions for the type and pitch of the output voice based on the orientation of the first device (step S605). More specifically, the input information determination unit 55 divides the 360-degree range of the angle A into two ranges, and may acquire a male voice as the voice type when the angle A of the first device is in one range and a female voice when the angle A is in the other range. The input information determination unit 55 may also acquire an instruction for the pitch of the voice based on the difference between the reference angle of each range and the angle A.
 そして、音声出力部59は、取得された音声の種類およびピッチにより、取得された入力値に対応する音声を出力する(ステップS606)。 Then, the voice output unit 59 outputs the voice corresponding to the acquired input value according to the type and pitch of the acquired voice (step S606).
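The selection of steps S601 to S604 can be sketched as follows; only the "ha" row is shown, and the set names are assumptions for illustration:

```python
# Sketch of steps S601-S604: the switching area occupied by the second device
# selects a candidate set, and the character area pointed to by the first
# device selects a candidate value within that set.
DEFAULT_SET = {"ha": "は", "hi": "ひ", "fu": "ふ"}
CANDIDATE_SETS = {
    "dakuon": {"ha": "ば", "hi": "び", "fu": "ぶ"},      # voiced sounds
    "handakuon": {"ha": "ぱ", "hi": "ぴ", "fu": "ぷ"},   # semi-voiced sounds
}

def select_character(char_area, switch_area=None):
    """Return the input character for a character area, honoring the switch."""
    candidate_set = CANDIDATE_SETS.get(switch_area, DEFAULT_SET)  # steps S601-S603
    return candidate_set[char_area]                               # step S604
```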
 切替領域47を用いて、入力値の候補セットを切り替えることが可能になり、少ない文字領域46でも多くの種類の情報を入力できる。また、図14に示される処理では、位置入力デバイス20を切替領域47に置いたままにすることで候補セットの切り替えが可能である。位置入力デバイス20が置いてある切替領域47からユーザがどの候補セットの情報(例えば濁音や半濁音の文字)が入力されるかを常に確認できるため、印刷された領域を指し示すことで情報を入力する場合により直観的に入力することが可能になる。また、位置入力デバイス20の方向を用いることで、容易に音声のピッチを調整することが可能になる。 The switching areas 47 make it possible to switch between candidate sets of input values, so that many types of information can be input even with few character areas 46. In the process shown in FIG. 14, the candidate set can be switched by leaving the position input device 20 in a switching area 47. Because the user can always see, from the switching area 47 on which a position input device 20 is placed, which candidate set of information (for example, voiced or semi-voiced characters) will be input, entering information by pointing to printed areas becomes more intuitive. Further, using the orientation of the position input device 20 makes it easy to adjust the pitch of the voice.
 なお、これまでの説明ではフリー領域42における処理の記載を省略している。ステップS107において第1の位置がフリー領域42に含まれる場合には、入力情報決定部55は、フリー領域42により設定するパラメータの項目と位置との関係を示す位置対応情報を取得し、その位置対応情報と、取得された第1デバイスの位置(x座標およびy座標)とに基づいて、入力値を取得する。そして、アプリケーション実行部58は、そのパラメータの項目に、取得された入力値を設定する。 The description so far has omitted the processing for the free area 42. When the first position is included in the free area 42 in step S107, the input information determination unit 55 acquires position correspondence information indicating the relationship between positions and the parameter items set via the free area 42, and acquires the input value based on that position correspondence information and the acquired position (x and y coordinates) of the first device. Then, the application execution unit 58 sets the acquired input value in that parameter item.
 図15は、蛇口領域48についての処理の一例を示すフロー図である。図15は、ステップS107において第1の位置が蛇口領域48に含まれる場合に、入力情報を取得し音声を出力する処理を示す。この処理では、第1デバイスの方向の変化量に応じて処理が実行される。 FIG. 15 is a flow chart showing an example of processing for the faucet region 48. FIG. 15 shows a process of acquiring input information and outputting voice when the first position is included in the faucet region 48 in step S107. In this process, the process is executed according to the amount of change in the direction of the first device.
 はじめに、入力情報決定部55は、前回に取得された第1の位置が、蛇口領域48に含まれていたか判定する(ステップS701)。前回の第1の位置が蛇口領域48に含まれない場合には(ステップS701のN)、入力情報決定部55は現在の第1デバイスの方向を初期方向として記憶する(ステップS702)。 First, the input information determination unit 55 determines whether the first position acquired last time is included in the faucet area 48 (step S701). When the previous first position is not included in the faucet area 48 (N in step S701), the input information determination unit 55 stores the current direction of the first device as the initial direction (step S702).
 If the previous first position was included in the faucet area 48 (Y in step S701), the input information determination unit 55 acquires an instruction to turn the running-water mode on or off based on the difference between the initial orientation and the current orientation of the first device, that is, the amount of change in orientation (step S703). More specifically, the input information determination unit 55 acquires an instruction to turn the running-water mode on when the amount of change in orientation is at or above a threshold, and an instruction to turn it off when the amount is below the threshold.
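The on/off decision in step S703 reduces to comparing an angular difference against a threshold. A minimal sketch, assuming orientations are reported in degrees; the threshold value and function names are assumptions, not taken from the patent:

```python
def running_water_instruction(initial_deg, current_deg, threshold_deg=45.0):
    """Return 'on' if the device has rotated at least threshold_deg from
    its initial orientation, otherwise 'off' (cf. step S703)."""
    # Wrap the difference into [-180, 180) before taking its magnitude,
    # so that e.g. 350° vs 10° counts as 20° of rotation, not 340°.
    delta = abs((current_deg - initial_deg + 180.0) % 360.0 - 180.0)
    return "on" if delta >= threshold_deg else "off"
```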
 When an instruction to turn on the running-water mode is acquired, the application execution unit 58 turns the running-water mode on, and the sound output unit 59 outputs the sound of flowing water (step S704). The application execution unit 58 also determines, according to how long the running-water mode has been on, the pitch of the sound to be output when the glass area 49 is pointed at (step S705).
 The sound output when the glass area 49 is pointed at (the output sound) corresponds to the sound produced when a glass containing water is struck. Since an ordinary glass produces a higher-pitched sound the more water it holds, the longer the running-water mode stays on, the higher the output sound becomes. Placing the position input device 20 on the faucet area 48, where a faucet is printed, and changing its orientation resembles the action of turning a real faucet, so the user can operate it intuitively.
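The duration-to-pitch rule of step S705 could be realized as any monotone mapping. The sketch below assumes a linear rise in frequency with the running-water duration, capped at a maximum; the specific constants are illustrative only and do not appear in the patent:

```python
def glass_pitch_hz(on_duration_s, base_hz=440.0, rise_hz_per_s=60.0, max_hz=1760.0):
    """Pitch of the 'struck glass' sound: the longer the water has been
    running, the fuller the glass and the higher the note (cf. step S705)."""
    return min(base_hz + rise_hz_per_s * on_duration_s, max_hz)
```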
 The set value may also be increased or decreased according to the amount of change in the orientation of the position input device 20. For example, the input information determination unit 55 may determine the change in volume and the increase or decrease in the pitch of the keys (for example, the reference note) of the keyboard area 43 according to the amount of rotation of the input device 20 on the setting areas 41a and 41b, respectively. The application execution unit 58 may then set the volume or the key pitch of the keyboard area 43 based on the determined increase or decrease and the previously set volume or key pitch. Changing a set value such as the volume according to the amount of rotation of the input device 20 enables intuitive operation. The same effect is also obtained with the process shown in FIG. 11.
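The adjustment described here is relative: rotation changes a setting from its previous value rather than setting it absolutely. One way this could be sketched, with the step size and clamping range as assumptions for illustration:

```python
def adjust_setting(previous, rotation_deg, step_per_deg=0.5, lo=0.0, hi=100.0):
    """Increase or decrease a setting (e.g. volume, or a key's pitch offset)
    in proportion to the device's rotation, clamped to a valid range."""
    return max(lo, min(hi, previous + rotation_deg * step_per_deg))
```

Clockwise rotation (positive degrees) raises the setting and counterclockwise rotation lowers it, which matches the familiar feel of a physical knob.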
[Second Embodiment]
 The second embodiment concerns a configuration in which information is input using two position input devices 20; the description below focuses mainly on the differences from the first embodiment. In the example of FIG. 14 of the first embodiment, characters were input according to the positions where the two position input devices 20 were placed; in the second embodiment, information other than characters, for example an instruction, is input.
 FIG. 16 is a diagram showing an example of a sheet 31d according to the second embodiment. A 9 × 9 grid of squares 80 is printed on the sheet 31d shown in FIG. 16. One of its four corners is a first determination area 81, and another is a second determination area 82. In the example of FIG. 16, the application execution unit 58 executes the processing of a game in which the position input device 20 travels over the sheet 31d. The application execution unit 58 starts the game from one of a plurality of initial states stored in advance in the storage unit 12, and changes the values of internal variables and the position of the position input device 20 in response to the user's operations. The initial position of the position input device 20 in each initial state is also predetermined, and at the start of the game the position input device 20 drives itself toward that initial position.
 FIG. 17 is a diagram showing an example of the process of acquiring input information and controlling the position input device 20, corresponding to steps S107 and S108 of FIG. 8. When the first position is included in the first determination area 81 (Y in step S901) and the second position is included in the second determination area 82 (Y in step S902), the input information determination unit 55 acquires a restart instruction as the input value (step S903). When the first position is not included in the first determination area 81 (N in step S901), or the second position is not included in the second determination area 82 (N in step S902), the input information determination unit 55 acquires an input value according to the areas containing the first and second positions (step S904). A restart instruction may also be acquired as the input value when the first position is included in the second determination area 82 and the second position is included in the first determination area 81.
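The branch structure of steps S901 through S904, including the optional swapped-device case mentioned at the end, can be sketched as below. Areas are modeled as axis-aligned rectangles, and all names are hypothetical; the patent does not prescribe a representation.

```python
def detect_restart(pos1, pos2, area1, area2, allow_swapped=True):
    """Return True when the two device positions occupy the two
    determination areas (steps S901-S903); False otherwise, in which
    case per-area input handling would apply (step S904)."""
    def inside(pos, area):
        x, y = pos
        left, top, right, bottom = area
        return left <= x <= right and top <= y <= bottom

    if inside(pos1, area1) and inside(pos2, area2):
        return True
    # Optional variant: also accept the devices in swapped areas.
    return allow_swapped and inside(pos1, area2) and inside(pos2, area1)
```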
 When a restart instruction is acquired as the input value (Y in step S906), the application execution unit 58 initializes the variables of the currently running game and controls the position input device 20 so that it moves to the initial position (step S907). When no restart instruction is acquired as the input value (N in step S906), the application execution unit 58 continues the game processing (step S908).
 In the second embodiment, information can be input by placing the two position input devices 20 at special positions. This allows the area on the sheet 31 to be used more effectively.
 Although the position input device 20 has been described as self-propelled, it need not be capable of self-propulsion. The present invention may also be used not only for sound output and game restarting, but for general-purpose information input.

Claims (10)

  1.  An input system comprising:
     a sheet on which a position-encoded pattern is printed;
     an input device including a camera that captures the pattern;
     acquisition means for acquiring a position and an orientation of the input device based on the pattern included in an image captured by the camera; and
     execution means for executing processing based on the position and orientation acquired by the acquisition means.
  2.  The input system according to claim 1, wherein
     the execution means outputs a sound, and determines, based on the acquired position and orientation, a parameter of the sound to be output in response to a user's operation.
  3.  The input system according to claim 2, wherein
     the sheet includes a setting area, and
     the execution means determines, when the position of the input device is included in the setting area, one of the type, pitch, and volume of the sound to be output in response to the user's operation, based on the acquired orientation.
  4.  The input system according to claim 3, wherein
     the execution means determines, when the position of the input device is included in the setting area, one of the type, pitch, and volume of the sound to be output in response to the user's operation, based on the acquired orientation, and outputs the determined sound.
  5.  The input system according to claim 3, wherein
     the sheet further includes a plurality of performance areas,
     the execution means outputs, when the position of the input device is in one of the plurality of performance areas, a sound whose pitch corresponds to the performance area containing the position of the input device, and
     the execution means changes, when the position of the input device is included in the setting area, one of the type, pitch, volume, and musical scale of the output sound, based on the acquired orientation.
  6.  The input system according to any one of claims 2 to 5, wherein
     the execution means determines a parameter of the output sound according to an amount of change in the acquired orientation while the acquired position remains within a predetermined area.
  7.  The input system according to any one of claims 2 to 5, wherein
     the execution means determines, when the acquired position is within a predetermined range, a parameter of the output sound based on the orientation of the input device relative to the sheet, as acquired by the input device.
  8.  The input system according to claim 1, wherein
     the execution means selects information based on the acquired position and orientation, and executes processing based on the selected information.
  9.  An input method comprising:
     acquiring a position and an orientation of an input device, the position and orientation being recognized from an image, captured by the input device, of a sheet on which a position-encoded pattern is printed; and
     executing processing based on the acquired position and orientation.
  10.  A program for causing a computer to function as:
     acquisition means for acquiring a position and an orientation of an input device, the position and orientation being recognized from an image, captured by the input device, of a sheet on which a position-encoded pattern is printed; and
     execution means for executing processing based on the acquired position and orientation.

PCT/JP2020/036076 2019-11-08 2020-09-24 Input system, input method, and program WO2021090592A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021554835A JP7340031B2 (en) 2019-11-08 2020-09-24 Input systems, input methods and programs
CN202080075076.4A CN114600069B (en) 2019-11-08 2020-09-24 Input system, input method, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-203467 2019-11-08
JP2019203467 2019-11-08

Publications (1)

Publication Number Publication Date
WO2021090592A1 true WO2021090592A1 (en) 2021-05-14

Family

ID=75849898

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/036076 WO2021090592A1 (en) 2019-11-08 2020-09-24 Input system, input method, and program

Country Status (3)

Country Link
JP (1) JP7340031B2 (en)
CN (1) CN114600069B (en)
WO (1) WO2021090592A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5598788A (en) * 1978-12-28 1980-07-28 Gaber Howard S Method and device for generating acoustic output from musical toy
US20020102910A1 (en) * 2001-01-29 2002-08-01 Donahue Kevin Gerard Toy vehicle and method of controlling a toy vehicle from a printed track
JP2010240345A (en) * 2009-04-02 2010-10-28 Koto:Kk Moving body toy
WO2018025467A1 (en) * 2016-08-04 2018-02-08 ソニー株式会社 Information processing device, information processing method, and information medium
JP3215614U (en) * 2017-12-20 2018-04-05 安譜國際股▲分▼有限公司 Educational toys
US20190164447A1 (en) * 2017-11-30 2019-05-30 Beijing Xiaomi Mobile Software Co., Ltd. Story machine, control method and control device therefor, storage medium and story machine player system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5705823B2 (en) * 2012-12-28 2015-04-22 株式会社東芝 Image forming apparatus and method for generating confirmation sound in image forming apparatus
JP6900705B2 (en) * 2017-02-28 2021-07-07 コニカミノルタ株式会社 Information processing systems, information processing devices, and programs


Also Published As

Publication number Publication date
CN114600069B (en) 2024-04-30
JPWO2021090592A1 (en) 2021-05-14
CN114600069A (en) 2022-06-07
JP7340031B2 (en) 2023-09-06

Similar Documents

Publication Publication Date Title
US8085242B2 (en) Input control device and image forming apparatus
EP1806643B1 (en) Method for entering commands and/or characters for a portable communication device equipped with a tilt sensor
US8172681B2 (en) Storage medium having stored therein game program and game device
JP4253029B2 (en) Image processing method
JP4309871B2 (en) Information processing apparatus, method, and program
JP2009116583A (en) Input controller and input control method
US20110032202A1 (en) Portable computer with touch panel display
EP2218485A1 (en) Image generation device, image generation program, image generation program recording medium, and image generation method
EP1850208A1 (en) Data input device, data input method, data input program and recording medium wherein such data input program is recorded
US8292710B2 (en) Game program and game apparatus
JP2010224764A (en) Portable game machine with touch panel display
US8376851B2 (en) Storage medium having game program stored therein and game apparatus
EP1532577A1 (en) Position-coding pattern
JP6973025B2 (en) Display devices, image processing devices and programs
JP2003516576A (en) Portable communication device and communication method thereof
JPH10283115A (en) Display input device
WO2021090592A1 (en) Input system, input method, and program
JP2021077113A (en) Input system, input method and program
JP2004199260A (en) Graphics preparing device and graphics preparing method
JP2021196800A (en) Program and method for controlling computer
JP5232890B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
EP2362639A1 (en) Information processing apparatus and control method therefor
JP2007159800A (en) Pointer-based object selection system
JP2010011891A (en) Game control program and game apparatus
KR100681550B1 (en) Mobile communication terminal and method for playing music using action recognizing thereof

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20885521; Country of ref document: EP; Kind code of ref document: A1)
ENP  Entry into the national phase (Ref document number: 2021554835; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 20885521; Country of ref document: EP; Kind code of ref document: A1)