KR20100051746A - Audio processing device, audio processing method, information recording medium, and program - Google Patents

Audio processing device, audio processing method, information recording medium, and program

Info

Publication number
KR20100051746A
Authority
KR
South Korea
Prior art keywords
detected
predetermined
voice
contact position
satisfied
Prior art date
Application number
KR1020107007589A
Other languages
Korean (ko)
Other versions
KR101168322B1 (en)
Inventor
Masashi Takehiro
Original Assignee
Konami Digital Entertainment Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konami Digital Entertainment Co., Ltd.
Publication of KR20100051746A
Application granted granted Critical
Publication of KR101168322B1

Links

Images

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/32 - Constructional details
    • G10H1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
    • G10H1/342 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments, for guitar-like instruments with or without strings and with a neck on which switches or string-fret contacts are used to detect the notes being played
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00 - Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/091 - Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
    • G10H2220/096 - Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith, using a touch screen
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00 - Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/135 - Musical aspects of games or videogames; Musical instrument-shaped game input interfaces
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00 - Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155 - User input interfaces for electrophonic musical instruments
    • G10H2220/161 - User input interfaces for electrophonic musical instruments with 2D or x/y surface coordinates sensing
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2230/00 - General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/045 - Special instrument [spint], i.e. mimicking the ergonomy, shape, sound or other characteristic of a specific acoustic musical instrument category
    • G10H2230/075 - Spint stringed, i.e. mimicking stringed instrument features, electrophonic aspects of acoustic stringed musical instruments without keyboard; MIDI-like control therefor
    • G10H2230/135 - Spint guitar, i.e. guitar-like instruments in which the sound is not generated by vibrating strings, e.g. guitar-shaped game interfaces

Abstract

The detection unit 1001 detects whether there is contact with the surface of the touch screen and, when there is, the coordinate value of the contact position. The audio output unit 1002 determines that a valid stroke operation has been performed when a flicking sweep is detected or when the direction of the stroke across the touch screen reverses. When the valid stroke operation is performed at the correct timing stored in the audio processing device 1000, the audio output unit 1002 starts outputting the performance voice.

Description

AUDIO PROCESSING DEVICE, AUDIO PROCESSING METHOD, INFORMATION RECORDING MEDIUM, AND PROGRAM

The present invention relates to an audio processing device, an audio processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while exploiting the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.

Background Art: Conventionally, games simulating guitar performance have been proposed. Such a technique is disclosed, for example, in Patent Document 1 below.

The technique disclosed in Patent Document 1 uses a simulated guitar having neck buttons for selecting at least one rhythm sound, in accordance with the flow of a performance song, from among a plurality of rhythm sounds of the song, and a picking blade for instructing the output timing of the selected rhythm sound. Sound output control and evaluation of the rhythm input operation are performed based on the sound selection instruction and its output timing.

On the other hand, portable game machines having a touch screen are now widely used.

Patent Document 1: JP 2001-293246 A

There is therefore strong demand to realize the instrument-performance simulation technique disclosed in Patent Document 1 on a portable game machine while exploiting the characteristics of its touch screen.

SUMMARY OF THE INVENTION: The present invention has been made to solve the above problems, and its object is to provide an audio processing device, an audio processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while exploiting the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.

In order to achieve the above object, an audio processing device according to a first aspect of the present invention includes a detection unit and an audio output unit.

The detection unit detects the contact position when the user touches the surface of the contacted portion, and detects the release when the user lifts off the surface. In a game device in which the audio processing device is realized, the contacted portion is, for example, a touch screen in which a touch sensor is superimposed on a liquid crystal screen; when the user touches the surface of the touch screen, the detection unit detects a coordinate value indicating the contact position. When there is no contact with the surface, that is, in the released state, the detection unit detects that there is no contact. The detection unit performs this detection at predetermined time intervals, for example.

The audio output unit starts outputting a predetermined output voice when a predetermined operation condition is satisfied. Here, the predetermined operation condition is satisfied when:

(a) a release is detected immediately after a contact position is detected, and the speed of change of the contact position immediately before the release was detected is equal to or greater than a predetermined threshold speed; or

(b) the direction of change of successively detected contact positions reverses, within a predetermined error range.

That is, condition (a), in which a release is detected immediately after a contact position is detected and the change of the contact position just before the release is at or above a predetermined threshold speed, corresponds, for example, to the user flicking a finger off the touch screen after touching it. Such an operation is determined to be a valid operation simulating a stroke of the guitar strings in one direction, and the audio output unit starts outputting the output voice.

Condition (b), in which the direction of change of successively detected contact positions reverses within a predetermined error range, corresponds, for example, to the user moving the contact position back and forth while keeping contact with the touch screen. The direction of change need not be exactly opposite; it is sufficient that it is nearly opposite within a predetermined range. For example, when the direction of change of the contact position changes, the angle between the direction of change just before the reversal and the direction just after it need only fall within a predetermined range.

A user operation in which the direction of change of successively detected contact positions reverses within a predetermined error range is determined to be a valid operation simulating a reciprocating stroke, that is, an up stroke following a down stroke (or vice versa), and the audio output unit starts outputting the output voice. Both conditions are sketched in code below.
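As an illustration only, the following Python sketch checks condition (a) as a flick-and-release at or above a threshold speed and condition (b) as a near-reversal of the movement direction. The constants THRESHOLD_SPEED and ANGLE_TOLERANCE and all function names are assumptions; the patent specifies only "a predetermined threshold speed" and "a predetermined error range".

    import math

    THRESHOLD_SPEED = 200.0              # assumed value, in pixels per second
    ANGLE_TOLERANCE = math.radians(30)   # assumed error range around a full reversal

    def flick_release(prev_pos, last_pos, interval, released):
        """Condition (a): release detected right after contact, with the
        contact position changing at or above the threshold speed."""
        if not released:
            return False
        vx = (last_pos[0] - prev_pos[0]) / interval
        vy = (last_pos[1] - prev_pos[1]) / interval
        return math.hypot(vx, vy) >= THRESHOLD_SPEED

    def direction_reversed(prev_dir, new_dir):
        """Condition (b): successive movement directions are opposite to
        within the predetermined error range."""
        a1 = math.atan2(prev_dir[1], prev_dir[0])
        a2 = math.atan2(new_dir[1], new_dir[0])
        diff = abs(a2 - a1) % (2 * math.pi)
        diff = min(diff, 2 * math.pi - diff)          # fold into [0, pi]
        return abs(diff - math.pi) <= ANGLE_TOLERANCE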

According to such an audio processing device, output of the predetermined output voice is started when a release is detected under the above condition, or at the moment the direction of the stroke reverses. The user can therefore enjoy the guitar simulation without worrying about the orientation in which the game device carrying the audio processing device is held or the direction in which strokes are performed.

In addition to cases (a) and (b) above, the predetermined operation condition may also be satisfied when:

(c) the trajectory of successively detected contact positions crosses a predetermined determination line.

Here, the determination line is a line arranged at a predetermined position on the surface of the touch screen. When the trajectory of the contact position crosses the determination line while the user is touching the touch screen, output of the predetermined output voice is started at the moment the line is crossed. The determination line thus corresponds to a guitar string, and introducing it makes it possible to simulate more faithfully how a guitar produces sound. A sketch of the crossing test follows.
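The crossing test can be reduced to a segment-intersection check between the determination line and the segment joining two successively detected contact positions. A minimal sketch, with assumed line coordinates for the example:

    def cross(o, a, b):
        # Z component of the cross product (a - o) x (b - o).
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def crosses_line(p1, p2, line_a, line_b):
        """True when the segment p1-p2 properly crosses segment line_a-line_b."""
        d1, d2 = cross(line_a, line_b, p1), cross(line_a, line_b, p2)
        d3, d4 = cross(p1, p2, line_a), cross(p1, p2, line_b)
        return d1 * d2 < 0 and d3 * d4 < 0

    # Two successive contact positions sweeping across an assumed vertical line:
    assert crosses_line((10, 60), (80, 60), (40, 0), (40, 120))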

The audio processing device may further include an adjusting unit. The adjusting unit adjusts the direction of the determination line so that the angle at which the trajectory of successively detected contact positions intersects the determination line is close to a right angle.

In other words, users differ in how they hold the game device in which the audio processing device is realized and in the direction of the strokes they perform on the touch screen. To absorb these individual differences, the adjusting unit adjusts the direction of the determination line so that it is close to perpendicular to the direction of the user's strokes, as sketched below.
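A minimal sketch of such an adjustment, assuming the line is rotated about its center (the patent requires only that the crossing angle be close to a right angle; the names here are assumptions):

    import math

    def adjust_determination_line(center, half_len, stroke_dir):
        """Re-orient the determination line about its center so that it lies
        perpendicular to the observed (non-zero) stroke direction."""
        sx, sy = stroke_dir
        norm = math.hypot(sx, sy)
        px, py = -sy / norm, sx / norm   # unit vector perpendicular to the stroke
        a = (center[0] - px * half_len, center[1] - py * half_len)
        b = (center[0] + px * half_len, center[1] + py * half_len)
        return a, b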

The audio output unit may insert a predetermined delay time before starting to output the voice, returning the delay to its default value when a release is detected and reducing it when the predetermined operation condition is satisfied repeatedly.

In other words, when the operation condition is satisfied, the audio output unit may start outputting the voice after a predetermined delay time has elapsed. When performing in a large venue, the sound reaches listeners far from the player only after a certain delay. By inserting a delay time in this way, the effect of playing in a large venue can be reproduced even on a portable game machine.

In general, the delay is less noticeable when strokes are performed continuously than when a single stroke is performed. Therefore, to emphasize the delay of the first stroke, the delay time may be returned to its default value whenever a release is detected, and shortened relative to the previous occurrence each time the operation condition is satisfied in succession.
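A minimal sketch of this delay behaviour; DEFAULT_DELAY and DECAY are assumed values, since the patent specifies neither the default delay nor how much it is shortened:

    DEFAULT_DELAY = 0.12   # seconds; assumed default delay
    DECAY = 0.5            # assumed shortening factor per consecutive stroke

    class DelayController:
        def __init__(self):
            self.delay = DEFAULT_DELAY

        def on_release(self):
            # A release resets the delay, emphasizing the first stroke's delay.
            self.delay = DEFAULT_DELAY

        def on_valid_stroke(self):
            # Output starts after the current delay; consecutive strokes
            # then get a shorter delay than the previous one.
            current = self.delay
            self.delay *= DECAY
            return current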

The audio output unit may determine the volume of the output voice to be started based on:

(d) the distance from the position at which contact was started to the contact position detected just before the predetermined operation condition is satisfied; or

(e) the distance from the contact position detected when the predetermined operation condition was last satisfied to the contact position detected just before the condition is satisfied next.

In other words, so that a small stroke on the touch screen produces a quiet output voice and a large stroke a loud one, the volume is determined based on the distance from the position where contact started to the contact position detected just before the operation condition is satisfied. Alternatively, when the user performs reciprocating strokes on the touch screen, the volume can be controlled in the same manner based on the distance from the contact position detected when the operation condition was last satisfied to the contact position detected just before it is satisfied next. The distance between the two contact positions may be measured as the straight-line distance or as the length of the trajectory.
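For illustration, a sketch of this volume rule computing both distance measures; the normalization constant max_distance and the function name are assumptions:

    import math

    def stroke_volume(points, use_trajectory=False, max_distance=300.0, max_volume=1.0):
        """Map the size of the stroke to an output volume. `points` is the
        trajectory accumulated since contact started (case (d)) or since the
        operation condition was last satisfied (case (e))."""
        straight = math.dist(points[0], points[-1])       # straight-line distance
        trajectory = sum(math.dist(a, b)                  # length of the trajectory
                         for a, b in zip(points, points[1:]))
        distance = trajectory if use_trajectory else straight
        return max_volume * min(distance / max_distance, 1.0)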

The audio output unit may further output an accompaniment voice of a predetermined piece of music; performance timings, each specified by the elapsed time since output of the accompaniment started, and the performance voices to be output at those timings are stored in correspondence with the accompaniment voice.

In other words, in a game simulating guitar performance, an accompaniment voice is output, and the user plays along with it. The game stores the correct timings at which the user should play and the performance voices to be output at those times.

When the predetermined operation condition is satisfied and the time at which it is satisfied coincides with one of the performance timings corresponding to the accompaniment voice, the audio output unit may start outputting, as the predetermined output voice, the performance voice to be output at that timing.

That is, in a game simulating guitar performance, the user performs stroke operations on the touch screen in time with the accompaniment; a stroke that satisfies the operation condition (a valid stroke operation) starts the audio output. When the user performs a valid stroke at the correct timing (that is, when the timing of the operation coincides with the timing to be played), the correct performance voice is output.

Conversely, when the predetermined operation condition is satisfied but the time at which it is satisfied does not coincide with any of the performance timings corresponding to the accompaniment voice, the audio output unit may start outputting a voice indicating failure as the predetermined output voice.

In other words, when the user performs a stroke operation on the touch screen at an incorrect timing that does not match the accompaniment, the audio output unit outputs a voice indicating failure.

The audio output unit may stop outputting the output voice when the contact positions detected continuously over a predetermined threshold time all fall within a predetermined position range. This condition means that the stroke has stopped, so when it is satisfied the audio output unit stops the output. It is sufficient that contact continues at any position within the predetermined range.
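A sketch of this stop condition with assumed constants: if every position sampled over the threshold time stays within a small radius of the samples' average, the stroke is treated as stopped.

    THRESHOLD_TIME = 0.3   # seconds; assumed threshold time
    POSITION_RANGE = 8.0   # pixels; assumed position range

    def stroke_stopped(samples, interval):
        """`samples` holds recently detected positions, newest last;
        `interval` is the detection period in seconds."""
        n = int(THRESHOLD_TIME / interval)
        if n == 0 or len(samples) < n:
            return False
        recent = samples[-n:]
        cx = sum(p[0] for p in recent) / n
        cy = sum(p[1] for p in recent) / n
        return all((p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= POSITION_RANGE ** 2
                   for p in recent)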

The audio processing device may further include a contact member for the user to hold and bring into contact with the surface, the contact member having the shape of a guitar pick or a shape in which a protrusion is arranged at the tip of the pick shape. That is, the contact member is a so-called touch pen whose grip has the shape of a guitar pick and whose tip may be provided with a protrusion. The pick shape makes the stroke operation easy to perform and gives the user the same sense of realism as playing a guitar, and the protrusion at the tip makes the contact position easier to detect.

An audio processing method according to another aspect of the present invention uses an audio processing device including a detection unit and an audio output unit. In the detection step, the detection unit detects the contact position when the user touches the surface of the contacted portion, and detects the release when the surface is released.

In the audio output step, the audio output unit starts outputting a predetermined output voice when a predetermined operation condition is satisfied. Here, the predetermined operation condition is satisfied when:

(a) a release is detected immediately after a contact position is detected, and the speed of change of the contact position immediately before the release was detected is equal to or greater than a predetermined threshold speed; or

(b) the direction of change of successively detected contact positions reverses, within a predetermined error range.

The audio processing device may further include an adjusting unit, and in addition to cases (a) and (b) above, the predetermined operation condition may also be satisfied when:

(c) the trajectory of successively detected contact positions crosses a predetermined determination line.

In this case, the method may include an adjustment step in which the adjusting unit adjusts the direction of the determination line so that the angle between the trajectory of successively detected contact positions and the determination line is close to a right angle.

A program recorded on an information recording medium according to another aspect of the present invention is configured to cause a computer to function as the above audio processing device. A program recorded on an information recording medium according to another aspect of the present invention is configured to cause a computer to execute the above audio processing method.

Likewise, a program according to another aspect of the present invention is configured to cause a computer to function as the above audio processing device, and a program according to yet another aspect is configured to cause a computer to execute the above audio processing method.

The program of the present invention can be recorded on a computer-readable information recording medium such as a compact disc, flexible disk, hard disk, magneto-optical disc, digital video disc, magnetic tape, or semiconductor memory. The program can be distributed and sold via a computer communication network independently of the computer on which it is executed, and the information recording medium can be distributed and sold independently of the computer.

According to the present invention, it is possible to provide an audio processing device, an audio processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while exploiting the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.

FIG. 1 is a schematic diagram showing the schematic configuration of a typical game device in which an item selection device according to an embodiment is realized;
FIG. 2 is a view showing the appearance of a typical game device in which the item selection device according to an embodiment is realized;
FIG. 3 is a functional block diagram of an item selection device according to an embodiment;
FIG. 4A is a diagram showing a table, a window, and their relationship in an item selection device according to an embodiment;
FIG. 4B illustrates a situation in which the table elements covered by the window are displayed on the touch screen;
FIG. 5 is a flowchart describing the processing operation of an item selection device according to an embodiment;
FIG. 6A is a diagram explaining the direction of window movement with respect to movement of the contact position on the touch screen;
FIG. 6B is a diagram explaining the direction of window movement with respect to movement of the contact position on the touch screen;
FIG. 6C shows how the displayed area of the table changes with movement of the contact position;
FIG. 6D shows how the displayed area of the table changes with movement of the contact position;
FIG. 7A illustrates a situation in which the window is rearranged when its position falls outside the area of the table;
FIG. 7B illustrates a situation in which the elements of the table are wrapped around and displayed when the window reaches the end of the table;
FIG. 7C illustrates a situation in which the elements of the table are wrapped around and displayed when the window reaches the end of the table;
FIG. 8A illustrates an example of a pick-shaped touch pen;
FIG. 8B is a view showing a user holding the pick-shaped touch pen;
FIG. 8C is a diagram showing a user simulating guitar performance on a touch screen using the touch pen shown in FIG. 8A;
FIG. 9 is a functional block diagram of an audio processing device according to an embodiment;
FIG. 10 is a flowchart explaining the processing operation of the audio processing device according to the embodiment;
FIG. 11 shows an example of the trajectory of a contact position;
FIG. 12 is a functional block diagram of an audio processing device according to another embodiment;
FIG. 13 is a flowchart explaining the processing operation of an audio processing device according to another embodiment;
FIG. 14A shows an example of the trajectory of a contact position;
FIG. 14B is a diagram showing adjustment of the position and direction of the determination line;
FIG. 14C is a diagram showing a method for obtaining the direction of a stroke when adjusting the determination line.

As described below, the game device according to the present embodiment functions broadly as both an item selection device and an audio processing device. That is, the user first uses the game device as the item selection device to select a piece of music from a music list. Then, the game device as the audio processing device simulates guitar performance using the selected piece of music.

FIG. 1 is a schematic diagram showing the schematic configuration of a typical portable game device in which the item selection device and audio processing device according to an embodiment of the present invention are realized. FIG. 2 shows an external view of the portable game device. The following description refers to these drawings.

The game device 100 includes a CPU (Central Processing Unit) 101, ROM (Read Only Memory) 102, RAM (Random Access Memory) 103, an interface 104, an input unit 105, a memory cassette 106, an image processing unit 107, a touch screen 108, an NIC (Network Interface Card) 109, an audio processing unit 110, a microphone 111, and a speaker 112.

When a memory cassette 106 (described in detail later) storing a game program and data is mounted in a slot (not shown) connected to the interface 104 and the power of the game device 100 is turned on, the program is executed, realizing the item selection device and audio processing device of this embodiment.

The CPU 101 controls the operation of the entire game device 100 and is connected to each component to exchange control signals and data. The CPU 101 has a clock (not shown), and the peripheral components operate in synchronization with the signal generated by this clock.

The ROM 102 stores an IPL (Initial Program Loader) that is executed immediately after power is turned on. By the CPU 101 executing this IPL, a program recorded in the memory cassette 106 or the like is read into the RAM 103, and execution by the CPU 101 is started.

The ROM 102 also stores the operating system program and various data required for controlling the operation of the entire game device 100.

The RAM 103 is for temporarily storing data and programs, and holds programs and data read out from the memory cassette 106 and the like, data necessary for the progress of games, and the like.

The memory cassette 106, detachably connected via the interface 104, includes a read-only ROM area storing the program realizing the game together with its accompanying image and audio data, and an SRAM area for saving data such as play results. The CPU 101 reads necessary programs and data from the memory cassette 106 and temporarily stores the read data in the RAM 103 or the like.

The input unit 105 consists of the control buttons and the like shown in FIG. 2, and receives instruction input from the user.

The image processing unit 107 processes data read from the memory cassette 106 using an image computing processor (not shown) included in the CPU 101 or the image processing unit 107, and writes the result to a frame memory (not shown) provided in the image processing unit 107. The image information written to the frame memory is converted into a video signal at a predetermined synchronization timing and output to the touch-sensor display (touch screen 108). This makes it possible to display various images.

The image computing processor can execute superimposition of two-dimensional images, transparency operations such as alpha blending, and various saturation operations at high speed.

It is also possible to perform, at high speed, an operation in which polygon information arranged in a three-dimensional virtual space with various texture information attached is rendered by the Z-buffer method to obtain a rendered image of the polygons as viewed from a predetermined viewpoint position.

In addition, through the cooperative operation of the CPU 101 and the image computing processor, it is possible to draw a character string into the frame memory as a two-dimensional image, or onto each polygon surface, in accordance with font information defining the shapes of the characters. The font information is recorded in the ROM 102, but dedicated font information recorded in the memory cassette 106 may also be used.

The touch screen 108 is a liquid crystal panel with a touch sensor superimposed on it. It detects positional information corresponding to the position pressed by the user with a finger or an input pen and passes it to the CPU 101.

In accordance with instructions input by the user via the input unit 105 or the touch screen 108, data temporarily stored in the RAM 103 can be saved to the memory cassette 106 as appropriate.

The NIC 109 connects the game device 100 to a computer communication network (not shown) such as the Internet. It comprises, for example, a device conforming to a standard such as IEEE 802.11 for wireless connection to a local area network (LAN); a device based on the 10BASE-T/100BASE-T standard for wired LAN connection; an analog modem, ISDN (Integrated Services Digital Network) modem, or ADSL (Asymmetric Digital Subscriber Line) modem for connecting to the Internet over a telephone line; or a cable modem for connecting to the Internet over a cable television line; together with an interface (not shown) mediating between these and the CPU 101.

The current date and time can also be obtained by connecting to an SNTP server on the Internet via the NIC 109 and acquiring the information from it. Server devices for various network games may also be configured to provide a function similar to that of an SNTP server.

The audio processing unit 110 converts audio data read from the memory cassette 106 into an analog audio signal and outputs it from the speaker 112 connected to the audio processing unit 110. Under the control of the CPU 101, it also generates sound effects and music data to be produced during the progress of the game and outputs the corresponding sound from the speaker 112.

When the audio data recorded in the memory cassette 106 is MIDI data, the audio processing unit 110 converts it into PCM data by referring to its own sound source data. Compressed audio data in a format such as ADPCM (Adaptive Differential Pulse Code Modulation) or Ogg Vorbis is likewise decompressed into PCM data. The PCM data is D/A (Digital/Analog) converted at timings corresponding to its sampling frequency and output to the speaker 112 or the like, enabling audio output.

In addition, the audio processing unit 110 performs A / D conversion on the analog signal input from the microphone 111 at an appropriate sampling frequency to generate a digital signal in the PCM format.

Alternatively, the game device 100 may be configured with a DVD-ROM drive that reads programs and data from a DVD-ROM instead of the memory cassette 106, with the DVD-ROM serving the same function as the memory cassette 106. The interface 104 may also be configured to read data from external memory media other than the memory cassette 106.

Alternatively, the game device 100 may be configured to provide the functions of the ROM 102, the RAM 103, the memory cassette 106, and the like using a large-capacity external storage device such as a hard disk.

The item selection device and the audio processing device according to the present embodiment are realized on a portable game device, but can also be implemented on a general computer. A general computer, like the game device 100, has a CPU, RAM, ROM, NIC, and so on; it has an image processing unit with simpler functions than the game device 100, has a hard disk as an external storage device, and can also use flexible disks, magneto-optical disks, magnetic tapes, and the like. A keyboard, mouse, or similar device is used as the input device instead of the input unit. When the program is installed and executed, the computer functions as the item selection device and the audio processing device.

In the following, the item selection device is described first, followed by the audio processing device. Unless otherwise noted, both are explained in terms of the game device 100 shown in FIG. 1. Their elements can be substituted with elements of a general computer as needed, and such embodiments are also included in the scope of the present invention.

(Item selector)

FIG. 3 is a block diagram showing the schematic configuration of the item selection device 200 according to the present embodiment. As shown in FIG. 3, the item selection device 200 includes a storage unit 201, a display unit 202, a detection unit 203, an item output unit 204, and a moving unit 205. As described later, the item selection device 200 may also include a contact member 206, as shown in FIG. 8. Each component of the item selection device 200 is described below with reference to these drawings.

The storage unit 201 stores a table whose elements are the items to be selected. The table is configured as a two-dimensional array having at least one row and one column. For example, when a piece of music is to be selected and played using the game device 100, the table may be configured with the pieces of music as elements. A one-dimensional list of N elements may be divided into groups of A elements each to form a two-dimensional table of A rows and N/A columns (or N/A rows and A columns). The memory cassette 106, the RAM 103, and the like cooperate to function as the storage unit 201.

The display unit 202 acquires the configuration information of the table from the storage unit 201 and generates image data of the table (300 in FIG. 4A). In the drawing the table has five rows and four columns of elements, but the size of the table is not limited to this. In this embodiment, to handle the case where the table is larger than the touch screen plane, a window (310) of a predetermined size covering part of the table's image data is prepared. The size and position of this window are stored in the storage unit 201, and the position of the window is updated in response to the user's operation instructions. The display unit 202 acquires the window position stored in the storage unit 201 and displays the table elements included in the area covered by the window, as shown for example in FIG. 4B.

Note that in the list of items displayed in FIG. 4B the boundaries of the elements do not coincide with the boundary of the display area, but they may be adjusted so as to coincide.

Since the area covered by the window is the area displayed by the display unit 202, "window" and "displayed area" are used interchangeably below unless otherwise noted. The CPU 101, the RAM 103, the image processing unit 107, the touch screen 108, and the like cooperate to function as the display unit 202.

The position of the window is expressed, for example, as the coordinate value of the window's origin O' relative to the origin O of the table (e.g., the upper left of the table), that is, the number of pixels from O to O' in the X and Y directions. If the size of the displayed table in the X direction is W, the size w in the X direction of each cell in which an element is displayed (301 in FIG. 4A) is W/N (N: number of columns); if the size of the table in the Y direction is L, the size l of a cell in the Y direction is L/M (M: number of rows).
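For illustration, the cell geometry above maps a point in the table image to a row and column index as follows (a sketch; the function name is an assumption):

    def cell_of(point, W, L, N, M):
        """Return the (row, column) of the cell containing `point` for a
        table of M rows and N columns displayed at size W x L."""
        w, l = W / N, L / M        # cell size: w = W/N, l = L/M as above
        return int(point[1] // l), int(point[0] // w)

    assert cell_of((90, 30), W=400, L=500, N=4, M=5) == (0, 0)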

The detection unit 203 detects at predetermined time intervals whether there is contact on the touch screen. The position on the touch screen plane touched by the user is expressed as a coordinate value with, for example, the upper left corner of the touch screen as the origin. When the touch screen is not touched, the detection unit 203 detects that there is no contact. The touch screen 108, the CPU 101, and the like cooperate to function as the detection unit 203.

When the item output unit 204 determines, based on the detection result of the detection unit 203, that the user started contact with the touch screen and released it almost without moving the contact position, it outputs the item displayed at the contact position as the selection result. Here, "without moving" means that all positions detected from the start of contact until release are within a predetermined range. The CPU 101 and the like function as the item output unit 204.

When the moving unit 205 determines, from the detection result of the detection unit 203, that the user performed a sweeping (swiping) operation on the touch screen, it obtains the speed of movement of the contact position immediately before the touch screen was released. Based on that speed, it updates the window position stored in the storage unit 201 so that the window moves. As the display unit 202 displays the area of the table covered by the moving window, the table is displayed scrolling.

While the user is touching the touch screen, the moving unit 205 moves the window position so that the item displayed at the contact position when contact started remains displayed at the current contact position. That is, the item touched when contact started is pinned to the contact position. Therefore, while the table is scrolling, touching the touch screen again fixes the content displayed at the contact position there, and scrolling stops. The CPU 101, the RAM 103, the memory cassette 106, and the like cooperate to function as the moving unit 205.

(Action processing)

The processing operation of the item selection device 200 configured as above is described with reference to FIG. 5.

When the power of the item selection device 200 is turned on, the CPU 101 executes the IPL, whereby initialization such as reading the program recorded in the memory cassette 106 into the RAM 103 is performed. When item selection processing starts as the program progresses, the display unit 202 first displays a selection screen showing part of the items to be selected as a list (step S400). In this embodiment, the items to be selected are pieces of information (such as song titles) specifying the piece of music on which the guitar simulation is performed.

At this time, the display unit 202 acquires the configuration information of the table from the storage unit 201 and generates the image data of the table as described above. Since the window is positioned at the origin of the table in the initial state, the display unit 202 displays the area covered by the window, extending the window's width in the X direction and its length in the Y direction from the origin of the table.

Subsequently, whenever the detection unit 203 detects a coordinate value, the CPU 101 accumulates it in the storage unit 201 (step S401). The accumulated coordinate values are later used to calculate the speed of change of the contact position.

Subsequently, the CPU 101 determines whether the state detected by the detection unit 203 has changed (step S402). That is, when a coordinate value was detected last time, the state is determined to have changed if the released state (that is, the state in which the touch screen is not touched) is detected this time, or if a coordinate value different from the previous one is detected. Conversely, when the released state was detected last time, the state is determined to have changed if any coordinate value is detected this time. While the state has not changed (step S402: NO), the CPU 101 returns to step S402 and waits for the state to change.

When the CPU 101 determines that the detected state has changed (step S402: YES), the CPU 101 then determines whether the detected state is a released state or a coordinate value (step S403).

When what is detected this time is a coordinate value (step S403: NO), the CPU 101 next determines whether the state detected last time was the released state or another coordinate value (step S410). If the state detected last time was the released state, it is determined that the user has started contact (step S410: YES), and the CPU 101 temporarily stores the coordinate value on the touch screen at which contact started and the position of the window at that time (step S411). The process then returns to step S400.

On the other hand, if what is detected this time is a coordinate value (step S403: NO) and the state detected last time was not the released state (step S410: NO), a coordinate value different from the previously detected one has been detected; that is, the contact position is moving while contact is maintained. In this case, the moving unit 205 moves the window so that the element displayed at the previously touched position is dragged to, and displayed at, the currently touched position (step S412).

That is, as shown in FIG. 6A, when the coordinate value detected last time was (p1, q1) and the coordinate value detected this time is (p, q), the contact position has moved by (p - p1) in the X direction and (q - q1) in the Y direction. As shown in FIG. 6B, the moving unit 205 therefore moves the position (x1, y1) of the window 310 by the same amount in the opposite direction. In FIG. 6B the moved window is indicated by 310', and its position is (x, y) = (x1 - (p - p1), y1 - (q - q1)).

For example, in FIG. 6C the upper left portion of the element GG is touched. When the contact position is moved in the direction of the arrow shown in FIG. 6C while contact is maintained, the window position is moved by the same amount in the direction opposite to the change of the contact position. As a result, as shown in FIG. 6D, the upper left portion of the element GG is still displayed at the contact position, as in FIG. 6C.

In addition, when the window position is changed by the drag operation and the window area goes outside the table area, the window position (x, y) is rearranged so that it always represents coordinates within the table area. For example, if the position of the window reaches the end of the table, the window position is moved to the opposite side of the table. That is, when the table is displayed in the area bounded by (0, 0), (W, 0), (0, L), and (W, L), the boundary of the table 300 indicated by X = W is treated as the boundary line X = 0, and the boundary line indicated by Y = L is treated as the boundary line Y = 0.

Therefore, in the coordinate value indicating the window position, any excess beyond W in the X direction is wrapped around from 0, and any amount below 0 in the X direction is subtracted from W; likewise, any excess beyond L in the Y direction is wrapped around from 0, and any amount below 0 in the Y direction is subtracted from L. In this way, the window position moves cyclically.

For example, FIG. 7A shows the situation where the position of the window 310 is at (-cx, -dy) relative to the origin of the table 300 (c and d are arbitrary integers); since (-cx, -dy) is outside the area of the table 300, the window position is rearranged to (W - cx, L - dy) within the table area. The rearranged window is shown as 310'.

The process then returns to step S400, and when the display unit 202 displays the area of the table covered by the current window, the element displayed at the position the user touched appears to be dragged to the destination of the contact movement. The size of the window is stored in advance in the storage unit 201.

However, when part of the window area falls outside the table as the window position moves in step S412, the display unit 202 displays, for the part of the window extending beyond the table, the table wrapped around so that the first and last elements of each row and column are adjacent.

This is computed as follows, for example. Let W and L be the sizes of the table 300 in the X and Y directions, and W' and L' the sizes of the window 310 in the X and Y directions. If the coordinate value of the origin O' of the window 310 is (x, y), the area of the table to be displayed is bounded by (x, y), (x + W', y), (x, y + L'), and (x + W', y + L') relative to the origin O of the table 300. If the coordinate value of an arbitrary point in this area is (s, t), computing (s', t') = (s mod W, t mod L) yields a coordinate value within the table area even when the window area extends beyond the table area.
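In Python, the mapping (s', t') = (s mod W, t mod L) is direct, since the % operator already wraps negative values in the manner described above (effectively subtracting from W or L):

    def wrap(point, W, L):
        s, t = point
        return s % W, t % L    # (s', t') = (s mod W, t mod L)

    # A point beyond the right edge and above the top edge wraps back inside:
    assert wrap((450, -20), W=400, L=500) == (50, 480)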

Thus, for example, as shown in FIG. 7B, when the window 310 is located in the table 300, the elements of the displayed table are as shown in FIG. 7C.

On the other hand, if what is detected this time is the released state (step S403: YES), the item output unit 204 determines whether the coordinate values accumulated in the storage unit 201 in step S401 are all within a predetermined range (step S404).

For example, when the accumulated detected positions are all contained in an area of a predetermined radius, it is determined that the contact position has not moved (step S404: YES), and the item output unit 204 outputs, as the selection result, the item of the table displayed at, for example, the average coordinate value of the accumulated detected positions (step S405).

When the selection result is obtained, the item selection processing is finished, and the CPU 101 performs a predetermined processing based on the obtained selection result.

The accumulated coordinate values may be discarded after step S405 (step S420).

On the other hand, when the accumulated detected positions are not all within the predetermined range (step S404: NO), it is determined that the contact position was moving, and the CPU 101 calculates the movement speed of the contact position just before the contact was released. It then determines whether the calculated movement speed is equal to or greater than a predetermined threshold speed (step S406).

For example, let (p1, q1) be the coordinates detected immediately before the contact was released and (p2, q2) the coordinates detected immediately before that (these are obtained by referring to the latest and the preceding coordinate values accumulated in the storage unit 201), and let T1 seconds be the detection interval of the detection unit 203. Then the velocity vector representing the change of the contact position just before release is given by:

((p1 - p2) / T1, (q1 - q2) / T1).

If the larger of the x component and the y component is equal to or greater than the predetermined threshold speed (step S406: YES), it is determined that the user performed a sweeping operation on the touch screen, and the moving unit 205 moves the window position in the direction of the larger component at a speed corresponding to the user's sweeping speed (step S407).

That is, when the x component is larger than the y component, the direction of the movement speed just before release is close to the x direction, so the window position is moved at a speed of (p1 - p2)/T1 in the row direction (left-right). Conversely, when the y component is larger than the x component, the direction of movement is close to the y direction, so the window position is moved at a speed of (q1 - q2)/T1 in the column direction (up-down). The calculated movement speeds of the contact position in the x and y directions may be multiplied by a predetermined coefficient to give the window movement speed. If the magnitudes of the x and y components are equal, the window is moved in a predetermined one of the row or column directions at the speed of that component.
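A sketch of steps S406 and S407, reducing the release velocity to its dominant axis; COEFF is the optional multiplier mentioned above, with an assumed value:

    COEFF = 1.0   # assumed multiplier applied to the sweep speed

    def sweep_direction_and_speed(p1, q1, p2, q2, T1):
        """Velocity just before release, reduced to the dominant axis:
        scroll the window in the row direction at vx, or in the column
        direction at vy, whichever component is larger in magnitude."""
        vx = (p1 - p2) / T1
        vy = (q1 - q2) / T1
        if abs(vx) >= abs(vy):
            return 'row', COEFF * vx
        return 'column', COEFF * vy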

Subsequently, the display unit 202 performs the same process as in step S400 and displays the area of the table, whose elements are the items to be selected, covered by the window (step S408). The CPU 101 then determines whether contact has been detected (step S409); if contact is detected (step S409: YES), the process proceeds to step S411, treating this as the start of contact. While no contact is detected (step S409: NO), the process returns to step S407. As a result, until the next contact is detected, the display unit 202 scrolls the table in the row or column direction at a speed based on the contact movement speed just before the sweep.

As described above for the drag operation, when the window reaches the boundary of the table image, the moving unit 205 moves the window position cyclically.

On the other hand, when the CPU 101 determines that the movement speed of the contact position is slower than the predetermined threshold speed (step S406: NO), the process returns to step S400. This completes the item selection processing.

(Voice Processing Equipment)

Next, the audio processing device 1000, which realizes a simulation of guitar performance for the piece of music selected as described above, is described.

FIG. 9 is a schematic diagram showing the schematic configuration of the audio processing device 1000 according to the present embodiment. As shown in FIG. 9, the audio processing device 1000 includes a detection unit 1001 and an audio output unit 1002. As described later, the audio processing device 1000 may also include the contact member 206 (see FIG. 8). Each component of the audio processing device 1000 according to the present embodiment is described below with reference to the drawings.

The detection unit 1001, like the detection unit 203 of the item selection device, detects at predetermined time intervals whether there is contact on the touch screen. The position on the touch screen plane touched by the user is expressed as a coordinate value with, for example, the upper left corner of the touch screen as the origin. When the touch screen is not touched, it detects that there is no contact. The touch screen 108, the CPU 101, and the like cooperate to function as the detection unit 1001.

Based on the detection results, the audio output unit 1002 determines whether the operation performed by the user on the touch screen satisfies a predetermined operation condition, and controls the starting and stopping of the output voice accordingly. The audio processing unit 110 and the like function as the audio output unit 1002.

(Action processing)

The operation processing of the audio processing device 1000 according to the present embodiment, configured as above, is described below.

The memory cassette 106 stores accompaniment voices, performance voices, the timings at which the performance voices are to be output, and the like, in correspondence with attribute information of the pieces of music, such as their titles.

When a song is selected by the user and a game start instruction is given by pressing a predetermined control button or the like, the CPU 101 reads the accompaniment voice corresponding to the song from the memory cassette 106 into the RAM 103 and outputs it to the speaker 112 or the like via the audio processing unit 110. The accompaniment voice includes a guide performance voice, like the guide melody in karaoke, which indicates to the user the timings at which the performance voice should be output. The user operates the touch screen in time with the guide performance voice so that the predetermined operation condition is satisfied (that is, so that the audio processing device 1000 determines that a valid stroke operation was performed).

Alternatively, the timing for outputting the performance voice may be conveyed to the user visually. For example, a marker that advances in a predetermined direction at a fixed rate over time is displayed, together with a timing marker indicating, along the direction of the marker's advance, the timing at which the performance voice should be output. The user operates the touch screen so that the predetermined operation condition is satisfied when the moving marker reaches the timing marker. The timing for outputting each performance voice is expressed, for example, as a relative time from the start of the accompaniment voice (taken as time 0) and is stored in correspondence with the respective performance voice.

Hereinafter, with reference to FIG. 10, the flow of processing in which the audio processing device 1000 according to this embodiment outputs or stops sound in response to the user's operation of the touch screen is described.

In step S500, the detection unit 1001 first detects the state of the touch screen 108. That is, if there is contact, the coordinate value indicating the contact position is detected; if there is no contact, the absence of contact is detected. If what the detection unit 1001 detected the previous time was a coordinate value, the CPU 101 stores it in an area prepared in the RAM 103 or the like for accumulating coordinate values. In the initial state there is no previously detected state, so no coordinate value is accumulated and the flow proceeds to step S501.

As will become clear from the subsequent processing, the accumulated coordinate values always form the trajectory, or part of the trajectory, of the detected coordinates. After the released state is detected or a valid stroke operation is identified, the accumulated coordinate values are discarded, and accumulation of detected coordinates starts again from that point. The stored coordinate values are used later to determine the output volume and the like.

Subsequently, the CPU 101 determines whether the most recent predetermined number of detected coordinates all fall within a predetermined range (for example, whether all of them lie within a predetermined distance of the midpoint or average of the accumulated detected coordinates) (step S501). If all the detected coordinates are within the predetermined range (step S501: YES), the contact position has stopped changing for a certain time. If any fall outside the range (step S501: NO), the accumulated detected coordinates are still changing. If the determination in step S501 is NO, the process proceeds to step S502.

The CPU 101 determines whether the state detected by the detection unit 1001 has changed from the previously detected state (step S502). That is, when a coordinate value was detected last time, the state is determined to have changed if the released state (that is, the state in which the touch screen 108 is not touched) is detected this time, or if a coordinate value different from the previous one is detected. Likewise, when the released state was detected last time, the state is determined to have changed if any coordinate value is detected this time. While the state has not changed (step S502: NO), the CPU 101 returns to step S500 and waits for the state to change. The previously detected state is temporarily stored in, for example, the RAM 103.

If it is determined that the detected state has changed (step S502: YES), the CPU 101 determines whether what was detected this time is the released state or a coordinate value (step S503).

If what was detected this time is not the released state but a coordinate value (step S503: NO), the previously detected state was either the released state or a different coordinate value. First, if the previously detected state was the released state (step S521: YES), the CPU 101 returns the process to step S500. Since step S500 accumulates the previously detected coordinate value whenever it was a coordinate value, the coordinate value identified in step S521 as the position where contact started is stored the next time step S500 is executed.

On the other hand, when the coordinate value detected this time differs from the coordinate value detected last time (step S521: NO), the continuously detected contact position has changed. In this case, the CPU 101 calculates a motion vector (movement direction) of the contact position (step S531). The motion vector is obtained by subtracting the most recently accumulated coordinate value from the coordinate value detected this time. The motion vector is then compared with the motion vector of the contact position obtained in the previous execution of step S531 (whose result is stored temporarily in the RAM 103), and it is determined whether the two are nearly opposite in direction (step S532).

That is, the CPU 101 calculates the inner product of the motion vector obtained this time and the motion vector of the contact position obtained in the previous execution of step S531. If the inner product is smaller than a negative predetermined value, the two vectors are judged to be nearly opposite in direction (step S532: YES). If the inner product is larger than that predetermined value, the two vectors are judged not to be in opposite directions (step S532: NO).

Note that when the continuously detected contact position has changed for the first time since contact started, no "previous movement direction" exists. In this case, the movement direction is treated as not having reversed (that is, the determination in step S532 proceeds as NO).
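A minimal Python sketch of the reversal test of steps S531 and S532 follows; NEG_THRESHOLD is a hypothetical stand-in for the "negative predetermined value" of the text.

NEG_THRESHOLD = -1.0  # hypothetical negative predetermined value

def motion_vector(prev, cur):
    """Step S531 (sketch): vector from the most recently accumulated
    coordinate to the coordinate detected this time."""
    return (cur[0] - prev[0], cur[1] - prev[1])

def direction_reversed(v_prev, v_cur):
    """Step S532 (sketch): the stroke is treated as reversed when the
    inner product of the previous and current motion vectors is
    smaller than the negative predetermined value."""
    if v_prev is None:        # first movement: no previous direction
        return False
    dot = v_prev[0] * v_cur[0] + v_prev[1] * v_cur[1]
    return dot < NEG_THRESHOLD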

As described above, when it is determined that the direction of the stroke has reversed (that is, when step S532 yields YES), this embodiment judges that a valid reciprocating stroke has been performed. When this judgment is obtained at a time within a predetermined range before or after the timing at which a performance voice should be output, the audio output unit 1002 acquires the performance voice stored in correspondence with that timing, specifies its volume by a predetermined method (step S533), and starts output (step S534). On the other hand, even if a valid reciprocating stroke is performed, if it does not fall within the predetermined range before or after the timing at which a performance voice should be output, the audio output unit 1002 outputs a voice indicating failure in step S534. The length of time for which the performance voice continues to be output after output starts is predetermined.

A method of determining the volume of the performance voice output by the audio output unit 1002 when a valid reciprocating stroke operation is performed will now be described with reference to FIG. 11. FIG. 11 shows a case in which contact starts at point A, the stroke folds back at point B and proceeds in the opposite direction, and then folds back again at point C.

In the example of FIG. 11, the stroke is first judged to have changed to the opposite direction at the moment the contact position immediately after point B (that is, point B') is detected. Accordingly, a valid reciprocating stroke operation is judged to have been performed for the first time when point B' is detected. In this case, the volume is determined according to the distance from point A, where contact started, to point B, just before the movement direction changed, and the performance voice is output. Similarly, when point C' is detected, a valid reciprocating stroke operation is judged to have been performed again, the volume is determined according to the distance from point B' to point C, and the performance voice is output.

The distance from point A to point B, or from point B' to point C, may be a straight-line distance or the length of the trajectory. Here, the trajectory length is, for example, the sum of the straight-line distances between the contact coordinates detected successively from point A to point B, or from point B' to point C. In this embodiment, whichever distance is used, the longer the obtained distance, the larger the output volume. For example, the obtained distance may be multiplied by a predetermined constant to obtain the volume, or a table associating distances with volumes may be prepared and the corresponding volume looked up from the calculated distance.
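The two distance measures and the distance-to-volume mapping just described might be sketched as follows; GAIN and MAX_VOLUME are hypothetical constants (the specification requires only that a longer distance yield a larger volume, whether by multiplication or by table lookup).

import math

GAIN = 0.5        # hypothetical distance-to-volume constant
MAX_VOLUME = 127  # hypothetical upper limit of the volume range

def straight_distance(coords):
    """Straight-line distance from the first to the last coordinate."""
    (x0, y0), (x1, y1) = coords[0], coords[-1]
    return math.hypot(x1 - x0, y1 - y0)

def trajectory_length(coords):
    """Sum of the straight-line distances between successive detections."""
    return sum(math.hypot(bx - ax, by - ay)
               for (ax, ay), (bx, by) in zip(coords, coords[1:]))

def volume_from_distance(d):
    """Longer strokes play louder, clamped to the volume range."""
    return min(MAX_VOLUME, int(d * GAIN))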

After the accumulated detection coordinates are discarded in step S535, the contact position detected this time is accumulated in the RAM 103 when the processing returns to step S500. Therefore, the position where contact starts immediately after a valid stroke operation is always stored as the oldest of the accumulated detection coordinates, and the coordinate values detected subsequently are accumulated after it. In this way, the value detected immediately before a valid stroke operation is judged to have been performed is stored as the newest of the accumulated detection coordinate values. The audio output unit 1002 can therefore obtain the coordinate values needed to calculate the distance by referring to the accumulated detection coordinates.

Lastly, for use in processing described later, the fact that a reciprocating stroke operation has been performed is stored in the RAM 103 or the like (for example, a reciprocating flag indicating whether a reciprocating stroke operation has been performed may be prepared in the RAM 103 and set) (step S536).

In this embodiment, as described later, a valid stroke operation is also recognized when the user sweeps the touch screen 108 in one direction. However, when the user performs a reciprocating stroke operation, the performance voice has already been output at the moment the stroke starts in the opposite direction. It would therefore be unnatural if, after the reciprocating stroke operation, the subsequent release were treated as a sweeping operation and the performance voice were output again.

Therefore, in the present embodiment, when the released state is detected, whether a reciprocating stroke operation was performed immediately beforehand is determined by referring to the reciprocating flag (step S504). If the reciprocating flag records that a reciprocating stroke operation was performed (step S504: YES), the information in the flag is discarded (step S505), the accumulated detection coordinates are discarded (step S506), and the processing returns to step S500. In other words, when a reciprocating stroke operation was performed immediately before the released state is detected, no performance voice is output and the processing returns to step S500.

On the other hand, when no reciprocating stroke operation was performed immediately before the released state is detected (step S504: NO), the CPU 101 calculates the speed of change of the contact position just before the released state was detected (step S507). If that speed is equal to or greater than a threshold speed (step S507: YES), the CPU 101 determines that a sweeping operation was performed effectively in one direction (that is, a valid one-way stroke operation was performed).

When step S507 determines that a valid one-way stroke operation was performed at a time within a predetermined range before or after the timing at which a performance voice should be output, the audio output unit 1002 acquires the performance voice stored in correspondence with that timing from the memory cassette 106. The volume of the performance voice is then specified by a predetermined method (step S508), and the performance voice is output at that volume (step S509). However, when the time at which the stroke operation is judged valid does not fall within the predetermined range before or after the timing at which the performance voice should be output, the audio output unit 1002 outputs a voice indicating failure.

The volume is specified based on the distance from the position where the contact was first detected to the contact position detected just before the released state. As in step S533, the distance between the two points may be a straight-line distance or a trajectory length, and it is obtained by referring to the accumulated detection coordinate values.
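A sketch of the sweep test of step S507, under the assumption that each accumulated sample carries a timestamp (x, y, t); THRESHOLD_SPEED is a hypothetical stand-in for the "predetermined threshold speed".

import math

THRESHOLD_SPEED = 300.0  # hypothetical threshold, pixels per second

def is_valid_sweep(samples):
    """Step S507 (sketch): speed of change of the contact position just
    before release, i.e. the distance between the last two detections
    divided by their time difference, compared with the threshold."""
    if len(samples) < 2:
        return False
    (x0, y0, t0), (x1, y1, t1) = samples[-2], samples[-1]
    if t1 <= t0:
        return False
    speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    return speed >= THRESHOLD_SPEED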

If detection positions for the predetermined number of detections have been accumulated in step S501 and all of them are within the predetermined range (step S501: YES), the CPU 101 determines that the change in the contact position has stopped for a predetermined time. If a voice is being output when this determination is made, the audio output unit 1002 stops the output (step S541). At this time, a special muting sound may be output. On the other hand, if no voice is being output when the change in the contact position has stopped for the predetermined time, the audio output unit 1002 does nothing and the process proceeds to step S542. In step S542, the CPU 101 discards all the accumulated detection coordinates and returns the process to step S500.

In this way, the audio processing apparatus 1000 outputs and stops voices.

(Other Embodiment)

In the above-described embodiment, the audio processing apparatus 1000 judges that a valid stroke operation has been performed when a sweeping operation is identified or when the direction of the stroke on the touch screen 108 reverses. This makes it possible to recognize a valid stroke operation regardless of the direction in which the user grips the game device 100, the direction in which the user strokes, and so on.

In this embodiment, a determination line corresponding to a guitar string is introduced to reproduce the mechanism of guitar sound production more faithfully. As in the above-described embodiment, a description will be given of an audio processing apparatus that absorbs the direction in which the user holds the game device, the direction in which the stroke is performed, and the like, and thereby enables simulation of a guitar performance. The determination line may or may not be displayed on the touch screen.

FIG. 12 shows a functional block diagram of the audio processing apparatus 1000 according to the present embodiment. In the present embodiment, an adjustment unit 1003 is provided in addition to the detection unit 1001 and the audio output unit 1002 of the above-described embodiment.

In addition, instead of the determination condition described in the above-described embodiment, the audio output unit 1002 judges that a valid stroke operation has been performed (that is, that the predetermined operation condition is satisfied) when the trajectory of the continuously detected contact position crosses a predetermined determination line.

The adjustment unit 1003 adjusts the direction and position of the determination line so that the angle at which the trajectory of the subsequently detected contact position intersects the determination line approaches a right angle. Here, the determination line simulates a guitar string laid out on the touch screen plane. That is, in this embodiment, the audio processing apparatus 1000 considers a valid stroke operation to have been performed when the user, while touching the touch screen 108, crosses the determination line.

The direction and position of the determination line are stored in the memory cassette 106 in, for example, a global coordinate system, and are read into the RAM 103 when the user instructs a game to start. In this manner, the memory cassette 106, the RAM 103, the CPU 101, and the like cooperate to function as the adjustment unit 1003.

Hereinafter, the operation of the audio processing apparatus 1000 of FIG. 12, configured as described above, will be described with reference to FIG. 13.

First, in step S601, as in step S500 of FIG. 10, when a coordinate value is detected, the audio processing apparatus 1000 accumulates that coordinate value. In this embodiment too, as will be apparent from the subsequent processing, the coordinate values accumulated in step S601 always form the trajectory, or a part of the trajectory, of the continuously detected contact positions. After the released state is detected or a valid stroke operation is identified, the accumulated coordinate values are discarded, and accumulation of detected coordinates starts again from that point in time.

Subsequently, when the accumulated detection positions for the predetermined number of detections are all within a predetermined range (step S602: YES), the change in the contact position has stopped for a predetermined time. Therefore, the same processing as in steps S541 and S542 is performed: the audio output unit 1002 stops the output if a voice is being output at that time (steps S610 and S611).

If the accumulated detection positions for the predetermined number are not all within the predetermined range (step S602: NO), the CPU 101 next determines whether the released state has been detected (step S603). When the released state is detected (step S603: YES), the CPU 101 discards all the accumulated detection positions (step S604) and returns the processing to step S601.

On the other hand, when what was detected this time is not the released state but a coordinate value (step S603: NO), the CPU 101 next determines whether the trajectory of the contact position has crossed the determination line (step S620). That is, it determines whether the line segment connecting the coordinate value detected this time and the most recently accumulated coordinate value intersects the determination line.
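The crossing test of step S620 might be sketched as follows, under the assumption that the determination line is represented by a point p0 on the line and a direction vector d (the specification leaves the representation open): the segment crosses the line exactly when its two endpoints lie on strictly opposite sides.

def side(p0, d, p):
    """Sign of the 2D cross product: which side of the line p lies on."""
    return (p[0] - p0[0]) * d[1] - (p[1] - p0[1]) * d[0]

def crosses_line(p0, d, prev, cur):
    """Step S620 (sketch): True when the segment prev->cur crosses the
    determination line through p0 with direction d."""
    return side(p0, d, prev) * side(p0, d, cur) < 0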

In this embodiment, when the trajectory of the contact position crosses the determination line (step S620: YES), a valid stroke operation is judged to have been performed. Therefore, as in the above-described embodiment, when this judgment is obtained at a time within a predetermined range before or after the timing at which a performance voice should be output, the audio output unit 1002 acquires the performance voice stored in correspondence with that timing from the memory cassette 106. The volume of the acquired performance voice is then specified by a predetermined method (step S621), and output of the performance voice is started (step S622). However, when the trajectory of the contact position is judged to have crossed the determination line at a time not included in the predetermined range before or after the timing at which the performance voice should be output, the audio output unit 1002 outputs a voice indicating failure. As in the above-described embodiment, the performance voice is output for a predetermined length.

In the present embodiment, the volume of the performance voice is specified based on the distance from the coordinate value detected immediately after the trajectory of the detected contact position was previously judged to have crossed the determination line, to the coordinate value detected immediately before the trajectory is judged to have crossed the determination line this time. As in the above-described embodiment, the distance between the two points may be a straight-line distance or a trajectory length, and the volume is made larger as the obtained distance becomes longer.

The distance used to calculate the volume will be described with reference to FIG. 14A, which shows an example of the trajectory of the coordinate values successively detected on the touch screen plane 1081. First, the stroke operation starts at point P, and the determination line L is crossed between the detection of point Q-1 and point Q. The stroke operation then continues, folds back at point R, and crosses the determination line again between the detection of point S-1 and point S.

In the example of FIG. 14A, when the stroke proceeds from point P toward point R, the trajectory is first judged to have crossed the determination line L when point Q is detected. Since the trajectory of the detection coordinates had not crossed the determination line L before that, the contact start position, point P, serves as the "coordinate value detected immediately after the previous crossing of the determination line", and the volume is specified based on the distance from point P to the coordinate value Q-1 detected just before this crossing of the determination line. Again, the distance may be a straight-line distance or a movement distance (trajectory length).

Subsequently, when the stroke folds back at point R and point S is detected, the trajectory is judged to have crossed the determination line L again. In this case, since the "coordinate value detected immediately after the previous crossing of the determination line" is point Q, the volume is specified based on the distance from point Q to the coordinate value S-1 detected just before this crossing of the determination line.

As is apparent from the flowchart of FIG. 13, in the present embodiment the coordinate values are accumulated from the starting point of the stroke. When the trajectory of the contact position crosses the determination line and a valid stroke operation is judged to have been performed, the detection coordinates accumulated up to that time are discarded in step S624, described later, and the coordinate values detected subsequently are accumulated starting from the coordinate value first detected after the crossing. That is, in the example of FIG. 14A, when the stroke operation continues past point Q, the detection coordinates are accumulated again from point Q.

Therefore, whether the contact crosses the determination line for the first time after contact starts, or the contact continues after a crossing and the determination line is crossed again, the information necessary for calculating the distance can be obtained by referring to the accumulated coordinate values.

When the line segment connecting the contact position detected this time and the contact position detected last time does not cross the determination line (step S620: NO), the coordinate value detected this time is accumulated as part of the trajectory of the continuously detected contact positions when the process next executes step S601.

Next, the adjustment unit 1003 adjusts the position of the determination line (step S623). Take as an example the case in FIG. 14A in which the user performs a stroke operation from point P toward point R. As described above, the trajectory is judged to have crossed the determination line when point Q is detected. The adjustment unit 1003 therefore updates the position and direction of the determination line L, rotating it by the angle θ as shown in FIG. 14B, so that the line segment connecting point Q (detected at the moment the crossing was judged) and point Q-1 (detected immediately before it) intersects the determination line L perpendicularly. The center about which the determination line L is rotated may be the point where the determination line and the trajectory of the detection coordinates intersect, or may be a predetermined position (for example, the center of the touch screen).
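A sketch of this adjustment, assuming the rotation center is taken to be the crossing point (the text also allows a fixed position such as the screen center): since the crossing point lies on the determination line, rotating the line by the angle θ about it amounts to re-anchoring the line at the center with a direction perpendicular to the stroke segment Q-1 -> Q.

import math

def adjust_line(center, q_prev, q):
    """Step S623 (sketch): re-orient the determination line so that it
    is perpendicular to the segment that crossed it, passing through
    the rotation center. Returns the new (anchor point, direction)."""
    vx, vy = q[0] - q_prev[0], q[1] - q_prev[1]
    n = math.hypot(vx, vy)
    if n == 0:
        return None               # degenerate sample; leave the line as is
    new_d = (-vy / n, vx / n)     # perpendicular to the stroke, normalized
    return center, new_d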

When the power of the game device 100 is turned off, the position and direction of the determination line at that time are written back to the memory cassette 106, and these values are used as the position and direction of the determination line the next time. Alternatively, the position of the determination line may be reset to its initial state each time the power is turned on.

In this way, by adjusting the determination line so that the direction of the stroke and the determination line intersect perpendicularly, the audio processing apparatus 1000 absorbs the direction in which the user grips the game device 100, the direction in which the stroke is performed, and the like, so that the user can operate it easily; in effect, the apparatus is customized to the user's habits.

Although embodiments of this invention have been described above, this invention is not limited to the embodiments described, and various modifications and applications are possible. It is also possible to freely combine the components of the above-described embodiments.

For example, the item selection device and the audio processing device may further include a contact member 206, and the user may use the contact member 206 when touching the touch screen. As shown in FIG. 8A, the contact member 206 is a variation of the so-called touch pen; it may have a guitar-pick shape so that an operation of sweeping the touch screen can be performed easily. The contact member 206 may also be provided with a projection at its tip so that the detection unit 203 and the detection unit 1001 can detect contact easily. The user grips the pick-shaped portion of the contact member 206 as shown in FIG. 8B and brings the projection into contact with the touch screen.

FIG. 8C shows a user simulating a guitar performance by bringing the contact member 206 into contact with the touch screen plane 1081. The dotted line represents the trajectory of the change in contact position (that is, the stroke). At this time, the determination line L is adjusted by the adjustment unit so as to intersect the direction of the stroke substantially perpendicularly.

In the item selection device according to the above embodiment, when the table is being scrolled, touching the touch screen fixes the content displayed at the contact position there, so that scrolling can be stopped. In addition, if all the coordinates detected from the start of the contact until release are within a predetermined range, the item output unit outputs the item displayed at those coordinates as the selection result. That is, besides stopping the scroll, the item displayed at the touched coordinate value is selected.

Therefore, for example, the item output unit may output the item displayed at the coordinates as the selection result only when all the coordinates detected from the start of the contact until release are within a predetermined range and the time from the start of the contact until release is within a predetermined threshold time. As a result, no selection is made when the same position remains touched for the threshold time or longer before release, and the window position at the moment the user touched simply continues to be displayed.

In the above embodiment, the moving unit of the item selection device moves the window position with its direction limited to vertical or horizontal according to the direction of the sweeping operation the user performs on the touch screen. Alternatively, the window may be moved in the direction opposite to the velocity of change of the contact position just before the contact is released. The table can then be scrolled and displayed in directions other than up-down and left-right.

Further, in the above embodiment, the moving unit of the item selection device wraps the position of the window around cyclically when it reaches the end of the table. Alternatively, the window may be moved back in the opposite direction.

For example, when the table is displayed in the area bounded by the origin (0, 0) and the points (W, 0), (0, L), and (W, L), and the window position (x, y) is not contained in that area, an x coordinate exceeding W may be handled by subtracting the excess over W from W to obtain the new x coordinate, and an x coordinate below 0 by adding the shortfall below 0 to 0. Likewise for the y coordinate: the excess over L is subtracted from L, and the shortfall below 0 is added to 0.
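Since the wording above admits more than one reading, the following is a sketch of one plausible interpretation of this bounce-back rule, in which the overshoot past a table edge is reflected back inside the area; W and L are the table width and height from the text.

def reflect(v, lo, hi):
    """Reflect a coordinate that has left the interval [lo, hi]."""
    if v > hi:
        return hi - (v - hi)   # excess over hi subtracted from hi
    if v < lo:
        return lo + (lo - v)   # shortfall below lo added back to lo
    return v

def bounce_window(x, y, W, L):
    """Bring the window position (x, y) back inside the table area."""
    return reflect(x, 0, W), reflect(y, 0, L)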

In the above embodiment, the moving unit of the item selection device moves the position of the window in step S407 using the velocity of change of the contact position just before the released state is detected. Alternatively, each time step S407 is executed, the computed velocity of the contact position just before release may be multiplied by a predetermined coefficient so that the movement speed of the window position gradually decreases.

In another embodiment, the adjustment unit of the audio processing device may adjust the determination line based on past stroke operations. For example, at each stroke operation, a direction vector (hereinafter called an intersection vector) is obtained by subtracting the contact position detected immediately before the crossing from the contact position detected at the moment the trajectory is judged to intersect the determination line. The intersection vector is added to an addition vector, the result is normalized, and the resulting addition vector is stored temporarily. In the initial state the addition vector has no value, so the first computed intersection vector becomes the addition vector. The stored position and direction of the determination line are then updated so that the line intersects the addition vector perpendicularly.

For example, when the stroke shown in FIG. 14A is performed, the intersection vectors are, as shown in FIG. 14C, the vector from point Q-1 to point Q and the vector from point S-1 to point S, and the addition vector is vector A.

Thereafter, similarly, at each point of intersection with the determination line, the intersection vector is obtained, added to the addition vector, and the result normalized. When adding, however, the directions of the intersection vector and the addition vector must be aligned, so the inner product of the two vectors is computed. If the inner product is smaller than a negative predetermined value, the two direction vectors are treated as nearly opposite; the intersection vector obtained this time is therefore negated to align its direction before being added to the addition vector. If, on the other hand, the inner product is larger than the predetermined value, the intersection vector may be added to the addition vector as it is. The position and direction of the determination line are then updated so that the line intersects the addition vector perpendicularly.
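A sketch of this addition-vector update; NEG_THRESHOLD again stands in for the "negative predetermined value" and is an assumption.

import math

NEG_THRESHOLD = -0.1  # hypothetical negative predetermined value

def update_addition_vector(acc, cross):
    """Add the new intersection vector to the addition vector, flipping
    it first when the two point in nearly opposite directions, then
    normalize the result. acc is None in the initial state."""
    if acc is None:
        ax, ay = cross                    # first crossing: vector as is
    else:
        dot = acc[0] * cross[0] + acc[1] * cross[1]
        if dot < NEG_THRESHOLD:           # nearly opposite directions
            cross = (-cross[0], -cross[1])
        ax, ay = acc[0] + cross[0], acc[1] + cross[1]
    n = math.hypot(ax, ay)
    return acc if n == 0 else (ax / n, ay / n)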

In this way, by adding the intersection vector of each stroke operation to the addition vector, which is the sum of the past intersection vectors, the user's stroke habits can be extracted more accurately.

Alternatively, instead of the intersection vector, the direction vector from the start position to the end point of the stroke that crossed the determination line may be used. Here, the end point of the stroke is the position immediately before the contact is released in the case of a sweep, and the position immediately before the stroke direction reverses in the case of a reciprocating stroke operation.

In the audio processing apparatus according to the above embodiment, the audio output unit determines the volume based on the distance traversed from the time contact starts until the predetermined operation condition is satisfied. Alternatively, the volume may be determined based on a representative value (for example, the average) of the speed of change between pairs of successively detected contact positions along the trajectory of the continuously detected contact positions. In other words, the faster the speed, the larger the volume.

In the audio processing apparatus according to the above embodiment, the time for which the audio output unit continuously outputs the performance voice is predetermined. Alternatively, the length for which the performance voice is continuously output may be determined based on a representative value (for example, the average) of the speed of change between pairs of successively detected contact positions along the trajectory of the continuously detected contact positions. In other words, the faster the speed, the longer the performance voice may be output.

The audio output unit may also start outputting the voice after a predetermined delay time elapses once the operation condition is satisfied. For example, when a performance takes place in a large venue, the sound reaches a listener far from the player only after a certain delay. By inserting such a delay time, it becomes possible to reproduce the effect of playing in a wide space even on a portable game machine.

In this case, the audio output unit may reset the delay time to a default value when release is detected, and shorten it each time the operation condition is satisfied consecutively. In general, a delay is considered harder to perceive when strokes are performed in succession than when a single stroke is performed. Therefore, to emphasize the first delay, each time the operation condition is satisfied consecutively, the delay time may be made shorter than the delay time used when the condition was satisfied the previous time.
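A sketch of this delay rule; DEFAULT_DELAY and DECAY are hypothetical values chosen for illustration only (the specification requires merely that consecutive strokes shorten the delay and that release restores the default).

DEFAULT_DELAY = 0.30  # hypothetical default delay, in seconds
DECAY = 0.5           # hypothetical shortening factor per stroke

class DelayController:
    def __init__(self):
        self.delay = DEFAULT_DELAY

    def on_release(self):
        """Release detected: return the delay to its default value."""
        self.delay = DEFAULT_DELAY

    def on_stroke(self):
        """Return the delay to apply before output, then shorten it
        for the next consecutive stroke."""
        d = self.delay
        self.delay *= DECAY
        return d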

In the item selection device and the audio processing device according to the above embodiments, the detection unit may be realized, besides a touch screen, by any hardware that detects the presence or absence of contact and the contact position, such as a trackpad or a tablet.

Besides a game device, the item selection device and the audio processing device according to the embodiments may be implemented on other terminal devices having a touch screen.

This application claims priority based on Japanese Patent Application No. 2008-151554, the entire contents of which are incorporated into this application.

As described above, according to the present invention, it is possible to provide an audio processing device, an audio processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while exploiting the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.

100: game device 101: CPU
102: ROM 103: RAM
104: interface 105: input unit
106: memory cassette 107: image processing unit
108: touch screen 109: NIC
110: voice processing unit 111: microphone
112: speaker 200: item selection device
201: storage unit 202: display unit
203: detection unit 204: item output unit
205: moving part 206: contact member
1000: sound processing device 1001: detection unit
1002: audio output unit 1003: adjustment unit

Claims (14)

1. An audio processing device (1000) comprising:
a detection unit (1001) which detects the contact position while the user is touching the surface of a contact-receiving portion, and detects the fact of release when the user releases the surface; and
an audio output unit (1002) which starts output of a predetermined output voice when a predetermined operation condition is satisfied,
wherein the predetermined operation condition is satisfied when:
(a) the released state is detected immediately after a contact position was detected, and the speed of change of the contact position immediately before the release was detected is equal to or greater than a predetermined threshold speed; or
(b) the direction of change of the continuously detected contact position reverses within a predetermined error range.
2. The audio processing device (1000) according to claim 1, wherein the predetermined operation condition, instead of being satisfied in the cases (a) and (b) above, is satisfied when:
(c) the trajectory of the continuously detected contact position crosses a predetermined determination line,
and the device further comprises an adjustment unit (1003) which adjusts the direction of the determination line so that the angle at which the trajectory of the continuously detected contact position intersects the determination line approaches a right angle.
3. The audio processing device (1000) according to claim 1, wherein, when the release is detected, the audio output unit (1002) inserts a predetermined delay time before starting to output the voice, and shortens the delay time each time the predetermined operation condition is satisfied consecutively.

4. The audio processing device (1000) according to claim 1, wherein the audio output unit (1002) determines the volume at which output of the output voice is started according to:
(d) the distance from the position at which contact started to the contact position detected just before the predetermined operation condition is satisfied; or
(e) the distance from the contact position detected at the time the predetermined operation condition was previously satisfied to the contact position detected just before the condition is satisfied the next time.
5. The audio processing device (1000) according to claim 1, wherein the audio output unit (1002) further outputs an accompaniment voice of predetermined music,
the accompaniment voice is associated with performance timings, each specified by the elapsed time since output of the accompaniment voice started, and with the performance voices to be output at those timings, and
when the time at which the predetermined operation condition is satisfied coincides with one of the performance timings associated with the accompaniment voice, the audio output unit (1002) starts outputting the performance voice to be output at that performance timing as the predetermined output voice.
6. The audio processing device (1000) according to claim 5, wherein, when the time at which the predetermined operation condition is satisfied does not coincide with any of the performance timings associated with the accompaniment voice, the audio output unit (1002) starts outputting a voice indicating failure as the predetermined output voice.

7. The audio processing device (1000) according to claim 1, wherein the audio output unit (1002) stops the output of the started output voice when the continuously detected contact position stays within a predetermined position range for a predetermined threshold time.

8. The audio processing device (1000) according to claim 1, further comprising a contact member (206) which the user picks up and brings into contact with the surface,
wherein the contact member (206) has a guitar-pick shape, or a shape in which a projection is arranged at the tip of the pick shape.
9. An audio processing method using an audio processing device (1000) comprising a detection unit (1001) and an audio output unit (1002), the method comprising:
a detection step in which the detection unit (1001) detects the contact position while the user is touching the surface of a contact-receiving portion, and detects the fact of release when the user releases the surface; and
an audio output step in which the audio output unit (1002) starts output of a predetermined output voice when a predetermined operation condition is satisfied,
wherein the predetermined operation condition is satisfied when:
(a) the released state is detected immediately after a contact position was detected, and the speed of change of the contact position immediately before the release was detected is equal to or greater than a predetermined threshold speed; or
(b) the direction of change of the continuously detected contact position reverses within a predetermined error range.
10. The audio processing method according to claim 9, wherein the audio processing device (1000) further comprises an adjustment unit (1003),
the predetermined operation condition, instead of being satisfied in the cases (a) and (b) above, is satisfied when:
(c) the trajectory of the continuously detected contact position crosses a predetermined determination line,
and the method further comprises an adjustment step in which the adjustment unit (1003) adjusts the direction of the determination line so that the angle at which the trajectory of the continuously detected contact position intersects the determination line approaches a right angle.
11. An information recording medium storing a program that causes a computer to function as:
a detection unit (1001) which detects the contact position while the user is touching the surface of a contact-receiving portion, and detects the fact of release when the user releases the surface; and
an audio output unit (1002) which starts output of a predetermined output voice when a predetermined operation condition is satisfied,
wherein the predetermined operation condition is satisfied when:
(a) the released state is detected immediately after a contact position was detected, and the speed of change of the contact position immediately before the release was detected is equal to or greater than a predetermined threshold speed; or
(b) the direction of change of the continuously detected contact position reverses within a predetermined error range.
12. The information recording medium according to claim 11, wherein the predetermined operation condition, instead of being satisfied in the cases (a) and (b) above, is satisfied when:
(c) the trajectory of the continuously detected contact position crosses a predetermined determination line,
and the program further causes the computer to function as an adjustment unit (1003) which adjusts the direction of the determination line so that the angle at which the trajectory of the continuously detected contact position intersects the determination line approaches a right angle.
13. A program that causes a computer to function as:
a detection unit (1001) which detects the contact position while the user is touching the surface of a contact-receiving portion, and detects the fact of release when the user releases the surface; and
an audio output unit (1002) which starts output of a predetermined output voice when a predetermined operation condition is satisfied,
wherein the predetermined operation condition is satisfied when:
(a) the released state is detected immediately after a contact position was detected, and the speed of change of the contact position immediately before the release was detected is equal to or greater than a predetermined threshold speed; or
(b) the direction of change of the continuously detected contact position reverses within a predetermined error range.
14. The program according to claim 13, wherein the predetermined operation condition, instead of being satisfied in the cases (a) and (b) above, is satisfied when:
(c) the trajectory of the continuously detected contact position crosses a predetermined determination line,
and the program further causes the computer to function as an adjustment unit (1003) which adjusts the direction of the determination line so that the angle at which the trajectory of the continuously detected contact position intersects the determination line approaches a right angle.
KR1020107007589A 2008-06-10 2009-05-29 Audio processing device, audio processing method, and information recording medium KR101168322B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2008151554A JP4815471B2 (en) 2008-06-10 2008-06-10 Audio processing apparatus, audio processing method, and program
JPJP-P-2008-151554 2008-06-10
PCT/JP2009/059894 WO2009150948A1 (en) 2008-06-10 2009-05-29 Audio processing device, audio processing method, information recording medium, and program

Publications (2)

Publication Number Publication Date
KR20100051746A true KR20100051746A (en) 2010-05-17
KR101168322B1 KR101168322B1 (en) 2012-07-24

Family

ID=41416659

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020107007589A KR101168322B1 (en) 2008-06-10 2009-05-29 Audio processing device, audio processing method, and information recording medium

Country Status (5)

Country Link
JP (1) JP4815471B2 (en)
KR (1) KR101168322B1 (en)
CN (1) CN101960513B (en)
TW (1) TW201011615A (en)
WO (1) WO2009150948A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6254391B2 (en) * 2013-09-05 2017-12-27 ローランド株式会社 Sound source control information generation device, electronic percussion instrument, and program
JP6299621B2 (en) * 2015-02-04 2018-03-28 ヤマハ株式会社 Keyboard instrument
WO2017017800A1 (en) * 2015-07-29 2017-02-02 株式会社ワコム Coordinate input device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07109552B2 (en) * 1987-05-29 1995-11-22 ヤマハ株式会社 Electronic musical instrument
JP2000066668A (en) * 1998-08-21 2000-03-03 Yamaha Corp Performing device
JP3566195B2 (en) * 2000-08-31 2004-09-15 コナミ株式会社 GAME DEVICE, GAME PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM
JP3922273B2 (en) * 2004-07-07 2007-05-30 ヤマハ株式会社 Performance device and performance device control program
JP4770419B2 (en) * 2005-11-17 2011-09-14 カシオ計算機株式会社 Musical sound generator and program
JP5351373B2 (en) * 2006-03-10 2013-11-27 任天堂株式会社 Performance device and performance control program
US8003874B2 (en) * 2006-07-03 2011-08-23 Plato Corp. Portable chord output device, computer program and recording medium

Also Published As

Publication number Publication date
CN101960513A (en) 2011-01-26
CN101960513B (en) 2013-05-29
JP4815471B2 (en) 2011-11-16
TW201011615A (en) 2010-03-16
KR101168322B1 (en) 2012-07-24
JP2009300496A (en) 2009-12-24
WO2009150948A1 (en) 2009-12-17

Similar Documents

Publication Publication Date Title
US8360836B2 (en) Gaming device, game processing method and information memory medium
US7435169B2 (en) Music playing apparatus, storage medium storing a music playing control program and music playing control method
KR100900794B1 (en) Method for dance game and the recording media therein readable by computer
JP4410284B2 (en) GAME DEVICE, GAME CONTROL METHOD, AND PROGRAM
JP4848000B2 (en) GAME DEVICE, GAME PROCESSING METHOD, AND PROGRAM
US20150103019A1 (en) Methods and Devices and Systems for Positioning Input Devices and Creating Control
JP4797045B2 (en) Item selection device, item selection method, and program
JP3579042B1 (en) GAME DEVICE, GAME METHOD, AND PROGRAM
JP4127561B2 (en) GAME DEVICE, OPERATION EVALUATION METHOD, AND PROGRAM
KR101168322B1 (en) Audio processing device, audio processing method, and information recording medium
JP6184203B2 (en) Program and game device
TWI300002B (en)
CN109739388B (en) Violin playing method and device based on terminal and terminal
JP5279744B2 (en) GAME DEVICE, GAME PROCESSING METHOD, AND PROGRAM
JP5210908B2 (en) Moving image generating device, game device, moving image generating method, and program
JP5222978B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
JP2004283264A (en) Game device, its control method, and program
JP2012065833A (en) Game device, game control method, and program
JP4956600B2 (en) GAME DEVICE, GAME PROCESSING METHOD, AND PROGRAM
JP5100862B1 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
JP5535127B2 (en) Game device and program
JP2011255018A (en) Game apparatus, game control method, and program
JP4071130B2 (en) Control device, character control method, and program
JP2012024437A (en) Image generating device, image generating method and program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment (payment date: 20150710; year of fee payment: 4)
FPAY Annual fee payment (payment date: 20160708; year of fee payment: 5)
FPAY Annual fee payment (payment date: 20170707; year of fee payment: 6)