KR20100051746A - Audio processing device, audio processing method, information recording medium, and program - Google Patents
- Publication number
- KR20100051746A
- Authority
- KR
- South Korea
- Prior art keywords
- detected
- predetermined
- voice
- contact position
- satisfied
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/32—Constructional details
- G10H1/34—Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
- G10H1/342—Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments for guitar-like instruments with or without strings and with a neck on which switches or string-fret contacts are used to detect the notes being played
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/091—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
- G10H2220/096—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith using a touch screen
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/135—Musical aspects of games or videogames; Musical instrument-shaped game input interfaces
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/155—User input interfaces for electrophonic musical instruments
- G10H2220/161—User input interfaces for electrophonic musical instruments with 2D or x/y surface coordinates sensing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2230/00—General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
- G10H2230/045—Special instrument [spint], i.e. mimicking the ergonomy, shape, sound or other characteristic of a specific acoustic musical instrument category
- G10H2230/075—Spint stringed, i.e. mimicking stringed instrument features, electrophonic aspects of acoustic stringed musical instruments without keyboard; MIDI-like control therefor
- G10H2230/135—Spint guitar, i.e. guitar-like instruments in which the sound is not generated by vibrating strings, e.g. guitar-shaped game interfaces
Abstract
The detection unit 1001 detects whether there is contact with the surface of the touch screen and, when there is, the coordinate value of the contact position. The audio output unit 1002 determines that a valid stroke operation has been performed when a sweeping release is detected or when the direction of the stroke on the touch screen reverses to the opposite direction. When a valid stroke operation is determined to have been performed at a correct timing stored in the audio processing device 1000, the audio output unit 1002 starts outputting the performance audio.
Description
The present invention relates to a voice processing device, a voice processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while utilizing the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.
Background Art: Conventionally, games simulating guitar performance have been proposed. Such a technique is disclosed in the following
The technique disclosed in
On the other hand, portable game machines having a touch screen are now widely used.
Therefore, for example, there is strong demand to realize the technique that simulates the performance of the musical instrument disclosed by
SUMMARY OF THE INVENTION: The present invention has been made to solve the above problems, and an object thereof is to provide a voice processing device, a voice processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while utilizing the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.
In order to achieve the above object, the voice processing device according to the first aspect of the present invention includes a detection unit and a voice output unit.
The detection unit detects the contact position when the user touches the surface of the contacted portion, and detects the release when the user lifts off the surface. In a game device in which the voice processing device is realized, the contacted portion is, for example, a touch screen in which a touch sensor is superimposed on a liquid crystal screen; when the user touches the surface of the touch screen, the detection unit detects a coordinate value indicating the contact position. When there is no contact with the surface, that is, in the released state, the detection unit detects that there is no contact. The detection unit performs this detection at predetermined time intervals, for example.
The audio output unit starts outputting the predetermined output audio when a predetermined operation condition is satisfied. The predetermined operation condition is deemed satisfied when:
(a) a release is detected immediately after a contact position is detected, and the speed of change of the contact position immediately before the release is detected is equal to or greater than a predetermined threshold speed; or
(b) the direction of change of the successively detected contact positions reverses to the opposite direction within a predetermined error range.
That is, condition (a), in which a release is detected immediately after a contact position and the change of the contact position just before the release is at or above the threshold speed, corresponds, for example, to the user touching the touch screen and then sweeping off of it. This operation by the user is determined to be a valid operation simulating a stroke across the guitar strings in one direction, and the audio output unit starts outputting the output audio.
Condition (b), in which the direction of change of the successively detected contact positions reverses within a predetermined error range, corresponds, for example, to the user moving the contact position back and forth while touching the touch screen. The direction of change need not reverse exactly; it suffices that it is nearly opposite, within a predetermined range. For example, when the direction of change of the contact position changes, the angle between the direction of change just before the reversal and the direction of change just after it need only fall within a predetermined range.
Such a user operation, in which the direction of change of the successively detected contact positions reverses within the predetermined error range, is determined to be a valid operation simulating the up stroke that follows the down stroke of a reciprocating stroke (or vice versa), and the audio output unit starts outputting the output audio.
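As an illustration, the two operation conditions above can be sketched as follows. This is a minimal sketch in Python; the threshold speed, the angular error range, and the function names are assumed values chosen for illustration, not values specified in this document.

```python
import math

THRESHOLD_SPEED = 0.5        # px/ms; assumed threshold for condition (a)
REVERSAL_TOLERANCE_DEG = 30  # assumed angular error range for condition (b)

def condition_a(prev_pos, last_pos, dt_ms, released):
    """(a): a release is detected, and the contact position was moving at
    or above the threshold speed just before the release."""
    if not released:
        return False
    speed = math.hypot(last_pos[0] - prev_pos[0],
                       last_pos[1] - prev_pos[1]) / dt_ms
    return speed >= THRESHOLD_SPEED

def condition_b(p0, p1, p2):
    """(b): the direction of change of successively detected contact
    positions reverses, within the angular error range."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return False
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    # near 180 degrees means the direction reversed
    return angle >= 180 - REVERSAL_TOLERANCE_DEG
```

A fast sweep that ends in a release satisfies (a); a back-and-forth motion whose direction vectors are nearly opposite satisfies (b).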
According to such a voice processing apparatus, output of the predetermined output voice is started when a release satisfying the condition is detected, or at the moment the direction of the stroke reverses. Therefore, the user can enjoy the guitar simulation without worrying about the orientation in which the game device carrying the audio processing device is held or the direction in which strokes are performed.
In addition to being satisfied in cases (a) and (b) above, the predetermined operation condition may also be deemed satisfied when:
(c) the trajectory of the successively detected contact positions crosses a predetermined determination line.
Here, the determination line is a line arranged at a predetermined position on the surface of the touch screen. When the trajectory of the contact position crosses the determination line while the user is touching the touch screen, output of the predetermined output voice is started at the moment the line is crossed. That is, the determination line corresponds to a guitar string, and by introducing it, the way a guitar produces sound can be simulated more faithfully.
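The crossing test for condition (c) amounts to a segment-intersection check between one step of the contact trajectory and the determination line. A minimal sketch (proper crossings only; endpoints merely touching the line are not counted):

```python
def _cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 (one trajectory step) properly crosses
    segment q1-q2 (the determination line)."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    # the endpoints of each segment lie on opposite sides of the other
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```

Checking each newly detected trajectory step against the line in this way gives the exact moment the "string" is struck.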
In addition, the sound processing apparatus may further include an adjusting unit. The adjusting unit may adjust the direction of the determination line so that the angle at which the trajectory of the successively detected contact positions intersects the determination line is close to a right angle.
In other words, there are individual differences in how users hold the game device in which the present audio processing device is realized and in the direction of the strokes they perform on the touch screen. To absorb these individual differences, the adjusting unit adjusts the direction of the determination line so that it is close to perpendicular to the direction of the user's strokes.
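One way to sketch this adjustment: estimate the stroke direction from the trajectory endpoints and rotate the determination line about its center so that it is perpendicular to that direction. The function names and the endpoint-based direction estimate are illustrative assumptions.

```python
import math

def stroke_direction(trajectory):
    """Approximate direction (radians) of a stroke from its endpoints;
    a simplifying assumption, since the document does not fix a method."""
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    return math.atan2(dy, dx)

def adjust_determination_line(center, half_length, stroke_angle):
    """Rotate the determination line about its center so that it lies
    perpendicular to the stroke direction."""
    a = stroke_angle + math.pi / 2
    dx, dy = half_length * math.cos(a), half_length * math.sin(a)
    return ((center[0] - dx, center[1] - dy),
            (center[0] + dx, center[1] + dy))
```

For a horizontal stroke, the adjusted line comes out vertical, so the stroke crosses it at close to a right angle.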
The audio output unit may insert a predetermined delay time before starting to output the audio when release is detected, and reduce the delay time when the predetermined operation condition is satisfied.
In other words, once the operation condition is satisfied, the audio output unit may start outputting the audio after a predetermined delay time has elapsed. When playing in a large venue, the sound reaches a listener far from the player only after a certain delay. Therefore, by inserting a delay time in this way, it becomes possible to reproduce, even on a portable game machine, the feeling of playing in a wide space.
In general, the delay is considered less noticeable when strokes are performed continuously than when a stroke is performed only once. Therefore, to emphasize the first delay, the delay time may be returned to its default value when a release is detected, and each time the operation condition is satisfied in succession, the delay time may be made shorter than when the operation condition was satisfied the previous time.
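This delay behavior might be sketched as follows; the default delay, the decay factor, and the lower bound are assumed values for illustration only.

```python
DEFAULT_DELAY_MS = 120   # assumed initial delay before output starts
DECAY = 0.5              # assumed shrink factor per consecutive stroke
MIN_DELAY_MS = 10        # assumed floor so the delay never reaches zero

class DelayController:
    def __init__(self):
        self.delay = DEFAULT_DELAY_MS

    def on_release(self):
        # a release resets the delay so the first stroke's delay stands out
        self.delay = DEFAULT_DELAY_MS

    def on_valid_stroke(self):
        # return the delay to apply now, then shorten it for the next
        # consecutive stroke
        current = self.delay
        self.delay = max(MIN_DELAY_MS, self.delay * DECAY)
        return current
```

The first stroke after a release carries the full delay; each consecutive stroke is delayed less, matching the observation that continuous strokes mask the delay.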
In addition, the audio output unit may determine the volume of the output voice whose output is to be started based on:
(d) the distance from the position at which the contact started to the contact position detected just before the predetermined operation condition is satisfied, or
(e) the distance from the contact position detected when the predetermined operation condition was satisfied the previous time to the contact position detected just before the operation condition is satisfied the next time.
In other words, the volume is controlled so that a small output voice is produced when the user makes a small stroke on the touch screen, and a large output voice when the user makes a large stroke: the volume is determined based on the distance from the position where the contact started on the touch screen to the contact position detected just before the operation condition is satisfied. Alternatively, when the user performs a reciprocating stroke on the touch screen, the volume can be controlled in the same way, based on the distance from the contact position detected when the predetermined operation condition was satisfied to the contact position detected just before it is satisfied the next time. Here, the distance between the two contact positions may be the straight-line distance or the length of the trajectory between them.
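A sketch of this volume determination; the mapping from distance to volume and the full-volume stroke length are assumptions, and both the straight-line and trajectory-length variants described above are shown.

```python
import math

MAX_STROKE_PX = 200.0   # assumed stroke length that maps to full volume

def trajectory_length(points):
    """Length of the trajectory through the accumulated contact points."""
    return sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(points, points[1:]))

def stroke_volume(points, use_path_length=True):
    """Map stroke size to a volume in [0, 1]: small strokes give quiet
    output, large strokes loud output."""
    if len(points) < 2:
        return 0.0
    if use_path_length:
        dist = trajectory_length(points)          # length of the trajectory
    else:
        dist = math.hypot(points[-1][0] - points[0][0],
                          points[-1][1] - points[0][1])  # straight line
    return min(1.0, dist / MAX_STROKE_PX)
```

For a reciprocating stroke the two variants differ: a 100 px sweep out and back has a trajectory length of 200 px but a straight-line distance of 0.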
The audio output unit may further output an accompaniment voice of a predetermined piece of music; with the accompaniment voice are associated performance timings, each specified by the elapsed time since output started, and the performance voices to be output at those timings.
In other words, in a game simulating guitar performance, an accompaniment voice is output, and the user plays along with it. The game stores the correct timings at which the user should play and the playing voices to be output at those timings.
In addition, when the predetermined operation condition is satisfied and the time at which one of the conditions is satisfied coincides with one of the performance timings associated with the accompaniment voice, the audio output unit may start outputting, as the predetermined output voice, the playing voice to be output at that performance timing.
That is, in a game simulating guitar performance, the user performs, on the touch screen, stroke operations that satisfy the operation condition for starting audio output (valid stroke operations) in time with the accompaniment. When the user performs a stroke operation on the touch screen at the correct timing (that is, when the timing of the user's operation coincides with the timing to be played), the correct playing voice is output.
In addition, when the predetermined operation condition is satisfied but the time at which either of the above conditions is satisfied does not coincide with any of the performance timings associated with the accompaniment voice, the audio output unit may start outputting, as the predetermined output voice, a voice indicating failure.
In other words, when the user performs a stroke operation on the touch screen at an incorrect timing that does not match the accompaniment, the audio output unit outputs a voice indicating failure.
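The timing judgment can be sketched as a window test around each stored performance timing; the tolerance value is an assumption, since the document only says the timings must "coincide".

```python
TIMING_WINDOW_MS = 80  # assumed tolerance around each performance timing

def judge_stroke(stroke_time_ms, performance_timings_ms):
    """Return the matched performance timing if the valid stroke falls
    within the window of any stored timing; None means a mistimed stroke,
    for which the failure voice would be output."""
    for t in performance_timings_ms:
        if abs(stroke_time_ms - t) <= TIMING_WINDOW_MS:
            return t
    return None
```

A matched timing selects the playing voice stored for that timing; an unmatched one triggers the failure voice.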
The audio output unit may stop the output of the above output audio when the contact positions detected successively over a predetermined threshold time remain within a predetermined position range. That is, the successively detected contact positions remaining within a predetermined range over the threshold time means that the stroke has stopped. Therefore, when this condition is satisfied, the audio output unit stops the output. The position itself is arbitrary; it suffices that the contact continues within a predetermined range around it.
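A sketch of this stop condition, checking whether all contact positions over the last threshold time lie within a small radius of their centroid; the threshold time, the radius, and the centroid-based formulation are illustrative assumptions.

```python
import math

STOP_THRESHOLD_MS = 300  # assumed threshold time
STOP_RADIUS_PX = 5.0     # assumed position range

def stroke_stopped(samples):
    """samples: list of (t_ms, x, y), newest last.  True when every
    contact position over the last threshold time lies within the
    position range, i.e. the stroke has stopped."""
    if not samples:
        return False
    t_now = samples[-1][0]
    recent = [(x, y) for t, x, y in samples if t_now - t <= STOP_THRESHOLD_MS]
    if len(recent) < 2:
        return False
    cx = sum(x for x, _ in recent) / len(recent)
    cy = sum(y for _, y in recent) / len(recent)
    return all(math.hypot(x - cx, y - cy) <= STOP_RADIUS_PX
               for x, y in recent)
```

When this returns True, the output voice currently playing would be stopped.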
The device may further include a contact member for the user to pick up and touch the surface with, wherein the contact member has the shape of a pick, or a shape in which a protrusion is arranged at the tip of the pick shape. That is, the contact member is a so-called touch pen whose handle has the shape of a guitar pick and whose tip may be provided with a protrusion. Because it has the shape of a guitar pick, the user can easily perform stroke operations and experience the same realism as playing a guitar. Moreover, providing a protrusion at the tip makes the contact position easier to detect.
A sound processing method according to another aspect of the present invention is a sound processing method using a sound processing apparatus including a detection unit and a sound output unit. In the detection step, the detection unit detects the contact position when the user touches the surface of the contacted portion, and detects the release when the surface is released.
In the audio output step, the audio output unit starts outputting the predetermined output audio when the predetermined operation condition is satisfied. Here, the predetermined operation condition is deemed satisfied when:
(a) a release is detected immediately after a contact position is detected, and the speed of change of the contact position immediately before the release is detected is equal to or greater than a predetermined threshold speed; or
(b) the direction of change of the successively detected contact positions reverses to the opposite direction within a predetermined error range.
In addition, the audio processing apparatus may further include an adjusting unit, and the predetermined operation condition, besides being satisfied in cases (a) and (b) above, may also be deemed satisfied when:
(c) the trajectory of the successively detected contact positions crosses a predetermined determination line.
In this case, the method may include an adjustment step in which the adjusting unit adjusts the direction of the determination line so that the angle between the trajectory of the successively detected contact positions and the determination line is close to a right angle.
In addition, the program recorded by the information recording medium according to another aspect of the present invention is configured so that the computer functions as the above-mentioned sound processing apparatus. A program recorded by an information recording medium according to another aspect of the present invention is configured to cause a computer to execute the above voice processing method.
Moreover, the program according to another aspect of the present invention is configured to cause a computer to function as the above audio processing apparatus. A program according to another aspect of the present invention is configured to cause a computer to execute the above voice processing method.
In addition, the program of the present invention can be recorded on a computer-readable information recording medium such as a compact disc, a flexible disk, a hard disk, a magneto-optical disk, a digital video disc, a magnetic tape, or a semiconductor memory. The program can be distributed and sold via a computer communication network independently of the computer on which it is executed. The information recording medium can likewise be distributed and sold independently of the computer.
According to the present invention, it is possible to provide a speech processing apparatus, a speech processing method, an information recording medium, and a program suitable for simulating the performance of a musical instrument while utilizing the characteristics of hardware, such as a touch screen, that can detect the presence or absence of contact and the contact position.
1 is a schematic diagram showing a schematic configuration of a typical game device in which an item selection device according to an embodiment is realized;
2 is a view showing an appearance of a typical game device in which the item selection device according to one embodiment is realized;
3 is a functional block diagram of an item selecting apparatus according to an embodiment;
4A is a diagram showing a table, a window, and their relationship in an item selection device according to an embodiment;
4B illustrates a situation in which a table element covered with a window is displayed on the touch screen;
5 is a flowchart for describing a processing operation of an item selection device according to an embodiment;
6A is a diagram for explaining a window moving direction with respect to the movement of a contact position of a touch screen;
FIG. 6B is a diagram for explaining the movement direction of the window with respect to the movement of the touch position on the touch screen;
FIG. 6C shows how the area of the displayed table changes with respect to the movement of the contact position;
FIG. 6D shows how the area of the displayed table changes with respect to the movement of the contact position;
FIG. 7A illustrates a situation in which the window is rearranged when the position of the window falls outside the area of the table;
FIG. 7B illustrates a situation in which the elements of the table are displayed wrapping around when the window reaches the end of the table;
FIG. 7C illustrates a situation in which the elements of the table are displayed wrapping around when the window reaches the end of the table;
8A illustrates an example of a pick-shaped touch pen;
8B is a view showing a situation where a user grips the pick-shaped touch pen;
FIG. 8C is a diagram showing a situation in which a user is simulating a guitar performance on a touch screen using the touch pen shown in FIG. 8A;
9 is a functional block diagram of a speech processing apparatus according to an embodiment;
10 is a flowchart for explaining the processing operation of the audio processing device according to the embodiment;
11 shows an example of the trajectory of a contact position;
12 is a functional block diagram of a speech processing device according to another embodiment;
13 is a flowchart for explaining a processing operation of a speech processing device according to another embodiment;
14A shows an example of the trajectory of the contact position;
14B is a diagram showing a situation of adjusting the position and direction of the determination line;
Fig. 14C is a diagram showing a method for obtaining the direction of a stroke when adjusting the judgment line.
The game apparatus according to the present embodiment functions, broadly speaking, as an item selection apparatus and as an audio processing apparatus, as described below. That is, the user first uses the game device as the item selection device to select, from a list of pieces of music, the piece whose performance is to be simulated. Next, the game apparatus as the audio processing apparatus simulates a guitar performance using the selected piece of music.
1 is a schematic diagram showing a schematic configuration of a typical portable game device in which an item selection device and a sound processing device according to an embodiment of the present invention are realized. 2 shows an external view of the portable game device. The following description will be made with reference to this drawing.
By mounting a memory cassette 106 (described later in detail) storing a program and data for a game in a slot (not shown) connected to the
The image calculation processor can execute a superimposition operation of two-dimensional images, a transmission operation such as alpha blending, and various saturation operations at high speed.
Further, it is also possible to perform, at high speed, an operation of rendering by the Z-buffer method polygon information that is arranged in a three-dimensional virtual space and to which various texture information has been added, so as to obtain a rendered image of the polygons as viewed from a predetermined viewpoint position.
In addition, by the cooperative operation of the
In addition, in accordance with the instruction input by the user via the
The current date and time information can also be obtained by connecting to an SNTP server on the Internet via the
When the voice data recorded in the
In addition, the
In addition, the
Alternatively, the
The item selection device and the audio processing device according to the present embodiment are realized on a portable game device, but can also be implemented on a general computer. A general computer, like the
In the following, the item selection device will be described, and then the audio processing device will be described. Unless otherwise noted, the item selection device and the audio processing device will be explained by the
(Item selection device)
3 is a block diagram showing a schematic configuration of an
Note that in the list of items displayed in FIG. 4B, the boundaries of the elements do not coincide with the boundary of the display area, but they may be adjusted so as to coincide.
Thus, since the area covered by the window is the area displayed by the
The position of the window is expressed, for example, as the coordinate value of the window's origin O' relative to the table's origin O (e.g., the upper left of the table), that is, the number of pixels from O to O' in the X and Y directions. Moreover, if the size of the displayed table in the X direction is W, the size w in the X direction of each cell (see 301 of FIG. 4A) in which an element is displayed is W / N (N: number of columns); if the size of the table in the Y direction is L, the size l in the Y direction of each cell is L / M (M: number of rows).
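These relationships can be sketched as follows; the function for mapping a contact point through the window position to a table element is an illustrative extrapolation, using the cyclic wrapping at the table boundary that the operation description covers below.

```python
def cell_size(W, L, N, M):
    """Size (w, l) of one cell of an N-column, M-row table of size W x L."""
    return W / N, L / M

def element_at(contact, window_pos, W, L, N, M):
    """Table element (row, col) displayed at a window-relative contact
    point, given the window origin O' in pixels from the table origin O.
    Coordinates wrap cyclically at the table boundary."""
    w, l = cell_size(W, L, N, M)
    x = (window_pos[0] + contact[0]) % W
    y = (window_pos[1] + contact[1]) % L
    return int(y // l), int(x // w)
```

For a 400 x 300 table with 4 columns and 3 rows, each cell is 100 x 100, and a contact maps to the cell under the window at that point.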
In addition, while the touch screen is being touched by the user, the moving
(Action processing)
The processing operation of the
When the power supply of the
When what is detected this time is a coordinate value (step S403: NO), the CPU 101 next determines whether the state detected last time was the released state or a different coordinate value (step S410). If the state detected last time was the released state, it is determined that contact has just been started by the user (step S410: YES), and the
On the other hand, if what is detected this time is a coordinate value (step S403: NO) and the state detected last time was not the released state (step S410: NO), it means that a coordinate value different from the previously detected one has been detected this time. That is, the contact position is moving while contact is maintained. At this time, the moving
That is, as shown in FIG. 6A, when the state detected last time was the coordinate values (p1, q1) and the state detected this time is the coordinate values (p, q), the contact position has moved by (p - p1) in the X direction and by (q - q1) in the Y direction. Therefore, as shown in FIG. 6B, the moving
For example, in FIG. 6C, the upper left portion of the element GG is in contact. In this contacted state, when the contact position is moved in the direction of the arrow shown in Fig. 6C, the window position is moved by the same amount of change in the direction opposite to the direction of change of the contact position. As a result, as shown in FIG. 6D, the upper left portion of the element GG is displayed in the contact position similarly to FIG. 6C.
In addition, when the window position is changed by a drag operation and the window area goes outside the table area, the window position (x, y) is rearranged so that it always represents coordinates within the table area. For example, if the position of the window reaches one end of the table, the position of the window is moved to the opposite end. That is, when the table is displayed in the area bounded by (0, 0), (W, 0), (0, L), and (W, L), the boundary of the table 300 indicated by X = W is treated as the boundary line X = 0, and the boundary line indicated by Y = L is treated as the boundary line Y = 0.
Therefore, for the coordinate value indicating the position of the window, any amount exceeding W in the X direction is added to 0, and any amount below 0 in the X direction is subtracted from W. Likewise, any amount exceeding L in the Y direction is added to 0, and any amount below 0 in the Y direction is subtracted from L. In this way, the position of the window moves cyclically.
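In Python, this cyclic rearrangement is exactly the modulo operation, since `%` maps amounts past W (or L) back from 0 and maps negative amounts into the range by subtracting them from W (or L):

```python
def rearrange_window(x, y, W, L):
    """Wrap the window position back into the table area: amounts past
    W (or L) wrap to 0, and amounts below 0 wrap back from W (or L)."""
    return x % W, y % L
```

For example, a window dragged 50 pixels past the right edge of a 400-pixel-wide table reappears 50 pixels from the left edge.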
For example, in FIG. 7A, the location of
As a result, the process returns to step S400, and when the
However, when part of the window's area is positioned outside the table as the window position moves in step S412, the
This is computed as follows, for example. Let W and L be the sizes in the X and Y directions of the table 300, respectively, and let W' and L' be the sizes in the X and Y directions of the
Thus, for example, as shown in FIG. 7B, when the
On the other hand, if what is detected this time is the released state (step S403: YES), the
For example, when the accumulated detection positions are all contained in an area of a predetermined radius, it is determined that the contact position has not moved (step S404: YES), and the
When the selection result is obtained, the item selection processing is finished, and the
The accumulated coordinate value may be discarded after step S405 (step S420).
On the other hand, when the accumulated detection positions are not all within a predetermined range (step S404: NO), it is determined that the contact position is moving, and the
For example, let (p1, q1) be the detection coordinates immediately before the contact is released and (p2, q2) the detection coordinates immediately before those (obtained by referring to the latest and second-latest coordinate values accumulated in the storage unit 201), and let T1 be the detection time interval. Then the velocity of the contact position just before the release is
((p1 - p2) / T1, (q1 - q2) / T1).
If the larger of the x component and the y component is equal to or greater than a predetermined threshold speed (step S406: YES), it is determined that the user has performed a sweeping operation on the touch screen, and the moving
That is, when the x component is larger than the y component, the direction of the movement speed of the contact position just before the release is close to the X direction; the position of the window is therefore moved at a speed of (p1 - p2) / T1 in the row direction (left-right). Conversely, when the y component is larger, the direction of the movement speed is close to the Y direction, and the position of the window is moved at a speed of (q1 - q2) / T1 in the column direction (up-down). The calculated movement speeds of the contact position in the X and Y directions may be multiplied by a predetermined coefficient to obtain the window movement speed. If the magnitudes of the x component and the y component are equal, either the row or the column is moved in a predetermined direction at the speed of the corresponding component.
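A sketch of this flick handling; the velocity formula and the component comparison follow the text, while the function names are illustrative:

```python
def flick_velocity(p1, q1, p2, q2, T1):
    """Velocity of the contact position just before release:
    ((p1 - p2) / T1, (q1 - q2) / T1)."""
    return (p1 - p2) / T1, (q1 - q2) / T1

def scroll_axis(vx, vy):
    """Scroll along whichever velocity component is larger in magnitude:
    the row direction ('x') or the column direction ('y').  Ties go to
    'x' here as an assumed choice of the 'predetermined direction'."""
    if abs(vx) >= abs(vy):
        return "x", vx
    return "y", vy
```

The window is then scrolled along the returned axis at the returned speed (optionally scaled by a coefficient).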
In addition, as described above with respect to the drag operation, when the window reaches the boundary of the table image, the moving
(Voice processing device)
Next, a description will be given of the
9 is a schematic diagram showing a schematic configuration of a
On the basis of the detected result, the
(Action processing)
The operation processing of the
The voice processing device 1000 of the present embodiment simulates a guitar performance in response to the user's stroke operations on the touch screen.
When a song is selected by the user's instruction and a game start instruction is given by pressing a predetermined control button or the like, the voice output unit 1002 starts outputting the accompaniment voice of the selected song.
Alternatively, the timing of outputting the performance voice may be indicated to the user visually. For example, a marker that advances in a predetermined direction at predetermined intervals with time is displayed, together with a timing marker that indicates, in the direction in which the marker advances, the timing at which the performance voice is to be output. The user operates the touch screen so that a predetermined operation condition is satisfied when the moving marker reaches the timing marker. The timing for outputting each performance voice is expressed, for example, as a relative time from the start time, where the time at which output of the accompaniment voice is started is taken as 0, and is stored in correspondence with the respective performance voice.
Hereinafter, with reference to FIG. 10, the flow of the processing performed by the voice processing device 1000 will be described.
The detection unit 1001 detects either a contact position or a released state; when a coordinate value is detected, it is accumulated as part of the trajectory of the contact position (step S500).
In addition, as is apparent from the subsequent processing, the accumulated coordinate values constitute all or part of the trajectory of the detected coordinate values. After a released state is detected or a valid stroke operation is specified, the accumulated coordinate values are discarded, and the accumulation of detected coordinates is started again from that point in time. The coordinate values thus stored are used later to determine the output volume and the like.
Subsequently, it is determined whether the detection positions for a predetermined number of cases have been accumulated and all of them are within a predetermined range (step S501).
The voice processing device 1000 then determines whether the state detected this time has changed from the state detected last time (step S502).
If it is determined that the detected state has changed (step S502: YES), it is determined whether the state detected this time is a released state (step S503).
If what is detected this time is not a released state but a coordinate value (step S503: NO), the state detected last time is either a released state or a different coordinate value. First, if what was detected last time is a released state (step S521: YES), it means that a contact has newly started, and the coordinate value detected this time is accumulated as the start position of the contact.
On the other hand, when the coordinate value detected this time is different from the coordinate value detected last time (step S521: NO), it means that the continuously detected contact position has changed. At this time, it is determined whether the direction of movement has changed to the opposite direction (step S532).
That is, the direction of movement obtained from the contact position detected this time and the contact position detected last time is compared with the last direction of movement, and it is determined whether the direction has changed to the opposite direction within a predetermined error range.
In addition, when the contact position has changed for the first time after the contact is started, "the last direction of movement" does not exist. Therefore, in this case, the moving direction is treated as not having changed to the opposite direction (that is, the processing proceeds as if the determination in step S532 were "NO").
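One way to realize the "opposite direction within a predetermined error range" test of step S532 is to compare the angle between the last and current movement vectors against 180 degrees. The sketch below is a hypothetical realization; the 30-degree default error range is an assumed value, not part of the original disclosure.

```python
import math

def is_reversed(prev_dir, cur_dir, error_deg=30.0):
    """True when cur_dir points opposite to prev_dir, within error_deg
    degrees of exactly 180. Directions are (dx, dy) movement vectors."""
    dot = prev_dir[0] * cur_dir[0] + prev_dir[1] * cur_dir[1]
    n1 = math.hypot(*prev_dir)
    n2 = math.hypot(*cur_dir)
    if n1 == 0 or n2 == 0:
        return False    # no movement: direction undefined (treated as NO)
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    angle = math.degrees(math.acos(cos_a))
    return angle >= 180.0 - error_deg
```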
As described above, when it is determined that the direction of the stroke has changed to the opposite direction (that is, when "YES" is determined in step S532), in this embodiment, it is determined that an effective reciprocating stroke operation has been performed. When this determination is obtained at a time within a predetermined range before and after the timing at which a performance voice is to be output, the voice output unit 1002 determines the volume and outputs that performance voice (step S533).
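The check that the stroke falls "within a predetermined range before and after" a performance timing can be sketched as a simple window comparison. The 0.15-second half-width is an assumed value for illustration only.

```python
def within_timing_window(now, performance_time, window=0.15):
    """True when the time at which the operation condition was satisfied
    falls within the predetermined range (here +/- window seconds)
    around the timing at which the performance voice is to be output."""
    return abs(now - performance_time) <= window
```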
A method of determining the volume of the performance voice output by the voice output unit 1002 will now be described with reference to FIG. 11.
In the example of FIG. 11, the first time the stroke changes to the opposite direction is the point in time when the contact position immediately after point B (that is, point B') is detected. Therefore, it is determined that an effective reciprocating stroke operation was performed for the first time when point B' is detected. In this case, the volume is determined according to the distance from point A, where the contact started, to point B, just before the moving direction changed, and the performance voice is output. Similarly, when point C' is detected, it is determined that an effective reciprocating stroke operation has been performed again; the volume is determined according to the distance from point B' to point C, and the performance voice is output.
The distance from point A to point B, or from point B' to point C, may be a linear distance or the length of the trajectory. Here, the length of the trajectory is, for example, the sum of the linear distances between the contact coordinates detected consecutively from point A to point B, or from point B' to point C. In this embodiment, whichever distance is used, the longer the obtained distance, the larger the output volume. For example, the obtained distance may be multiplied by a predetermined coefficient to obtain a volume, or a table indicating the volume according to the distance may be prepared in advance, and the corresponding volume obtained from the calculated distance with reference to that table.
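The trajectory-length measure and the coefficient-based volume mapping described above might look like this in outline. The coefficient and the 0 to 127 volume range are illustrative assumptions, not values from the original disclosure.

```python
import math

def trajectory_length(points):
    """Sum of linear distances between consecutively detected contact
    coordinates, e.g. from point A to point B."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def volume_from_distance(distance, coefficient=0.5, max_volume=127):
    """Longer stroke distance -> larger volume (a linear map with a
    hypothetical coefficient, clipped to a MIDI-like 0..127 range)."""
    return min(max_volume, int(distance * coefficient))
```

A lookup table keyed by distance bands, as the text also suggests, would replace the linear map here.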
In addition, after the accumulated detection coordinates are discarded in step S535, the contact position detected this time is accumulated anew as the start point of the trajectory.
Lastly, for use in the processing described later, the fact that a reciprocating stroke operation has been performed is stored in the reciprocating flag, and the processing returns to step S500.
In this embodiment, as will be described later, it is determined that an effective stroke operation has been performed even when the user sweeps the touch screen in one direction and then releases the contact. However, when a reciprocating stroke operation is performed, a released state is naturally detected when the user finally lifts the finger; if this release were also treated as a one-way stroke, the performance voice would be output one extra time.
Therefore, in the present embodiment, when a released state is detected, whether or not a reciprocating stroke operation was performed immediately before is checked with reference to the reciprocating flag (step S504). When a reciprocating stroke operation is stored in the reciprocating flag (step S504: YES), the information of the reciprocating flag is discarded (step S505), the accumulated detection coordinates are discarded (step S506), and the processing returns to step S500. In other words, when a reciprocating stroke operation was performed immediately before the released state is detected, no performance voice is output, and the processing returns to step S500.
On the other hand, when no reciprocating stroke operation was performed immediately before the released state is detected (step S504: NO), it is determined whether an effective one-way stroke operation has been performed (step S507).
When it is determined in step S507 that an effective one-way stroke operation has been performed at a time within a predetermined range before and after the timing at which a performance voice is to be output, the voice output unit 1002 determines the volume and outputs that performance voice.
In addition, the volume is specified based on the distance from the position at which detection of the contact started to the contact position detected just before the released state is detected. As in step S533, the distance between the two points may be a linear distance or the length of a trajectory, and is obtained by referring to the accumulated detection coordinate values.
In addition, if the detection positions for a predetermined number of cases have been accumulated in step S501, and all of these detection positions are within a predetermined range (step S501: YES), it means that the contact position has not changed for a predetermined time; the accumulated detection coordinates are discarded, and the processing returns to step S500 (steps S541 and S542).
In this way, the voice processing device 1000 can simulate a guitar performance from the user's stroke operations on the touch screen.
(Other Embodiment)
In the above-described embodiment, the voice processing device 1000 determines that an effective stroke operation has been performed based on a reciprocating stroke or on a sweep performed just before release.
In this embodiment, a determination line corresponding to a guitar string is introduced to reproduce the manner of producing guitar sound more faithfully. In addition, similarly to the above-described embodiment, a voice processing device will be described that absorbs differences in the direction in which the user holds the game device, the direction in which strokes are performed, and the like, and thereby enables the simulation of a guitar performance. This determination line may or may not be displayed on the touch screen.
FIG. 12 shows a functional block diagram of the voice processing device 1000 according to this embodiment.
However, instead of the determination condition described in the above-described embodiment, the condition that the trajectory of the detected contact position crosses a predetermined determination line is used.
In addition, the adjustment unit 1003 adjusts the direction and position of the determination line so that the angle at which the trajectory of the subsequently detected contact position intersects the determination line is close to a right angle. Here, the determination line simulates a guitar string arranged on the plane of the touch screen. That is, in this embodiment, a stroke that crosses the determination line is treated as plucking a guitar string.
The direction and the position of the determination line are stored, and are updated by the adjustment unit 1003 as described later.
Hereinafter, the operation processing of the voice processing device 1000 according to this embodiment will be described with reference to the flowchart shown in FIG. 13.
First, in step S601, similarly to step S500 shown in FIG. 10, when a coordinate value is detected, it is accumulated as part of the trajectory of the contact position.
Subsequently, when the detection positions for the accumulated predetermined number of cases are all within a predetermined range (step S602: YES), it means that the change of the contact position has stopped for a predetermined time. Therefore, the same processing as in steps S541 and S542 is performed, and the processing returns to step S601.
If the detection positions for the predetermined number of cases are not all within a predetermined range (step S602: NO), it is determined whether what is detected this time is a released state (step S603).
On the other hand, when what is detected this time is not a released state but a coordinate value (step S603: NO), it is determined whether the line segment connecting the contact position detected this time and the contact position detected last time crosses the determination line (step S620).
In this embodiment, when the trajectory of the contact position crosses the determination line (step S620: YES), it is determined that an effective stroke operation has been performed. Therefore, similarly to the above-described embodiment, when it is determined that an effective stroke operation has been performed at a time within a predetermined range before and after the timing at which a performance voice is to be output, the voice output unit 1002 determines the volume and outputs that performance voice.
In addition, in the present embodiment, the volume of the performance voice is specified based on the distance from the coordinate value detected immediately after the point in time at which the trajectory of the detected contact position was previously determined to have crossed the determination line, to the coordinate value detected immediately before the point in time at which it is determined to have crossed the determination line this time. Here, similarly to the above-described embodiment, the distance between the two points may be a linear distance or the length of a trajectory. Moreover, the longer the obtained distance, the larger the volume is made.
Referring to FIG. 14A, the distance used for calculating the volume will be described. FIG. 14A shows an example of the trajectory of the subsequently detected coordinate values together with the determination line L.
In the example of FIG. 14A, when a stroke is made from point P to point R, it is determined for the first time that the determination line L has been crossed when point Q is detected. However, since the trajectory of the detection coordinates had not crossed the determination line L before that, the contact start position, point P, is taken as the "coordinate value detected immediately after the previous crossing of the determination line," and the volume is specified based on the distance from point P to the coordinate value Q-1 detected just before this crossing of the determination line. As before, the distance may be a linear distance or a moving distance (trajectory length).
Subsequently, when the stroke is folded back at point R and point S is detected, it is determined that the determination line L has been crossed again. In this case, since the "coordinate value detected immediately after the previous crossing of the determination line" is point Q, the volume is specified based on the distance from point Q to the coordinate value S-1 detected just before this crossing of the determination line.
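The crossing test used at points Q and S above can be expressed with signed cross products: the segment from the previous to the current contact position crosses the line exactly when its endpoints lie on opposite sides. This is a minimal sketch in which the determination line is represented by a point and a direction vector, a representation assumed for illustration.

```python
def side(line_pt, line_dir, p):
    """Sign of the cross product: which side of the determination line
    the point p lies on (0 means exactly on the line)."""
    return ((p[0] - line_pt[0]) * line_dir[1]
            - (p[1] - line_pt[1]) * line_dir[0])

def crosses_line(line_pt, line_dir, prev, cur):
    """True when the segment from the previously detected contact
    position to the current one crosses the determination line."""
    s1 = side(line_pt, line_dir, prev)
    s2 = side(line_pt, line_dir, cur)
    return (s1 > 0 > s2) or (s1 < 0 < s2)
```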
In addition, as is apparent from the flowchart shown in FIG. 13, in the present embodiment, coordinate values are accumulated from the start point of the stroke. When the trajectory of the contact position crosses the determination line and it is determined that an effective stroke operation has been performed, the detection coordinates accumulated up to that point are discarded in step S624 described later, and coordinate values are accumulated anew from the coordinate value first detected after crossing the determination line. That is, for example, as in the example of FIG. 14A, when the stroke operation is continued from point Q, the detection coordinates are accumulated again from point Q.
Therefore, both when the contact crosses the determination line for the first time after the start of the contact, and when the contact is continued after crossing the determination line and the determination line is crossed again, the information necessary for calculating the distance can be obtained by referring to the accumulated coordinate values.
In addition, when the line segment connecting the contact position detected this time and the contact position detected last time does not cross the determination line (step S620: NO), the processing returns to step S601, and the coordinate value detected this time is accumulated as part of the trajectory of the detected contact positions.
Next, the adjustment unit 1003 adjusts the position of the determination line (step S623). For example, in FIG. 14A, consider the case where the user performs a stroke operation from point P toward point R. As described above, when point Q is detected, it is determined that the determination line has been crossed. Therefore, the adjustment unit 1003 updates the position and direction of the determination line L, rotating it by the angle θ as shown in FIG. 14B, so that the line segment connecting point Q, detected at the time the line is determined to have been crossed, and point Q-1, detected immediately before, intersects the determination line L perpendicularly. The center about which the determination line L is rotated may be the point where the determination line and the trajectory of the detection coordinates intersect, or may be a predetermined position (for example, the center of the touch screen).
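The perpendicular adjustment performed by the adjustment unit 1003 can be sketched as follows. Here the determination line is represented only by its direction vector, and the sign-matching step that avoids 180-degree flips is an added assumption for illustration.

```python
import math

def adjust_line(line_dir, seg):
    """Rotate the determination-line direction so the line is
    perpendicular to the segment (Q-1 -> Q) that crossed it."""
    # A direction perpendicular to the stroke segment:
    perp = (-seg[1], seg[0])
    # Keep the orientation close to the old line direction.
    if perp[0] * line_dir[0] + perp[1] * line_dir[1] < 0:
        perp = (-perp[0], -perp[1])
    n = math.hypot(*perp)
    return (perp[0] / n, perp[1] / n)
```

A horizontal line crossed by a vertical stroke is already perpendicular and therefore remains unchanged.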
In addition, when the power supply of the voice processing device 1000 is turned on, the determination line is set to a predetermined initial position and direction.
In this way, by adjusting the determination line so that the direction of the stroke and the determination line intersect perpendicularly, the voice processing device absorbs differences in the direction in which the user grips the game device and in the direction in which strokes are performed, and enables a natural simulation of a guitar performance.
Although embodiments of this invention have been described above, this invention is not limited to the embodiments described above, and various modifications and applications are possible. It is also possible to freely combine the components of the above-described embodiments.
For example, the item selection device and the audio processing device may further include a contact member 206 with which the user touches the touch screen.
FIG. 8C shows a situation in which a user simulates a guitar performance by bringing the contact member 206 into contact with the touch screen.
In addition, in the item selection device according to the above embodiment, when the table is being scrolled, touching the touch screen fixes the content displayed at the contact position, so that scrolling can be stopped. However, if the detected coordinates are all within a predetermined range from the start of the contact until the contact is released, the item output unit outputs the item displayed at those coordinates as a selection result. In other words, in addition to stopping the scrolling, the item displayed at the contacted coordinate value is selected.
Therefore, for example, only when all the detected coordinates are within a predetermined range from the start of the contact until the contact is released, and the time from the start of the contact until release is within a predetermined threshold time, may the item output unit output the item displayed at those coordinates as the selection result. As a result, no selection is made when the same position is touched for longer than the predetermined threshold time before release, and the window position at the time the user touched continues to be displayed.
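The combined conditions (all coordinates within a predetermined range, contact time under a threshold) can be sketched as a single predicate. The 8-pixel range and 0.5-second threshold below are assumed values, not from the original disclosure.

```python
def is_tap_select(coords, duration, range_px=8.0, threshold_time=0.5):
    """Select an item only when every detected coordinate stayed within
    a small range AND the contact lasted no longer than the threshold."""
    xs = [p[0] for p in coords]
    ys = [p[1] for p in coords]
    stationary = (max(xs) - min(xs) <= range_px
                  and max(ys) - min(ys) <= range_px)
    return stationary and duration <= threshold_time
```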
In the above embodiment, the moving unit of the item selection device moves the window position while limiting the direction to the vertical or horizontal direction, according to the direction of the sweeping operation that the user performs on the touch screen. Alternatively, the window may be moved in the direction opposite to the speed of change of the contact position just before the contact is released. Thereby, the table can be scrolled and displayed in directions other than the up-down and left-right directions.
Further, in the above embodiment, the moving unit of the item selection device moves the position of the window in a circulating manner when the position of the window reaches the end of the table. Alternatively, the window may instead be moved back in the opposite direction.
For example, when the table is displayed in the area surrounded by the origin (0, 0), (W, 0), (0, L), and (W, L), and the position (x, y) of the window is not contained in that area, it suffices, for an x coordinate larger than W, to subtract W to obtain the x coordinate; for an x coordinate smaller than 0, to add W. Likewise, for a y coordinate larger than L, subtract L, and for a y coordinate smaller than 0, add L, to obtain the y coordinate.
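The circular wrap-around described above is equivalent to taking the window coordinates modulo the table size, since Python's `%` operator already maps negative values back into the positive range. A minimal sketch (function name assumed):

```python
def wrap_window(x, y, w, l):
    """Wrap the window position back into the table area (0,0)-(W,L):
    amounts beyond W/L are reduced, amounts below 0 are raised."""
    return (x % w, y % l)
```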
In the above embodiment, the moving unit of the item selection device moves the position of the window in step S407 using the speed of change of the contact position just before the released state is detected. Alternatively, in step S407, the calculated speed of the contact position just before release may be repeatedly multiplied by a predetermined coefficient (for example, a coefficient smaller than 1) so that the movement speed of the window position gradually decreases.
In another embodiment, the adjustment unit of the audio processing device may adjust the determination line based on past stroke operations. For example, at each stroke operation, a direction vector (hereinafter referred to as an intersection vector) is obtained by subtracting the contact position detected immediately before from the contact position at the point in time it is determined to intersect the determination line. The vector obtained by adding the intersection vector to the stored addition vector and normalizing the result is temporarily stored as the new addition vector. In the initial state, since the addition vector has no value, the calculated intersection vector becomes the addition vector. Then, the stored position and direction of the determination line are updated so as to perpendicularly intersect the addition vector.
For example, when the stroke shown in FIG. 14A is performed, as shown in FIG. 14C, the intersection vectors are the vector extending from point Q-1 to point Q and the vector extending from point S-1 to point S, and the addition vector is vector A.
Thereafter, similarly, each time the trajectory intersects the determination line, the intersection vector is obtained, added to the addition vector, and the result is normalized. However, when adding, the directions of the intersection vector and the addition vector must be matched, so the dot product of the two vectors is obtained. When the dot product is smaller than a negative predetermined value, the two direction vectors are regarded as pointing in almost opposite directions; therefore, the intersection vector obtained this time is negated to match the direction before being added to the addition vector. On the other hand, when the dot product is larger than the predetermined value, the intersection vector may be added to the addition vector as it is. Then, the position and direction of the determination line are updated so as to perpendicularly intersect the addition vector.
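The addition-vector update, including the dot-product direction matching, can be sketched as follows. For brevity the "negative predetermined value" is simplified to zero, an assumption made for this illustration.

```python
import math

def accumulate(addition, crossing):
    """Fold the new intersection vector into the running addition
    vector, flipping its sign when the dot product shows it points
    the opposite way, then normalize the result."""
    if addition is None:
        ax, ay = crossing           # initial state: no addition vector yet
    else:
        dot = addition[0] * crossing[0] + addition[1] * crossing[1]
        if dot < 0:                 # nearly opposite: match directions
            crossing = (-crossing[0], -crossing[1])
        ax, ay = addition[0] + crossing[0], addition[1] + crossing[1]
    n = math.hypot(ax, ay)
    return (ax / n, ay / n)
```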
In this way, by adding the intersection vector of each stroke operation to the addition vector, which is the sum of past intersection vectors, the habitual tendency of the user's strokes can be extracted more accurately.
Alternatively, instead of the intersection vector, the direction vector from the start position of the stroke that crossed the determination line to its end point may be used. Here, the end point of the stroke is the position immediately before the contact is released in the case of a sweep, and the position just before the direction of the stroke reverses in the case of a reciprocating stroke operation.
In the audio processing device according to the above embodiment, the audio output unit determines the volume based on the distance covered from the point at which contact started until the predetermined operation condition is satisfied. Alternatively, the volume may be determined based on a representative value (for example, the average) of the speeds of change between consecutively detected contact positions along the trajectory. In other words, the faster the speed, the larger the volume.
In the audio processing device according to the above embodiment, the time for which the audio output unit continuously outputs the performance voice is assumed to be predetermined. Alternatively, the length for which the performance voice is continuously output may be determined based on a representative value (for example, the average) of the speeds of change between consecutively detected contact positions along the trajectory. In other words, the faster the speed, the longer the performance voice can be output.
In addition, the audio output unit may start outputting the voice after a predetermined time elapses after the operation condition is satisfied. For example, when a performance is given in a large venue, the sound reaches a listener far from the player only after a certain delay. By inserting such a delay time, it becomes possible to produce the effect of playing in a wide space even on a portable game machine.
In this case, the audio output unit may reset the delay time to a default value when a release is detected, and make it shorter each time the operation condition is satisfied consecutively. In general, a delay is considered harder to perceive when strokes are performed continuously than when a stroke is performed only once. Therefore, in order to emphasize the first delay, whenever the operation condition is satisfied consecutively, the delay time may be made shorter than the delay time used when the condition was satisfied the previous time.
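The delay-time behavior (reset on release, shrink on consecutive strokes) might be sketched like this. The default of 0.30 seconds and the shrink factor of 0.8 are illustrative assumptions only.

```python
def next_delay(current_delay, default_delay=0.30, factor=0.8,
               released=False):
    """Delay before outputting the voice: reset to the default when a
    release is detected, otherwise shrink it each consecutive time the
    operation condition is satisfied."""
    if released:
        return default_delay
    return current_delay * factor
```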
In addition, in the item selection device and the audio processing device according to the above embodiments, the detection unit may be, besides a touch screen, any hardware that detects the presence or absence of a contact and the contact position, such as a trackpad or a tablet.
In addition to the game device, the item selection device and the audio processing device according to the embodiment may be implemented in other terminal devices having a touch screen.
In addition, this application claims priority based on Japanese Patent Application No. 2008-151554, and the entire content of that basic application is incorporated into this application.
As described above, according to the present invention, it is possible to provide a voice processing device, a voice processing method, an information recording medium, and a program suitable for simulating the performance of an instrument while utilizing the characteristics of hardware, such as a touch screen, that can detect the presence or absence of a contact and the contact position.
100: game device 101: CPU
102: ROM 103: RAM
104: interface 105: input unit
106: memory cassette 107: image processing unit
108: touch screen 109: NIC
110: voice processing unit 111: microphone
112: speaker 200: item selection device
201: storage unit 202: display unit
203: detection unit 204: item output unit
205: moving part 206: contact member
1000: sound processing device 1001: detection unit
1002: audio output unit 1003: adjustment unit
Claims (14)
A detection unit (1001) which detects the position when the user is in contact with the surface of a contacted portion, and detects the fact when the contact with the surface is released; and a voice output unit (1002) which starts output of a predetermined output voice when a predetermined operation condition is satisfied,
The predetermined operation condition is,
(a) when the release is detected immediately after the contact position is detected, and the speed of change of the contact position immediately before the release is detected is equal to or greater than a predetermined threshold speed; or
(b) when the direction of change of the subsequently detected contact position becomes the opposite direction within a predetermined error range
A voice processing device (1000), characterized in that the predetermined operation condition is satisfied in the above cases.
(c) it shall be satisfied when the trajectory of the subsequently detected contact position crosses the predetermined determination line,
And an adjustment unit (1003) for adjusting the direction of the determination line so that the angle at which the trajectory of the detected contact position intersects the determination line is close to a right angle.
(d) the distance from the position at which contact is started to the contact position detected just before the predetermined operation condition is satisfied; and
(e) the distance from the contact position detected at the time the predetermined operation condition was previously satisfied to the contact position detected just before the operation condition is satisfied this time,
wherein the volume of the output voice whose output is started is determined in accordance with one of the above distances.
The accompaniment voice is associated with performance timings, each specified by the elapsed time from the start of output, and with the performance voices to be output at those timings.
The voice output unit (1002) outputs, as the predetermined output voice, the performance voice to be output at a performance timing when the time at which the predetermined operation condition is satisfied coincides with any one of the performance timings associated with the accompaniment voice.
The contact member (206) has a pick shape, or a shape in which a protrusion is arranged at the tip of the pick shape.
A detecting step in which the detection unit (1001) detects the position when the user is in contact with the surface of a contacted portion, and detects the fact when the contact with the surface is released; and
a voice output step in which the voice output unit (1002) starts output of a predetermined output voice when a predetermined operation condition is satisfied,
The predetermined operation condition is,
(a) when the release is detected immediately after the contact position is detected, and the speed of change of the contact position immediately before the release is detected is equal to or greater than a predetermined threshold speed; or
(b) when the direction of change of the subsequently detected contact position becomes the opposite direction within a predetermined error range
A voice processing method characterized in that the predetermined operation condition is satisfied in the above cases.
The predetermined operation condition shall not be satisfied in cases (a) and (b), but
(c) it shall be satisfied when the trajectory of the subsequently detected contact position crosses the predetermined determination line,
and further comprising an adjusting step in which the adjustment unit (1003) adjusts the direction of the determination line so that the angle at which the trajectory of the subsequently detected contact position intersects the determination line is close to a right angle.
A detection unit (1001) which detects the position when the user is in contact with the surface of a contacted portion, and detects the fact when the contact with the surface is released; and
a voice output unit (1002) which starts output of a predetermined output voice when a predetermined operation condition is satisfied,
as a program for causing a computer to function as the above units,
The predetermined operation condition is,
(a) when the release is detected immediately after the contact position is detected, and the speed of change of the contact position immediately before the release is detected is equal to or greater than a predetermined threshold speed; or
(b) when the direction of change of the subsequently detected contact position becomes the opposite direction within a predetermined error range
An information recording medium, characterized in that such a program is stored therein.
(c) it shall be satisfied when the trajectory of the subsequently detected contact position crosses the predetermined determination line,
wherein the program further causes the computer to function as an adjustment unit (1003) for adjusting the direction of the determination line so that the angle at which the trajectory of the detected contact position intersects the determination line is close to a right angle.
A detection unit (1001) which detects the position when the user is in contact with the surface of a contacted portion, and detects the fact when the contact with the surface is released; and
a voice output unit (1002) which starts output of a predetermined output voice when a predetermined operation condition is satisfied,
Wherein the predetermined operation condition is
(a) when the release is detected immediately after the contact position is detected, and the speed of change of the contact position immediately before the release is detected is equal to or greater than a predetermined threshold speed; or
(b) when the direction of change of the subsequently detected contact position becomes the opposite direction within a predetermined error range
A program characterized in that the predetermined operation condition is satisfied in the above cases.
(c) it shall be satisfied when the trajectory of the subsequently detected contact position crosses the predetermined determination line,
wherein the program further causes the computer to function as an adjustment unit (1003) for adjusting the direction of the determination line so that the angle at which the trajectory of the detected contact position intersects the determination line is close to a right angle.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008151554A JP4815471B2 (en) | 2008-06-10 | 2008-06-10 | Audio processing apparatus, audio processing method, and program |
JPJP-P-2008-151554 | 2008-06-10 | ||
PCT/JP2009/059894 WO2009150948A1 (en) | 2008-06-10 | 2009-05-29 | Audio processing device, audio processing method, information recording medium, and program |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20100051746A true KR20100051746A (en) | 2010-05-17 |
KR101168322B1 KR101168322B1 (en) | 2012-07-24 |
Family
ID=41416659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020107007589A KR101168322B1 (en) | 2008-06-10 | 2009-05-29 | Audio processing device, audio processing method, and information recording medium |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP4815471B2 (en) |
KR (1) | KR101168322B1 (en) |
CN (1) | CN101960513B (en) |
TW (1) | TW201011615A (en) |
WO (1) | WO2009150948A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6254391B2 (en) * | 2013-09-05 | 2017-12-27 | ローランド株式会社 | Sound source control information generation device, electronic percussion instrument, and program |
JP6299621B2 (en) * | 2015-02-04 | 2018-03-28 | ヤマハ株式会社 | Keyboard instrument |
WO2017017800A1 (en) * | 2015-07-29 | 2017-02-02 | 株式会社ワコム | Coordinate input device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07109552B2 (en) * | 1987-05-29 | 1995-11-22 | ヤマハ株式会社 | Electronic musical instrument |
JP2000066668A (en) * | 1998-08-21 | 2000-03-03 | Yamaha Corp | Performing device |
JP3566195B2 (en) * | 2000-08-31 | 2004-09-15 | コナミ株式会社 | GAME DEVICE, GAME PROCESSING METHOD, AND INFORMATION STORAGE MEDIUM |
JP3922273B2 (en) * | 2004-07-07 | 2007-05-30 | ヤマハ株式会社 | Performance device and performance device control program |
JP4770419B2 (en) * | 2005-11-17 | 2011-09-14 | カシオ計算機株式会社 | Musical sound generator and program |
JP5351373B2 (en) * | 2006-03-10 | 2013-11-27 | 任天堂株式会社 | Performance device and performance control program |
US8003874B2 (en) * | 2006-07-03 | 2011-08-23 | Plato Corp. | Portable chord output device, computer program and recording medium |
- 2008-06-10: JP application JP2008151554A filed (patent JP4815471B2, active)
- 2009-05-29: PCT application PCT/JP2009/059894 filed (WO2009150948A1, application filing)
- 2009-05-29: KR application KR1020107007589A filed (patent KR101168322B1, active IP right grant)
- 2009-05-29: CN application CN200980106327.4A filed (patent CN101960513B, active)
- 2009-06-02: TW application TW98118247A filed (TW201011615A, status unknown)
Also Published As
Publication number | Publication date |
---|---|
CN101960513A (en) | 2011-01-26 |
CN101960513B (en) | 2013-05-29 |
JP4815471B2 (en) | 2011-11-16 |
TW201011615A (en) | 2010-03-16 |
KR101168322B1 (en) | 2012-07-24 |
JP2009300496A (en) | 2009-12-24 |
WO2009150948A1 (en) | 2009-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8360836B2 (en) | Gaming device, game processing method and information memory medium | |
US7435169B2 (en) | Music playing apparatus, storage medium storing a music playing control program and music playing control method | |
KR100900794B1 (en) | Method for dance game and the recording media therein readable by computer | |
JP4410284B2 (en) | GAME DEVICE, GAME CONTROL METHOD, AND PROGRAM | |
JP4848000B2 (en) | GAME DEVICE, GAME PROCESSING METHOD, AND PROGRAM | |
US20150103019A1 (en) | Methods and Devices and Systems for Positioning Input Devices and Creating Control | |
JP4797045B2 (en) | Item selection device, item selection method, and program | |
JP3579042B1 (en) | GAME DEVICE, GAME METHOD, AND PROGRAM | |
JP4127561B2 (en) | GAME DEVICE, OPERATION EVALUATION METHOD, AND PROGRAM | |
KR101168322B1 (en) | Audio processing device, audio processing method, and information recording medium | |
JP6184203B2 (en) | Program and game device | |
TWI300002B (en) | ||
CN109739388B (en) | Violin playing method and device based on terminal and terminal | |
JP5279744B2 (en) | GAME DEVICE, GAME PROCESSING METHOD, AND PROGRAM | |
JP5210908B2 (en) | Moving image generating device, game device, moving image generating method, and program | |
JP5222978B2 (en) | GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM | |
JP2004283264A (en) | Game device, its control method, and program | |
JP2012065833A (en) | Game device, game control method, and program | |
JP4956600B2 (en) | GAME DEVICE, GAME PROCESSING METHOD, AND PROGRAM | |
JP5100862B1 (en) | GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM | |
JP5535127B2 (en) | Game device and program | |
JP2011255018A (en) | Game apparatus, game control method, and program | |
JP4071130B2 (en) | Control device, character control method, and program | |
JP2012024437A (en) | Image generating device, image generating method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E90F | Notification of reason for final refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment | Payment date: 20150710; Year of fee payment: 4 |
FPAY | Annual fee payment | Payment date: 20160708; Year of fee payment: 5 |
FPAY | Annual fee payment | Payment date: 20170707; Year of fee payment: 6 |