KR20130104587A - Karaoke system and operating method providing automatic sound mixing effect - Google Patents


Info

Publication number
KR20130104587A
Authority
KR
South Korea
Prior art keywords
accompaniment
song
sound
audio processor
effect
Prior art date
Application number
KR1020120026204A
Other languages
Korean (ko)
Inventor
박근
Original Assignee
글로엔텍 주식회사
Priority date
Filing date
Publication date
Application filed by 글로엔텍 주식회사 filed Critical 글로엔텍 주식회사
Priority to KR1020120026204A priority Critical patent/KR20130104587A/en
Publication of KR20130104587A publication Critical patent/KR20130104587A/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/36 Accompaniment arrangements
    • G10H1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/02 Synthesis of acoustic waves
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/051 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal, for extraction or detection of onsets of musical sounds or notes, i.e. note attack timings


Abstract

Disclosed are a karaoke system that provides an automatic sound synthesis effect and a method of operating the same. According to an embodiment, the karaoke system includes a first audio processor that processes music data and outputs an accompaniment signal so that an accompaniment operation is performed according to a user's selection; a second audio processor that, when first and second songs are accompanied in succession, processes sound effect data and outputs an effect sound between the accompaniments of the first and second songs; and a controller that detects first and second time points related to a silent section of at least one of the first and second songs and, according to the detection result, controls the second audio processor to output the effect sound during the interval between the first and second time points.


Description

Karaoke system and operation method providing automatic sound synthesis effect

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a karaoke system and a method of operating the same, and more particularly, to a karaoke system that provides an automatic sound synthesis effect and a method of operating the same.

A video accompaniment apparatus, generally referred to as a karaoke system, includes a device that reproduces an optical disc storing karaoke information such as accompaniment information, image information, and index information for a plurality of songs. Because karaoke information is stored on a high-capacity optical disc such as a DVD, thousands or tens of thousands of songs can be stored, or songs can be stored on an internal storage device such as a hard disk and accompaniment performed from there. Increasing the capacity of a storage medium such as an optical disc or hard disk increases the karaoke user's satisfaction with the number of songs available. In addition, as karaoke systems provide an interface for connecting an external storage medium storing karaoke information, accompaniment can also be performed from songs on an external storage medium rather than only from information stored inside the karaoke system.

There is a demand to provide an optimal accompaniment experience to users of a karaoke system. To entertain the user, various factors are of interest, such as accompaniment mode, tempo, scoring, and lighting effects. From the user's point of view, however, dancing often takes place along with the accompaniment and singing at the venue where the karaoke system is installed, and in this case the accompaniment needs to continue without interruption. In a karaoke system, many songs are reserved according to the user's selection and accompanied sequentially, so a break occurs between songs: a silent period of several seconds just before a song's accompaniment ends, and another silent period of several seconds just after the next song's accompaniment begins. This results in a fairly long unaccompanied period, so that the dancing time cannot be enjoyed properly.

The present invention has been made to solve the above problems, and an object of the present invention is to provide a karaoke system, and a method of operating the same, that provides an automatic sound synthesis effect suitable for a dancing party by automatically minimizing the unaccompanied section while accompaniment is performed for a plurality of songs.

Another object of the present invention is to provide a karaoke system, and a method of operating the same, that provides an effect sound appropriate for the dancing time between the accompaniments of successive songs, and that also provides an automatic sound synthesis effect suitable for a dancing party by allowing appropriate overlapped playback in consideration of the silent sections of successive songs.

To achieve the above objects, a karaoke system according to an embodiment of the present invention includes a first audio processor that processes music data and outputs an accompaniment signal so that an accompaniment operation is performed according to a user's selection; a second audio processor that, when first and second songs are accompanied in succession, processes sound effect data and outputs an effect sound between the accompaniments of the first and second songs; and a controller that detects first and second time points related to a silent section of at least one of the first and second songs and, according to the detection result, controls the second audio processor to output the effect sound during the interval between the first and second time points.

Meanwhile, a method of operating a karaoke system according to an embodiment of the present invention includes receiving a request for an effect-sound output mode; performing, by a first audio processor, accompaniment for a first song according to a user's selection; outputting an effect sound by a second audio processor in response to detecting a first time point before the accompaniment of the first song ends; performing, by the first audio processor, accompaniment for a subsequent second song as the accompaniment of the first song ends; and terminating the output of the effect sound by the second audio processor in response to detecting a second time point after the accompaniment of the second song starts.

Meanwhile, a karaoke system according to another embodiment of the present invention includes, when first and second songs are accompanied in succession according to a user's selection, a first audio processor that processes music data and outputs a first accompaniment signal to perform the accompaniment operation for the first song; a second audio processor that processes music data and outputs a second accompaniment signal to perform the accompaniment operation for the second song; and a controller that performs a first detection of a first time point related to a silent section of the first song and a second detection of a second time point related to a silent section of the second song, and that, according to the first and second detection results, controls the end of the accompaniment of the first song by the first audio processor and the accompaniment of the second song by the second audio processor.

According to the karaoke system and operating method of the present invention described above, the unaccompanied section can be automatically minimized while accompaniment is performed for a plurality of songs, so an accompaniment effect suitable for a dancing party can be provided.

In addition, according to the karaoke system and operating method of the present invention, an effect sound suitable for the dancing time can be provided between the accompaniments of a plurality of songs without a separate DJ or any change to the song information.

FIG. 1 is a block diagram illustrating a karaoke system according to an embodiment of the present invention.
FIG. 2 is a diagram showing the processing timing for the selected N-th song, the (N+1)-th song, and the effect sound.
FIG. 3 is a block diagram illustrating an embodiment of the controller of FIG. 1.
FIG. 4 is a flowchart illustrating a method of operating a karaoke system according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating a first example of a dancing time performed by the karaoke system of FIG. 1.
FIG. 6 is a diagram illustrating a second example of a dancing time performed by the karaoke system of FIG. 1.
FIG. 7 is a diagram illustrating a concrete example of the accompaniment operation illustrated in FIG. 6.

For a full understanding of the present invention, its operational advantages, and the objects achieved by practicing it, reference should be made to the accompanying drawings, which illustrate preferred embodiments of the present invention, and to the description accompanying them.

BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, the present invention will be described in detail with reference to preferred embodiments thereof and the accompanying drawings. Like reference numerals in the drawings denote like elements.

FIG. 1 is a block diagram illustrating a karaoke system according to an embodiment of the present invention. As shown in FIG. 1, the karaoke system 1000 includes a main processor 1100, which controls the operation of the entire system, together with other devices. As these other devices, the karaoke system 1000 may include a key input unit 1210 for receiving a user's selection signal and external commands for adjusting the pitch and tempo of the accompaniment, a microphone input unit 1220 for receiving the voice signal of a microphone connected to the karaoke system 1000, and a database unit 1230 for storing karaoke information, including music and effect sounds, for the accompaniment of various songs. In addition, an external media interface 1320 for connecting an external storage medium 1310 may be provided in the karaoke system 1000. The karaoke information stored in the database unit 1230 or accessed through the external media interface 1320 may include music data and sound effect data for the accompaniment of songs.

Meanwhile, the main processor 1100 includes various components for performing accompaniment: for example, a controller 1110 for controlling the overall operation of the processor, two or more audio processing units 1120 and 1130 for processing the audio of karaoke information stored in the database unit 1230 or accessed through the external media interface 1320, and a video processing unit 1140 for processing the video of the karaoke information. The main processor 1100 may also include an audio mixing unit 1150 that mixes the audio signals output from the two or more audio processing units 1120 and 1130 and outputs the accompaniment. Although not shown in FIG. 1, the user's voice signal input through the microphone input unit 1220 may additionally be provided to the audio mixing unit 1150, and the audio mixing unit 1150 may mix and output the voice as well. In addition, although two audio processing units (hereinafter, first and second audio processing units 1120 and 1130) are illustrated in FIG. 1, more audio processing units may be included in the main processor 1100.
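As an illustration of the mixing unit's role, a per-sample sum with clipping can be sketched as below. This is a hedged sketch only: the patent does not describe the mixer's internals, and the function name, sample format, and all values are invented for the example.

```python
# Minimal sketch of an audio mixing stage like the one ascribed to the
# audio mixing unit 1150: sum equal-length sample streams (accompaniment,
# effect sound, microphone voice) and clip each result to the valid range.
# All names and numbers are illustrative assumptions, not from the patent.

def mix(streams, lo=-1.0, hi=1.0):
    """Sum the streams sample by sample and clip each result to [lo, hi]."""
    mixed = [sum(samples) for samples in zip(*streams)]
    return [max(lo, min(hi, s)) for s in mixed]

accompaniment = [0.5, 0.8, 0.9]   # output of the first audio processor
effect_sound  = [0.0, 0.3, 0.3]   # output of the second audio processor
mic_voice     = [0.2, 0.2, 0.2]   # microphone input
out = mix([accompaniment, effect_sound, mic_voice])
```

Summing the last samples gives 1.4, which the clipping step limits to 1.0; a real mixer would more likely apply per-channel gains before summing.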

An operation of providing an accompaniment effect suitable for a dancing party according to an embodiment of the present invention is as follows.

When a plurality of songs are selected by a user through the reservation function, the main processor 1100 controls the driving of the first and second audio processors 1120 and 1130 using the selected song information, so that accompaniment operations for the selected songs are performed by the first and second audio processors 1120 and 1130. In the order in which accompaniment is performed, the accompaniment for the N-th song is called the first accompaniment and the accompaniment for the next, (N+1)-th song is called the second accompaniment.

The first audio processor 1120 performs the accompaniment operation for the N-th song under the control of the controller 1110. There is a silent section during a predetermined interval before the end of the accompaniment of the N-th song. The controller 1110, referring to the karaoke information of the accompaniment currently being performed, detects a time point (hereinafter, the first time point) a predetermined time before the time point at which the N-th song ends. The time point detected by the controller 1110 (the first time point) may coincide with the time point at which the silent section of the N-th song starts, or may be slightly earlier or later than that time point.

As the first time point is detected, the controller 1110 controls the second audio processor 1130 to be driven, and the second audio processor 1130 processes the sound effect data of the karaoke information under the control of the controller 1110 to generate the effect sound. That is, the second audio processor 1130 processes the sound effect data and generates the effect sound while the first audio processor 1120 is processing the silent section of the N-th song.

After the first audio processor 1120 completes the accompaniment of the N-th song, it performs the accompaniment of the (N+1)-th song either immediately afterwards or after a predetermined interval. There is a silent section for a predetermined period after the (N+1)-th song starts, during which the second audio processor 1130 continues processing the sound effect data.

The controller 1110, referring to the karaoke information of the accompaniment currently being performed, detects a time point (hereinafter, the second time point) a predetermined time after the time point at which the (N+1)-th song starts. The time point detected by the controller 1110 (the second time point) may coincide with the time point at which the silent section of the (N+1)-th song ends, or may be slightly earlier or later than that time point.

The second audio processor 1130 continues processing the effect sound data until the controller 1110 detects the second time point. When the second time point is detected, the second audio processor 1130 stops processing the sound effect data under the control of the controller 1110. That is, since the silent section of the (N+1)-th song ends and the actual accompaniment proceeds after the second time point, the effect sound is output properly during the unaccompanied (or silent) section between the end of the actual accompaniment of the N-th song and the start of the actual accompaniment of the (N+1)-th song; from the user's point of view, when a plurality of songs are selected, the accompaniment effect is received seamlessly.
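The interval during which the effect sound is output can be expressed numerically. The sketch below assumes the first time point coincides with the start of the N-th song's trailing silent section and the second time point with the end of the (N+1)-th song's leading silent section; the function name and all timing values are invented for illustration.

```python
def effect_window(song_n_end, trailing_silence, song_np1_start, leading_silence):
    """Return (first_time_point, second_time_point) bounding the effect sound."""
    first_time_point = song_n_end - trailing_silence      # T1 before the N-th song ends
    second_time_point = song_np1_start + leading_silence  # T2 after the (N+1)-th song starts
    return first_time_point, second_time_point

# Song N ends at 180 s with 4 s of trailing silence; song N+1 starts at
# 181 s with 3 s of leading silence (all values assumed).
start, stop = effect_window(180.0, 4.0, 181.0, 3.0)
```

With these assumed values the effect sound spans the whole unaccompanied gap, from 176 s to 184 s.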

Meanwhile, the controller 1110 may detect additional time points besides the first and second time points described above. As an example, a third time point, a predetermined period after the first time point, and a fourth time point, a predetermined period before the second time point, may additionally be detected. In outputting the effect sound, the sound is faded in during its initial period so that the effect sound increases gradually, and the output is terminated while the sound is faded out. That is, a fade-in effect is applied to the effect sound in the section between the first and third time points, and a fade-out effect is applied in the section between the fourth and second time points.
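A linear gain envelope is one simple way to realize the fade-in between the first and third time points and the fade-out between the fourth and second time points. The patent does not specify the fade curve, so the linear ramp and all parameter names below are assumptions.

```python
def envelope(t, start, stop, fade_in, fade_out):
    """Gain in [0, 1] for the effect sound at time t.

    start .. start + fade_in : fade-in  (first to third time point)
    stop - fade_out .. stop  : fade-out (fourth to second time point)
    """
    if t < start or t > stop:
        return 0.0                       # outside the effect window
    if t < start + fade_in:
        return (t - start) / fade_in     # ramp up
    if t > stop - fade_out:
        return (stop - t) / fade_out     # ramp down
    return 1.0                           # full gain in between
```

Multiplying each effect-sound sample by this gain reproduces the gradual onset and termination described above.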

A detailed operation of the karaoke system according to the embodiment of the present invention described above will now be explained with reference to FIGS. 1 and 2. FIG. 2 is a diagram showing the processing timing for the N-th song, the (N+1)-th song, and the effect sound.

As shown in FIG. 2, the first audio processor 1120 performs the accompaniment of the N-th song. The controller 1110 detects a time point (the first time point) a predetermined interval T1 before the time point at which the N-th song ends, and controls the second audio processor 1130 at the first time point so that the second audio processor 1130 processes the effect sound. As in the embodiment described above, a fade-in effect may be applied during an initial predetermined period of the effect sound output; in FIG. 2, the fade-in effect is applied during the period corresponding to T1-1 to start the output of the effect sound.

Thereafter, as the first audio processor 1120 completes the accompaniment of the N-th song, it starts the accompaniment of the (N+1)-th song under the control of the controller 1110. Irrespective of the end of the accompaniment of the N-th song and the start of the accompaniment of the (N+1)-th song, the effect sound processing and output by the second audio processor 1130 continue. The controller 1110 detects a time point (the second time point) a predetermined period T2 after the time point at which the accompaniment of the (N+1)-th song starts, and the effect sound processing and output of the second audio processor 1130 continue until the second time point. In addition, as in the embodiment described above, a fade-out effect may be applied during the last predetermined period of the effect sound output; in FIG. 2, the fade-out effect is applied during the period corresponding to T2-1 to end the effect sound.

As shown in FIG. 2, the second audio processor 1130 performs effect sound processing and output during the sum (T1 + T2) of the predetermined period T1 before the accompaniment of the N-th song ends and the predetermined period T2 after the accompaniment of the (N+1)-th song starts. That is, the first audio processor 1120 and the second audio processor 1130 simultaneously perform audio processing operations during the period T1 + T2. The predetermined section T1 may represent the silent section near the end point of the N-th song, and the predetermined section T2 may represent the silent section near the start point of the (N+1)-th song. Accordingly, in performing the accompaniment of successive songs, the effect sound is output by the sound effect processing operation of the second audio processor 1130 during the unaccompanied period that can occur while the song changes. As in the embodiment described above, the first time point detected by the controller 1110 need not be exactly the time point at which the silent section immediately before the end of the N-th song's accompaniment starts; a slightly earlier or later time point may be detected. The same applies, in the same or a similar way, to the second time point detected by the controller 1110.
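The timing relationships of FIG. 2 reduce to simple arithmetic. The numeric values below are assumed for illustration, since the patent leaves the section lengths unspecified.

```python
# Assumed section lengths for the FIG. 2 timeline (not from the patent).
T1 = 4.0    # silent section near the end of the N-th song
T2 = 3.0    # silent section near the start of the (N+1)-th song
T1_1 = 1.0  # fade-in portion at the start of the effect sound
T2_1 = 1.0  # fade-out portion at the end of the effect sound

# Both audio processors run concurrently for the whole effect window.
effect_duration = T1 + T2
# Effect sound at full gain, between the fade-in and fade-out portions.
full_gain_duration = effect_duration - T1_1 - T2_1
```

With these values the effect sound plays for 7 s in total, 5 s of it at full gain.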

FIG. 3 is a block diagram illustrating an embodiment of the controller of FIG. 1. As illustrated in FIG. 3, the controller 1110 may include a mode determiner 1111, a timing detector 1112, an audio processor drive controller 1113, an effect sound selection controller 1114, and a fade effect controller 1115.

The mode determiner 1111 receives a user input and determines whether the karaoke system is to operate in the dancing party mode. The mode determination result is provided to the other functional blocks in the controller 1110 so that the karaoke system can operate in the dancing party mode.

The timing detector 1112 performs the detection operations for the various time points disclosed in the embodiments described above while the karaoke system is in the dancing party mode. As an example, during the accompaniment of the N-th song, a time point a predetermined interval before the end of the N-th song is detected and the detection result is generated; likewise, a time point a predetermined interval after the start of the (N+1)-th song is detected and the detection result is generated. In addition, when the fade-in or fade-out effect is applied to the effect sound output, operations for detecting the end time of the fade-in effect and the start time of the fade-out effect may also be performed.

The audio processor drive controller 1113 controls the driving of the first and second audio processing units 1120 and 1130 of FIG. 1 based on the determination result of the mode determiner 1111 and the timing detection results of the timing detector 1112. As an example, the first audio processor 1120 is controlled so that it performs the accompaniment of the N-th song, and the second audio processor 1130 is controlled according to the result of detecting the start time of the effect sound so that it processes and outputs the effect sound. In addition, after the accompaniment of the N-th song is finished, the first audio processor 1120 is controlled to perform the accompaniment of the (N+1)-th song. When the fade-in and fade-out effects are applied, the results of detecting the corresponding time points are also provided to the fade effect controller 1115, and the fade effect controller 1115 controls the second audio processor 1130 based on them so that the fade-in and fade-out effects are applied to the effect sound output. Although the audio processor drive controller 1113 and the fade effect controller 1115 are illustrated as separate functional blocks, the function performed by the fade effect controller 1115 may instead be implemented by the audio processor drive controller 1113.

The effect sound selection controller 1114 performs an operation of selecting an effect sound corresponding to the song whose accompaniment is being performed. For example, a plurality of music data and sound effect data for accompaniment may be stored in the karaoke system or on an external storage medium, and the effect sound selection controller 1114 may perform a selection operation so that an appropriate effect sound is provided for the song currently being accompanied. As an example, as illustrated in FIG. 1, the sound effect data may be classified into a plurality of types (Type1 to TypeM), and various kinds of information about the song being accompanied (e.g., genre, singer's gender, tempo) may be used so that different songs receive different effect sounds.
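The per-song effect selection might be realized as a lookup from song metadata to an effect type. The mapping below is entirely invented for the example; the patent only says that information such as genre, singer's gender, and tempo is used to pick among Type1 to TypeM.

```python
# Hypothetical mapping from (genre, tempo) to a stored effect type.
# Keys and values are illustrative assumptions, not from the patent.
EFFECT_TYPES = {
    ("dance", "fast"): "Type1",
    ("ballad", "slow"): "Type2",
    ("rock", "fast"): "Type3",
}

def select_effect(genre, tempo, default="Type1"):
    """Pick an effect type for the song being accompanied."""
    return EFFECT_TYPES.get((genre, tempo), default)

chosen = select_effect("ballad", "slow")
```

Unknown metadata falls back to a default type, so an effect sound is always available between songs.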

FIG. 4 is a flowchart illustrating a method of operating a karaoke system according to an embodiment of the present invention.

As shown in FIG. 4, an input requesting the dancing party mode is first received from the outside (S11). Accordingly, the karaoke system operates in the dancing party mode and performs the accompaniment of a first song according to the user's song selection (S12). The user can select a plurality of songs through the reservation function, and the accompaniment is performed in a manner suited to the dancing time. The karaoke system includes at least two audio processing units; the first song is processed by, for example, the first audio processor.

While the accompaniment of the first song is in progress, a time point a first time T1 before the end of the first song is detected (S13). As this time point is detected, an audio processor other than the first audio processor currently performing the accompaniment of the first song, namely the second audio processor, processes the effect sound and starts outputting it (S14). The time point the first time T1 before the end of the first song may correspond to the time point at which a silent section occurs immediately before the accompaniment of the first song ends, so that an appropriate effect sound is output during the silent section of the first song and the user can keep the dancing time going.

Thereafter, as the first song ends, the first audio processor performs the accompaniment operation for the next, second song (S15). While the accompaniment of the second song is in progress, a time point a second time T2 after the start of the second song is detected (S16). The second audio processor continues to process the effect sound until then, and as this time point is detected, the second audio processor terminates the output of the effect sound (S17).
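The steps S11 to S17 above can be rendered as an explicit event sequence; the function and labels below are illustrative only and simply restate the flowchart's ordering.

```python
def dancing_party_flow():
    """Restate the S11..S17 ordering of the flowchart as a list of events."""
    return [
        "S11: dancing-party mode requested",
        "S12: first audio processor starts accompaniment of song 1",
        "S13: time point T1 before the end of song 1 detected",
        "S14: second audio processor starts effect-sound output",
        "S15: first audio processor starts accompaniment of song 2",
        "S16: time point T2 after the start of song 2 detected",
        "S17: second audio processor ends effect-sound output",
    ]

events = dancing_party_flow()
```

Note that S14 precedes S15 and S17 follows S16: the effect sound brackets the song change on both sides.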

FIG. 5 is a diagram illustrating a first example of a dancing time performed by the karaoke system of FIG. 1. As shown in FIG. 5, the first audio processor sequentially processes a plurality of selected songs to perform the accompaniment, and the second audio processor inserts an effect sound between the selected songs, minimizing the silent periods while successive songs are accompanied.

The second audio processor outputs a first effect sound between the accompaniment of the first song and the accompaniment of the second song. In outputting the first effect sound, the first effect sound is made to sound during the first section T11, in which the silent section occurs before the end point of the first song, and during the second section T12, in which the silent section lasts until after the start point of the second song. Also, when the accompaniment of the second song starts a certain period after the accompaniment of the first song ends, the section in which the first effect sound is output may be a section corresponding to T11 + T12 + T13. In the figure, the start point of the silent section of the first song and the start point of the output of the first effect sound are shown as the same, and the end point of the silent section of the second song and the end point of the output of the first effect sound are shown as the same; however, embodiments of the present invention need not be limited thereto. As an example, the output of the first effect sound may start slightly earlier or later than the start of the silent section of the first song, and may end slightly earlier or later than the end of the silent section of the second song.

The accompaniment operation for the selected songs is performed in the same or a similar way for subsequent songs. That is, a second effect sound is output between the accompaniment of the second song and the accompaniment of the third song, and a third effect sound is output between the accompaniment of the third song and the accompaniment of the fourth song. In FIG. 5, the silent sections T11 occurring near the end of each song's accompaniment are shown as the same, and the silent sections T12 occurring near the beginning of each song's accompaniment are shown as the same; in practice the lengths of the silent sections of each song may differ, in which case the sections in which the respective effect sounds are output may also differ.
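The length of each effect-sound interval in FIG. 5 follows directly from the silent-section lengths. The helper below and its values are assumptions for illustration.

```python
def effect_interval(t11, t12, t13=0.0):
    """Length of one effect-sound output: trailing silence of the earlier
    song (T11), an optional gap between accompaniments (T13), and the
    leading silence of the later song (T12). Values are assumed."""
    return t11 + t13 + t12

back_to_back = effect_interval(4.0, 3.0)   # songs follow immediately: T11 + T12
with_gap = effect_interval(4.0, 3.0, 2.0)  # a 2 s gap between songs: T11 + T13 + T12
```

Because each song's silent sections can differ, each effect sound between a pair of songs may have its own interval length.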

FIG. 6 is a diagram illustrating a second example of a dancing time performed by the karaoke system of FIG. 1. As shown in FIG. 6, according to an embodiment of the present invention, the dancing time function can be provided without effect sounds: based on the detection results at predetermined time points, the accompaniment operations for at least two songs are performed simultaneously.

First, the first audio processor starts the accompaniment of the first song. A time point a predetermined interval T21 before the end of the accompaniment of the first song is detected, and at that time point the second audio processor starts the accompaniment of the second song. That is, during the predetermined period T21, the first song and the second song are data-processed in parallel by different audio processing units.

Preferably, the predetermined section T21 may include the silent section near the end point of the first song, and may also include the silent section near the start point of the second song. For example, the accompaniment operation for the second song may be started before the silent section of the first song begins, so that the silent section near the beginning of the second song finishes slightly before, at, or slightly after the time point at which the silent section of the first song starts. In this case, the silent sections of the first and second songs are not actually heard, so no phenomenon such as the accompaniment breaking off in the middle occurs.

The accompaniment operation described above may be performed in the same or a similar way for the following third and fourth songs. That is, in performing the accompaniment of the second and third songs, the second and third songs are accompanied simultaneously by different audio processing units during the predetermined period T22, and the third and fourth songs are accompanied simultaneously by different audio processing units during the predetermined period T23.

In addition, in a manner similar to the fade-in and fade-out effects applied to the effect sound in the previous embodiment, the continuity of the accompaniment is enhanced by applying fade-in and fade-out effects to the selected first and second songs. For example, when the accompaniment of the first song and the accompaniment of the second song are performed simultaneously, a fade-out effect is applied to the accompaniment of the first song before the silent section near its end begins, and a fade-in effect is applied to the accompaniment of the second song after the silent section near its start has passed. The accompaniment timing of the second song is adjusted so that the section in which the first song's fade-out occurs overlaps at least partly with the section in which the second song's fade-in occurs. Accordingly, the silent gap that could otherwise arise between the end of the first song's accompaniment and the start of the second song's accompaniment is minimized.
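The timing adjustment described above — starting the second song early enough that its audible fade-in lands inside the first song's fade-out window — can be written as a small arithmetic sketch. All parameter names and values are assumptions for illustration:

```python
# Sketch: compute when the second song's accompaniment must start so that
# its fade-in (which begins only after its leading silent section) overlaps
# the first song's fade-out by 'overlap' seconds.

def second_song_start(first_end, first_fade_out, second_lead_silence, overlap):
    """Return the start time of the second song's accompaniment such that
    its audible fade-in begins 'overlap' seconds before the first song ends."""
    fade_out_start = first_end - first_fade_out
    # The second song becomes audible second_lead_silence seconds after it
    # starts, so subtract that plus the desired overlap from the end point.
    return fade_out_start + first_fade_out - overlap - second_lead_silence

start = second_song_start(first_end=180.0, first_fade_out=4.0,
                          second_lead_silence=1.5, overlap=2.0)
print(start)  # 176.5: the second song becomes audible at t = 178 s
```

With these example values the second song's fade-in starts 2 seconds before the first song ends, inside the first song's 4-second fade-out.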

FIG. 7 is a diagram illustrating a concrete example of the accompaniment operation of FIG. 6. As shown in FIG. 7(a), the accompaniment of the N-th song is performed by the first audio processor. The silent section T31 near the end of the N-th song is detected, as is the accompaniment output section T33, a predetermined section immediately before the start point of T31. In addition, the silent section T32 near the start of the (N+1)-th song is detected.

At a certain time during the accompaniment of the N-th song, the second audio processor starts the accompaniment of the (N+1)-th song. In the accompaniment of the N-th song, a fade-out effect is applied during the accompaniment output section T33, and this fade-out begins some time after the accompaniment of the (N+1)-th song has started. Preferably, the end point of the silent section near the start of the (N+1)-th song may coincide with the start point of the fade-out of the N-th song, or the fade-out of the N-th song may begin slightly before or after the end of the (N+1)-th song's silent section.

In the section T33, a fade-out effect is applied to the N-th song while, at the same time, a fade-in effect is applied to the (N+1)-th song. Accordingly, the accompaniment of the N-th song is output during the silent section near the start point of the (N+1)-th song, and the accompaniment of the (N+1)-th song is output during the silent section near the end point of the N-th song. Furthermore, because the N-th song fades out and the (N+1)-th song fades in during the section T33 where the two accompaniments overlap, the artifacts that could otherwise arise from the overlapping accompaniments in T33 are reduced.
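The mixing performed in the section T33 amounts to a crossfade: the N-th song's tail is attenuated while the (N+1)-th song's head is brought up, and the two are summed. The linear ramps and buffer contents below are illustrative assumptions; the patent does not specify a gain curve:

```python
# Sketch of the overlap section T33: sum the ending song's tail under a
# decreasing gain with the starting song's head under an increasing gain.

def crossfade(tail, head):
    """Linearly crossfade two equal-length sample lists and return the mix."""
    n = len(tail)
    mixed = []
    for i in range(n):
        fade_out = 1.0 - i / n      # gain for the ending N-th song
        fade_in = i / n             # gain for the starting (N+1)-th song
        mixed.append(tail[i] * fade_out + head[i] * fade_in)
    return mixed

mix = crossfade([1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0])
print(mix)  # [1.0, 1.0, 1.0, 1.0]: complementary linear gains sum to unity
```

With complementary linear gains the summed level stays constant for correlated material; an equal-power (square-root or cosine) curve is often preferred for uncorrelated songs, but either choice realizes the overlap described above.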

On the other hand, FIG. 7(b) shows an example of performing the accompaniments of the N-th song and the (N+1)-th song in which a portion of the silent section T31 near the end of the N-th song overlaps a portion of the silent section T32 near the beginning of the (N+1)-th song.

While the accompaniment of the N-th song is performed, the accompaniment of the (N+1)-th song is started at a time point a predetermined section T34 before the start of the silent section T31 near the end point of the N-th song. In this case, the silent section T31 of the N-th song overlaps a portion T35 of the silent section T32 of the (N+1)-th song. However, the N-th song is still being accompanied during the remainder of the (N+1)-th song's silent section T32, and the (N+1)-th song is being accompanied during the remainder of the N-th song's silent section T31. The silent gap that can occur during continuous accompaniment of a plurality of songs is thereby minimized, so that a user of the karaoke system experiences minimal interruption of the accompaniment in the dancing-time mode.
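The geometry of FIG. 7(b) can be checked with a short calculation of the residual gap during which neither song is audible. The variable names mirror the section labels above; the durations are hypothetical:

```python
# Sketch: the second song starts t34 seconds before the first song's
# trailing silence (t31) begins, and becomes audible t32 seconds after
# it starts. Times below are measured relative to the first song's end.

def audible_gap(t31, t32, t34):
    """Return how long neither song is audible, in seconds."""
    first_silent_from = -t31                 # first song goes silent here
    second_audible_at = -t31 - t34 + t32     # second song becomes audible here
    return max(0.0, second_audible_at - first_silent_from)

print(audible_gap(t31=3.0, t32=4.0, t34=2.5))  # 1.5 s of residual silence
print(audible_gap(t31=3.0, t32=2.0, t34=2.5))  # 0.0: silences fully covered
```

Increasing the lead time T34 (or choosing songs with shorter leading silence T32) drives the gap to zero, which is the effect the dancing-time mode aims for.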

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

Claims (6)

A karaoke system that performs accompaniment using karaoke information, comprising:
a first audio processor configured to process music data and output an accompaniment signal to perform an accompaniment operation according to a user's selection;
a second audio processor configured to process effect-sound data and output an effect sound between the accompaniments of a first song and a second song when the first and second songs are accompanied consecutively; and
a controller configured to detect first and second time points associated with at least one silent section of the first and second songs, and to control the first and second audio processors so that, according to the detection result, the effect sound is output during the section between the first and second time points.
The karaoke system of claim 1, wherein:
the first time point is the time point at which a silent section begins, a predetermined time before the end of the accompaniment of the first song;
the second time point is the time point at which a silent section ends, a predetermined time after the start of the accompaniment of the second song; and
the section in which the effect sound is output includes at least both the silent section of the first song and the silent section of the second song.
The karaoke system of claim 1, wherein:
a plurality of types of effect-sound data are received in correspondence with a plurality of songs; and
the effect-sound data used to produce the effect sound is selected by determining a characteristic of the music data of at least one of the first song and the second song.
The karaoke system of claim 1, wherein the controller comprises:
a mode determination unit that determines whether an effect-sound output mode between the consecutive first and second songs is set;
a timing detector that detects the first time point and the second time point; and
an audio-processor driving controller that controls driving of the first audio processor and the second audio processor according to the timing detection result.
An operating method of a karaoke system, the method comprising:
receiving an effect-sound output mode request;
performing accompaniment of a first song by a first audio processor according to a user's selection;
outputting an effect sound by a second audio processor in response to detecting a first time point before the accompaniment of the first song ends;
performing accompaniment of a subsequent second song by the first audio processor as the accompaniment of the first song ends; and
ending the output of the effect sound by the second audio processor in response to detecting a second time point after the accompaniment of the second song has started.
A karaoke system that performs accompaniment using karaoke information, comprising:
a first audio processor configured to process music data and output a first accompaniment signal when a first song and a second song are accompanied consecutively according to a user's selection;
a second audio processor configured to process music data and output a second accompaniment signal to perform an accompaniment operation on the second song; and
a controller configured to perform a first detection of a first time point related to the silent section of the first song and a second detection of a second time point related to the silent section of the second song, and to control, according to the first and second detection results, the second audio processor to start the accompaniment of the second song before the accompaniment of the first song by the first audio processor ends.
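The operating method recited in the claims can be sketched as a small event-driven control loop in which the controller reacts to the detected time points. The event and log names below are purely illustrative assumptions, not terms from the patent:

```python
# Sketch of the claimed method's control flow: the first audio processor
# plays the songs back to back while the second audio processor bridges
# the gap with an effect sound between the first and second time points.

def run_effect_mode(events):
    """Replay a sequence of timing events and log the resulting actions."""
    log = []
    for event in events:
        if event == "first_point_detected":
            log.append("effect_sound_on")       # second audio processor starts
        elif event == "first_song_ended":
            log.append("second_song_started")   # first audio processor moves on
        elif event == "second_point_detected":
            log.append("effect_sound_off")      # second audio processor stops
    return log

print(run_effect_mode(["first_point_detected",
                       "first_song_ended",
                       "second_point_detected"]))
# ['effect_sound_on', 'second_song_started', 'effect_sound_off']
```

The effect sound thus spans both silent sections, which is the condition recited in the dependent claim on the output section.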
KR1020120026204A 2012-03-14 2012-03-14 Karaoke system and operating method providing automatic sound mixing effect KR20130104587A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120026204A KR20130104587A (en) 2012-03-14 2012-03-14 Karaoke system and operating method providing automatic sound mixing effect

Publications (1)

Publication Number Publication Date
KR20130104587A true KR20130104587A (en) 2013-09-25

Family

ID=49453392


Country Status (1)

Country Link
KR (1) KR20130104587A (en)


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right