KR20170068795A - Method for Visualization of Sound and Mobile terminal using the same - Google Patents

Method for Visualization of Sound and Mobile terminal using the same Download PDF

Info

Publication number
KR20170068795A
KR20170068795A (application KR1020150175711A)
Authority
KR
South Korea
Prior art keywords
information
image information
sound
stored
acoustic
Prior art date
Application number
KR1020150175711A
Other languages
Korean (ko)
Inventor
구자영
권우현
Original Assignee
주식회사 디파이어커뮤니케이션
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 디파이어커뮤니케이션 filed Critical 주식회사 디파이어커뮤니케이션
Priority to KR1020150175711A priority Critical patent/KR20170068795A/en
Publication of KR20170068795A publication Critical patent/KR20170068795A/en

Links

Images

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06: Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10: Transforming into visible information
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/28: Constructional details of speech recognition systems
    • H04M1/72522

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of visualizing sound and a mobile terminal using the method are disclosed. The sound visualization method includes: collecting and storing acoustic information; storing a plurality of pieces of image information so that specific image information is visualized according to the sound information when the stored sound information is output, and selecting first image information from among the plurality of pieces of image information; converting the format of the stored acoustic information; generating second image information in which the first image information is transformed according to the converted acoustic information; and continuously visualizing the first image information and the second image information when the stored sound information is output. Thereby, various characteristics such as the speech habits of the speaker included in the acoustic information can be analyzed and the image information can be visualized in various ways according to the analysis result, so that, rather than the acoustic information alone, image information that reflects the features of the acoustic information and adds aesthetic value can be provided together with it.

Description

TECHNICAL FIELD [0001] The present invention relates to a method of visualizing sound and a mobile terminal using the method.

More particularly, the present invention relates to a method of collecting and analyzing sound through a mobile terminal and visualizing image information according to the characteristics of each sound, and to a mobile terminal using the method.

With recent advances in technology, various attempts have been made at new forms of multimedia expression.

For example, image information may be displayed on a screen while sound information corresponding to the image information is output through a speaker, or image information corresponding to sound information may be output through a screen.

However, conventional technology for displaying image information corresponding to sound information on a screen performs only a simple conversion, such as changing the color displayed on the screen according to the volume and tone of the sound included in the sound information.

An attempt to utilize sound information in various ways, beyond simply converting it into corresponding image information, is disclosed in Korean Patent Laid-Open No. 10-2015-0055262. However, that technique also has the limitation that the image information displayed on the screen is simply converted according to the volume or tone of the sound included in the sound information.

Accordingly, there is a need for a technology that does not merely convert image information according to the volume or tone of the sound included in the sound information, but converts and displays image information in various ways that reflect various characteristics such as the speaker's utterance habits included in the sound information.

Korean Patent Laid-Open No. 10-2015-0055262: Visualization Display Method of Sound Using Mobile Device (Applicant: Seo Won Young)

SUMMARY OF THE INVENTION The present invention has been made to solve the above problems, and it is an object of the present invention to provide a method of analyzing various characteristics such as the speech habits of a speaker included in acoustic information and visually transforming image information in various ways according to the analysis result, and a mobile terminal using the method.

According to an aspect of the present invention, there is provided a method of visualizing sound, the method including: collecting and storing acoustic information; storing a plurality of pieces of image information so that specific image information is visualized according to the sound information when the stored sound information is output, and selecting first image information from among the plurality of pieces of image information; converting the format of the stored acoustic information; generating second image information in which the first image information is transformed according to the converted acoustic information; and continuously visualizing the first image information and the second image information when the stored sound information is output.

Here, the storing step may collect and store the sound information in a digital format, and the converting step may convert the format of the digitally stored sound information into a first waveform.

The method of visualizing sound according to the present embodiment may further include editing the stored sound information when it is determined that the user requires deletion of a specific portion of the stored sound information, or when it is determined that synthesis of at least two pieces of sound information selected by the user is required.

The generating of the second image information may include generating a plurality of patterns by which the first image information is to be transformed, and generating a plurality of pieces of second image information according to the generated patterns.

Each piece of the second image information may be generated such that at least one of the shape, color, and motion of the object included in the first image information is transformed; the pieces are transformed differently according to the pattern each corresponds to among the generated plurality of patterns, and an identification number may be added to each.

The generating of the plurality of patterns may generate the plurality of patterns using at least one of: relative loudness (dB) information, pitch information, and tone information obtainable from the sound information; first detection information obtained by detecting the amplitude and frequency of the first waveform of the sound information; first conversion information obtained by converting the first waveform of the sound information with a Fast Fourier Transform; and second detection information obtained by detecting inflection points of the first waveform of the sound information.

The generating of the plurality of patterns may further include, when the sound information includes a speaker's speech, generating the plurality of patterns using utterance information of the speaker obtained by detecting repeated changes such as the speaker's breathing intervals or pitch.

The visualizing step may include the same stationary object in both the first image information and the second image information, with at least one of the shape, color, and motion of the object transformed in the second image information; when the stored sound information is output, the first image information and the second image information are continuously visualized so that the object appears animated according to the output sound information.

According to another aspect of the present invention, there is provided a mobile terminal for visualizing sound, including: an acoustic information collection unit for collecting acoustic information; a storage unit for storing a plurality of pieces of image information and the collected sound information; a controller for selecting first image information from among the stored plurality of pieces of image information so that specific image information is visualized according to the sound information when the stored sound information is output, converting the format of the stored sound information, and generating second image information in which the first image information is transformed according to the converted sound information; an acoustic information output unit for outputting the stored sound information; and a display unit for continuously visualizing the first image information and the second image information when the sound information is output.

Thereby, various characteristics such as the speech habits of the speaker included in the acoustic information can be analyzed and the image information can be visualized in various ways according to the analysis result, so that, rather than the acoustic information alone, image information that reflects the features of the acoustic information and adds aesthetic value can be provided together with it.

FIG. 1 is a diagram illustrating a mobile terminal for visualizing sound according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of visualizing sound according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating a screen of a mobile terminal for visualizing sound according to an exemplary embodiment of the present invention.
FIG. 4 is a view for explaining a process of analyzing acoustic information to visualize sound in a sound visualization method according to an embodiment of the present invention.
FIGS. 5A and 5B are views illustrating a process of generating second image information to visualize sound in a sound visualization method according to an embodiment of the present invention.
FIGS. 6A and 6B are views illustrating second image information generated in the sound visualization method according to an embodiment of the present invention.
FIGS. 7A and 7B are diagrams for explaining second image information generated in the sound visualization method according to an embodiment of the present invention.
FIGS. 8A and 8B are views illustrating a process of generating second image information for visualizing sound in a sound visualization method according to another embodiment of the present invention.

Hereinafter, the present invention will be described in detail with reference to the drawings. The embodiments described below are provided by way of example so that those skilled in the art will be able to fully understand the spirit of the present invention. The present invention is not limited to the embodiments described below and may be embodied in other forms.

FIG. 1 is a diagram illustrating a mobile terminal 100 for visualizing sound according to an embodiment of the present invention.

The mobile terminal 100 for visualizing sound according to the present embodiment is provided for analyzing various features such as the speaker's utterance habits included in the sound information and for variously transforming and visualizing the image information according to the analysis result.

To this end, the mobile terminal 100 includes an acoustic information collection unit 110, a storage unit 120, a control unit 130, an acoustic information output unit 140, and a display unit 150.

The acoustic information collection unit 110 is provided for collecting acoustic information.

More specifically, the sound information collecting unit 110 may be implemented as a microphone for collecting and recording sound at the mobile terminal 100.

The storage unit 120 is provided for storing the collected sound information.

Here, the storage unit 120 stores not only the collected sound information but also a plurality of pieces of image information and an application prepared for visualizing sound according to the present embodiment.

The control unit 130 is provided for selecting first image information from among the plurality of pieces of stored image information so that specific image information is visualized according to the sound information when the stored sound information is output, and for generating second image information.

The control unit 130 may convert the format of the stored sound information.

The acoustic information output unit 140 is provided for outputting the stored acoustic information.

When the sound information is output, the display unit 150 is provided to sequentially visualize the first image information and the second image information.

Here, the acoustic information may include not only voice, such as a human voice or speech, but also sounds generated by the vibration of objects, and the image information means a picture or an image that expresses the shape or appearance of an object in two or three dimensions using lines or colors.

FIG. 2 is a flowchart illustrating a method of visualizing sound according to an embodiment of the present invention.

Hereinafter, the overall process of visualizing sound will be described with reference to FIG.

The sound visualization method according to this embodiment analyzes the characteristics of the sound information and visualizes a specific image together with the output sound information according to the analyzed characteristics. In order to visualize sound, acoustic information must first be collected and stored (S110).

Here, the acoustic information means information in which the sound is digitally stored.

The user may collect the sound information using the mobile terminal 100 and use the stored sound information; however, previously collected and stored sound information may also be used, in which case this step may be replaced with loading the sound information from the storage unit 120.

The image information visualized according to the acoustic information means first image information collected and stored in the same manner as the acoustic information and second image information in which the first image information is modified according to the characteristics of the acoustic information.

The first image information can be selected from any of the stored images.

The process of analyzing the characteristics of the acoustic information and the second image information generated according to the process will be described later.

Meanwhile, when the step of collecting and storing the acoustic information is completed, the user can select the first image information (S120).

Here, the first image information may be image information of any one of the image information previously collected and stored as described above.

When the user selects the first image information, it is possible to select whether to edit the sound information (S130).

If it is determined that the user needs to delete a specific portion of the stored sound information (S130-Y), the specific portion of the stored sound information may be deleted (S140).

In addition, when it is determined that at least two or more pieces of the sound information among the plurality of pieces of sound information are required to be synthesized (S130-Y), the user selects at least two pieces of sound information and synthesizes the selected pieces of sound information (S140).

At this time, in order to edit the sound information more easily, the user can convert the digitally configured format into a second waveform that visually expresses the sound wave of the sound information.

For example, if the sound information to be edited is popular music, the user can convert the format of the sound information from the digital format into the second waveform, divide the second waveform into the instrumental (accompaniment) part and each singer's voice, and remove the instruments or a voice according to the purpose; this is easier than editing the sound information directly in its digital structure.

Then, the edited acoustic information is converted into a digital format for outputting acoustic information.
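The patent describes deletion and synthesis of sound information only at the level of user operations, without an implementation. As an illustrative sketch (not part of the disclosure), if the stored sound information is modeled as a list of normalized samples, the two edit operations could look like this; the function names and the clipping behavior are assumptions:

```python
def delete_section(samples, start, end):
    """Delete the user-marked section samples[start:end] from the sound information."""
    return samples[:start] + samples[end:]

def synthesize(a, b):
    """Synthesize two pieces of sound information by summing samples, clipped to [-1, 1]."""
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))  # pad the shorter track with silence
    b = b + [0.0] * (n - len(b))
    return [max(-1.0, min(1.0, x + y)) for x, y in zip(a, b)]
```

After either operation, the result would be re-encoded into the digital format for output, as the text describes.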

On the other hand, when it is determined that editing of the sound information stored by the user is not necessary (S130-N), or when the editing of the stored sound information is completed (S140), the format of the sound information can be converted from the digital method to the first waveform (S150).

Here, the first waveform differs from the second waveform, which is merely a visual representation of the sound wave: the first waveform expresses the sound information at predetermined time intervals so that its characteristics can be analyzed.
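The disclosure does not specify the digital storage format. A minimal sketch of the digital-to-waveform conversion, assuming the stored sound information is little-endian 16-bit PCM (an assumption, not stated in the patent):

```python
import math
import struct

def pcm16_to_waveform(pcm_bytes):
    """Decode little-endian 16-bit PCM bytes into normalized samples in [-1.0, 1.0]."""
    n = len(pcm_bytes) // 2
    samples = struct.unpack("<%dh" % n, pcm_bytes[:n * 2])
    return [s / 32768.0 for s in samples]

# Stand-in "stored acoustic information": one second of a 440 Hz tone at 8 kHz.
rate = 8000
raw = struct.pack("<%dh" % rate,
                  *[int(32767 * math.sin(2 * math.pi * 440 * t / rate))
                    for t in range(rate)])
wave_samples = pcm16_to_waveform(raw)
```

The resulting sample list is the kind of time-series representation on which the analyses below (amplitude, frequency, inflection points) can operate.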

Then, the sound information format-converted into the first waveform can be analyzed to generate a plurality of patterns needed to transform the first image into the second image information (S160), and the second image information can be generated according to the generated plurality of patterns (S170).

When the second image information has been generated, the sound information is output (S180), and while the sound information is output, the first image and the second image can be continuously visualized (S190).

FIGS. 3A through 3D are views illustrating a screen of a mobile terminal 100 for visualizing sounds according to an exemplary embodiment of the present invention.

Hereinafter, a process of using the method of visualizing sound according to an embodiment of the present invention in the mobile terminal 100 will be described.

As shown in FIG. 3A, the user may click the record button 305 to collect the sound information and use it once stored; however, if the user clicks the audio retrieve button 310, the process of collecting and storing the sound information can be replaced with loading the sound information from the storage unit 120.

When the record button 305 shown in FIG. 3A is clicked, the recording page shown in FIG. 3B is output, and the recording start button 320 and the recording end button 325 can be used to record and store the sound information.

When the step of collecting and storing the sound information is completed as described above, the user can load the first image information by clicking the image load button 315 shown in FIG. 3A and selecting one of the stored images.

In addition, when the user selects the first image information, the user can select whether to edit the sound information by clicking the edit button 330 as shown in FIG. 3B.

At this time, to edit a specific section more easily, the user can convert the digitally configured format into a second waveform that visually expresses the sound wave of the sound information, and edit the information section by section by clicking the section edit button 335 shown in FIG. 3C.

On the other hand, when it is determined that editing of the stored sound information is not necessary, or when the editing of the stored sound information is completed, the user can click the analyze button 340 to access the analysis page, generate or edit a pattern, and generate second image information based on the generated pattern.

Specifically, the user can click the pattern creation button 350 to create a pattern, and the pattern editing button 355 can be clicked to edit the generated pattern.

Then, the user can generate the second image information based on the generated pattern or the edited pattern by clicking the image creation button 360.

Thereby, when the second image information is generated, the sound information is output (S180), and while the sound information is output, the first image and the second image can be continuously visualized.

FIG. 4 is a diagram for explaining a process of analyzing acoustic information to visualize sound in the sound visualization method according to an embodiment of the present invention. FIGS. 5A and 5B are diagrams for explaining a process of generating second image information, FIGS. 6A and 6B show the generated second image information, and FIGS. 7A and 7B are diagrams for explaining further examples of second image information generated in the sound visualization method according to an embodiment of the present invention.

Hereinafter, the process of analyzing the sound information to visualize sound in the sound visualization method according to the present embodiment will be described in more detail with reference to FIGS. 4 to 7B.

As described above, in the method of visualizing sound, when it is determined that editing of the stored sound information is not necessary, or when the editing of the stored sound information is completed, the format of the sound information can be converted from the digital format into the first waveform.

As shown in FIG. 4, the user may analyze, through the mobile terminal 100, the sound information format-converted into the first waveform to generate the plurality of patterns needed to transform the first image into the second image information.

When the first pattern and the second pattern are generated, the control unit 130 of the mobile terminal 100 transforms the first image information selected by the user according to the first pattern and the second pattern, thereby generating the second image information.

Here, a plurality of pieces of second image information may be generated such that at least one of the shape, color, and motion of the object included in the first image information selected by the user is transformed according to the first pattern and the second pattern, and an identification number can be added to each piece so that they can be individually identified.

More specifically, as shown in FIG. 5A, the first image selected by the user can be turned into respective pieces of second image information by the first pattern and the second pattern; a different identification number is added to each piece, and at least one of the shape, color, and motion of the included object is transformed, differently in each piece, as shown in FIG. 5B.

In the present embodiment, for convenience of explanation, it is assumed that the first pattern and the second pattern are generated when the height of the waveform rises or falls; however, the pattern generation process can be performed in more complicated and various ways.

More specifically, generating the plurality of patterns may use not only relative loudness (dB) information, pitch information, and tone information obtainable from the acoustic information, but also at least one of: first detection information obtained by detecting the amplitude and frequency of the first waveform; first conversion information obtained by converting the first waveform of the acoustic information with a Fast Fourier Transform; and second detection information obtained by detecting inflection points of the first waveform of the sound information.
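The Fast Fourier Transform step is named but not specified further. The sketch below uses a naive discrete Fourier transform (the FFT computes the same result faster) to find the dominant frequency bin of one analysis frame; the frame length and test signal are illustrative assumptions:

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude of each frequency bin up to the Nyquist bin."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def dominant_bin(frame):
    """Index of the strongest non-DC frequency bin in the frame."""
    mags = dft_magnitudes(frame)
    return max(range(1, len(mags)), key=lambda k: mags[k])

# A frame containing exactly 5 cycles of a sine peaks at bin 5.
n = 64
frame = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
```

Per-frame dominant bins over time form one plausible numeric input for the pattern-generation step.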

For example, when the speaker's speech is included in the acoustic information, the plurality of patterns can also be generated using the speaker's utterance information, obtained by detecting repeated changes such as the speaker's breathing intervals or pitch.
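How breathing intervals are detected is not disclosed. One plausible sketch treats runs of low-energy frames as candidate pauses; the frame length and energy threshold here are assumed values, not taken from the patent:

```python
def pause_intervals(samples, frame=160, threshold=0.05):
    """Return (start, end) frame-index runs whose RMS energy is below threshold."""
    quiet = []
    for i in range(0, len(samples) - frame + 1, frame):
        rms = (sum(s * s for s in samples[i:i + frame]) / frame) ** 0.5
        quiet.append(rms < threshold)
    intervals, start = [], None
    for idx, q in enumerate(quiet):
        if q and start is None:
            start = idx
        elif not q and start is not None:
            intervals.append((start, idx))
            start = None
    if start is not None:
        intervals.append((start, len(quiet)))
    return intervals

# Toy utterance: loud speech, a silent breath, loud speech again (160-sample frames).
speech = [0.5] * 320 + [0.0] * 320 + [0.5] * 320
```

The spacing and regularity of the returned intervals could then serve as the repeated-change signal the text describes.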

As another example, the sound information can be divided into units of a predetermined time or interval, the inflection points of the first waveform in each divided piece can be detected, the detected results can be quantified, and the plurality of patterns can be generated by detecting a regular pattern from the quantified results.
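The patent does not say how inflection points are detected or quantified. A common sketch counts sign changes of the discrete second difference of the first waveform; this is an assumption about the method, not the disclosed implementation:

```python
import math

def inflection_count(samples):
    """Count sign changes of the second difference (curvature zero crossings)."""
    d2 = [samples[i + 1] - 2 * samples[i] + samples[i - 1]
          for i in range(1, len(samples) - 1)]
    return sum(1 for a, b in zip(d2, d2[1:]) if a * b < 0)

def inflections_per_interval(samples, interval):
    """Quantify each fixed-length division of the sound information separately."""
    return [inflection_count(samples[i:i + interval])
            for i in range(0, len(samples), interval)]

# Two cycles of a phase-shifted sine; three interior curvature zero crossings.
sine = [math.sin(2 * math.pi * t / 100 + 0.1) for t in range(200)]
```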

As yet another example, the sound information can be divided into units of a predetermined time or interval, the amplitude and frequency of the first waveform of each divided piece can be detected, the magnitude of the detected amplitude can be quantified, and the plurality of patterns can be generated by detecting a regular pattern from the quantified results.

In this case, if the size of the second image information generated from each quantified result is made proportional to that result, second image information reflecting the variation of the amplitude of the first waveform of the acoustic information can be generated, as shown in FIGS. 6A and 6B.
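As a sketch of the proportionality described above (the mapping constants `base` and `gain` are assumptions), each quantified amplitude can be turned into a scale factor for the second image information:

```python
def scale_factors(amplitudes, base=1.0, gain=0.5):
    """Map non-negative quantified amplitudes to image scale factors, proportional to amplitude."""
    peak = max(amplitudes) or 1.0  # avoid division by zero for all-silent input
    return [base + gain * (a / peak) for a in amplitudes]
```

Quieter intervals keep the object near its original size; the loudest interval scales it the most.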

When the acoustic information is output through this process, the plurality of second images are continuously visualized together with the first image information while the acoustic information is output, so that an optical illusion of a moving object can be expressed.

In addition, an optical illusion effect in which the speed of the object's motion changes can be expressed by adjusting the speed at which the plurality of pieces of second image information, generated according to the frequency of the first waveform, are continuously visualized.
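The adjustment of visualization speed by frequency is not specified further; one simple assumed mapping makes the interval between successive second images inversely proportional to the detected frequency (the base interval and reference frequency are illustrative values):

```python
def frame_interval_ms(dominant_hz, base_ms=100.0, ref_hz=440.0):
    """Higher detected frequency -> shorter interval between second images -> faster motion."""
    return base_ms * ref_hz / max(dominant_hz, 1.0)
```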

Here, the waveform shown in FIG. 4 shows only one embodiment of the first waveform of the visualized sound and does not limit the present invention to a specific embodiment; likewise, although the first image information and the second image information shown in FIGS. 5 to 6 are drawn in simplified form for convenience of explanation, the first image information and the second image information of the present invention can be implemented in more complex and various forms.

Specifically, for example, when the first image information shown in FIG. 7A is transformed according to a plurality of patterns generated using the methods described above, second image information such as that shown in FIG. 7B can be generated by modifying the growth rate and growth direction of the branches of the first image information.

The sound visualization method according to the present embodiment can also use various digital image formats, such as raw, jpg, and png files.

8A and 8B are views illustrating a process of generating second image information for visualizing sound in a sound visualization method according to another embodiment of the present invention.

Hereinafter, the process of generating the second image information in the sound visualization method according to the present embodiment, which differs from the process of generating the second image information in the embodiment described above, will be described with reference to FIGS. 8A and 8B.

In the sound visualization method according to the present embodiment, the process of generating a plurality of patterns is the same as described above, so its description is omitted.

When a plurality of patterns are generated, the process of generating the second image information in this embodiment is not limited to generating pieces of second image information in which the first image information is transformed differently according to each pattern; as shown in FIG. 8A, when the patterns are arranged repeatedly, each pattern in order is reflected into the image information generated so far, so that the first image information is transformed cumulatively to generate the second image information.

For example, the first arranged pattern is reflected into the first image information to generate a piece of second image information, which is stored; the second arranged pattern is then reflected into the stored piece to generate a new piece of second image information, which is stored in turn; this process is repeated so that each remaining pattern is reflected into the most recently stored second image information, generating a new, further transformed piece each time.
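The cumulative pattern application described above can be sketched as a fold over the pattern sequence; the toy image model (a single scale value) and the transform are illustrative assumptions:

```python
def apply_patterns_cumulatively(first_image, patterns, transform):
    """Reflect each pattern into the most recently generated image, keeping every intermediate result."""
    images = []
    current = first_image
    for pattern in patterns:
        current = transform(current, pattern)  # new second image from the previous one
        images.append(current)
    return images

# Toy model: an "image" is a single scale value; each pattern multiplies it.
frames = apply_patterns_cumulatively(1.0, [2.0, 0.5, 3.0], lambda img, p: img * p)
```

Because every intermediate image is kept, n patterns yield n distinct pieces of second image information, matching the advantage the text claims over the per-pattern method.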

Accordingly, a plurality of mutually different pieces of second image information are generated, each transformed one step further from the first image information as each pattern is reflected. Compared with the sound visualization method according to the earlier embodiment, a larger number of pieces of second image information can be generated, which allows more varied expression when the first image information and the plurality of second images are continuously visualized.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, these embodiments are provided by way of illustration only and are not to be construed as limiting the scope of the invention as defined by the appended claims. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

100: mobile terminal
110: acoustic information collection unit
120: storage unit
130: control unit
140: acoustic information output unit
150: display unit

Claims (9)

Collecting and storing acoustic information;
Storing a plurality of pieces of image information so that specific image information is visualized according to the sound information when the stored sound information is output, and selecting first image information from among the plurality of pieces of image information;
Causing a format of the stored acoustic information to be converted;
Generating second image information such that the first image information is transformed according to the converted acoustic information; And
Wherein the first image information and the second image information are continuously visualized when the stored sound information is output.
The method according to claim 1,
Wherein the storing comprises collecting and storing the sound information in a digital format, and
wherein the converting comprises converting the format of the digitally stored sound information into a first waveform.
3. The method of claim 2,
Further comprising editing the stored sound information when the stored sound information comprises a plurality of pieces of sound information and it is determined that deletion of a specific portion of the stored sound information is required by the user, or when at least two pieces of sound information among the stored plurality of pieces of sound information are selected by the user and it is determined that synthesis of the selected at least two pieces of sound information is required.
The method according to claim 1,
Wherein the generating of the second image information comprises:
Generating a plurality of patterns such that the first image information is deformed according to a specific pattern; And
Generating a plurality of pieces of second image information according to the plurality of generated patterns; And displaying the visualized sound.
5. The method of claim 4,
Wherein each of the plurality of pieces of second image information is generated such that at least one of a shape, a color, and a motion of an object included in the first image information is transformed,
And wherein the generated pieces of image information are modified differently according to the respective patterns, so as to correspond to each of the plurality of generated patterns, and each is assigned an identification number.
5. The method of claim 4,
Wherein the generating of the plurality of patterns comprises:
Generating the plurality of patterns by using at least one of: decibel (dB) information, pitch information of the sound range, and tone information obtainable from the sound information; first detection information obtained by detecting an amplitude and a frequency of the first waveform of the acoustic information; first conversion information obtained by converting the first waveform of the acoustic information by a Fast Fourier Transform method; and second detection information obtained by detecting an inflection point of the first waveform of the acoustic information.
The method according to claim 6,
Wherein the generating of the plurality of patterns comprises:
Generating the plurality of patterns by using utterance information of a speaker, obtained by detecting the speaker's breath intervals or repetitive changes in the pitch of the speaker's voice, when the sound information includes the speaker's voice.
The method according to claim 1,
Wherein the visualization step comprises:
Visualizing the first image information and the second image information continuously such that, when the stored sound information is output, an object included in the first image information, at least one of whose shape, color, and motion is transformed in the second image information, appears to move according to the outputted sound information.
An acoustic information collecting unit for collecting acoustic information;
A storage unit for storing a plurality of pieces of image information and the collected sound information;
A controller for selecting first image information from among the stored plurality of pieces of image information so that specific image information is visualized according to the sound information when the stored sound information is output, for converting the format of the stored sound information, and for generating second image information such that the first image information is transformed according to the converted acoustic information;
An acoustic information output unit for outputting the stored acoustic information; And
And a display unit for continuously visualizing the first image information and the second image information when the sound information is outputted.
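The pattern inputs named in claim 6 — amplitude and frequency detection on the first waveform, a Fourier transform of that waveform, and inflection-point detection — can be illustrated with a minimal, dependency-free sketch. The function names are hypothetical, and a naive DFT stands in for the Fast Fourier Transform specified in the claim (same result, just slower); the slope-sign test is a simple proxy for the claim's inflection-point detection.

```python
import math

def detect_amplitude(samples):
    # First detection information: peak amplitude of the first waveform.
    return max(abs(s) for s in samples)

def dft_magnitudes(samples):
    # First conversion information: frequency-domain magnitudes. A naive
    # O(n^2) DFT stands in for the claim's FFT to keep this self-contained.
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def dominant_bin(samples):
    # Frequency detection: index of the strongest non-DC frequency bin.
    mags = dft_magnitudes(samples)
    return max(range(1, len(mags)), key=lambda k: mags[k])

def count_inflections(samples):
    # Second detection information: points where the waveform's slope
    # changes sign (local peaks/troughs, a simple proxy for inflections).
    count = 0
    for i in range(1, len(samples) - 1):
        d1 = samples[i] - samples[i - 1]
        d2 = samples[i + 1] - samples[i]
        if d1 * d2 < 0:
            count += 1
    return count

# A sine wave with 2 cycles sampled at 16 points.
wave = [math.sin(2 * math.pi * 2 * i / 16) for i in range(16)]
```

Any of these scalar features (peak amplitude, dominant frequency bin, inflection count) could then parameterize how a pattern deforms the shape, color, or motion of the visualized object.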
KR1020150175711A 2015-12-10 2015-12-10 Method for Visualization of Sound and Mobile terminal using the same KR20170068795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150175711A KR20170068795A (en) 2015-12-10 2015-12-10 Method for Visualization of Sound and Mobile terminal using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150175711A KR20170068795A (en) 2015-12-10 2015-12-10 Method for Visualization of Sound and Mobile terminal using the same

Publications (1)

Publication Number Publication Date
KR20170068795A true KR20170068795A (en) 2017-06-20

Family

ID=59281247

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150175711A KR20170068795A (en) 2015-12-10 2015-12-10 Method for Visualization of Sound and Mobile terminal using the same

Country Status (1)

Country Link
KR (1) KR20170068795A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022164134A1 (en) * 2021-01-29 2022-08-04 조관희 Method for generating image pattern on basis of music, and system for operating same


Similar Documents

Publication Publication Date Title
TWI433027B (en) An adaptive user interface
JP5895740B2 (en) Apparatus and program for performing singing synthesis
KR101521451B1 (en) Display control apparatus and method
JP6791258B2 (en) Speech synthesis method, speech synthesizer and program
EP1688912A2 (en) Voice synthesizer of multi sounds
KR101374353B1 (en) Sound processing apparatus
Selamtzis et al. Analysis of vibratory states in phonation using spectral features of the electroglottographic signal
Menexopoulos et al. The state of the art in procedural audio
KR20170068795A (en) Method for Visualization of Sound and Mobile terminal using the same
US8314321B2 (en) Apparatus and method for transforming an input sound signal
KR101471602B1 (en) Sound processing apparatus and sound processing method
WO2022227037A1 (en) Audio processing method and apparatus, video processing method and apparatus, device, and storage medium
JP4390289B2 (en) Playback device
JP4720974B2 (en) Audio generator and computer program therefor
Sharma et al. Towards understanding and verbalizing spatial sound phenomena in electronic music
KR101218336B1 (en) visualizing device for audil signal
US20150243066A1 (en) System for visualizing acoustic information
WO2019229936A1 (en) Information processing system
JP6834370B2 (en) Speech synthesis method
Vickery Through the Eye of the Needle: Compositional Applications for Visual/Sonic Interplay
Emmerson 6 Analysing Non-Score-Based Music
JP6390112B2 (en) Music information processing apparatus, program, and method
US10863047B2 (en) Converting media using mobile devices
US20230014604A1 (en) Electronic device for generating mouth shape and method for operating thereof
JP6676852B2 (en) Waveform control device, method, and program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E601 Decision to refuse application
E801 Decision on dismissal of amendment