CN111640445A - Audio difference detection method, device, equipment and readable storage medium - Google Patents
Audio difference detection method, device, equipment and readable storage medium
- Publication number
- CN111640445A (application CN202010405107.XA)
- Authority
- CN
- China
- Prior art keywords
- audio
- difference
- comparison
- preset
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
Abstract
The invention discloses an audio difference detection method, an audio difference detection device, audio difference detection equipment and a readable storage medium. The method acquires reference information for comparing a standard audio with a comparison audio, so that the two audios can be compared conveniently and effectively; the voice oscillograms of the standard audio and the comparison audio are automatically overlapped and compared according to the reference information, so that a user can quickly and effectively compare the oscillograms of the two audios, which improves the efficiency of the audio comparison operation and the user experience; and by further determining the similarity level of the two audios and separately outputting the difference portions and the specific difference data, the user can quickly obtain detailed information about the differences between the two audios, further improving the efficiency of acquiring audio difference information.
Description
Technical Field
The present invention relates to the field of audio processing technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for detecting an audio difference.
Background
With the development of science and technology and the great improvement of hardware computing capability, audio recognition technology has matured day by day and is widely applied in various fields. In the field of public security investigation, it is often necessary to compare suspect audio recordings, and during comparison and authentication a clerk usually has to listen repeatedly to distinguish the differing parts of the compared audio. However, manually and visually comparing the voice maps of the compared audios, or directly listening to distinguish them, is too cumbersome, and it is difficult to quickly determine the differences between the compared audios, which causes the technical problem of low audio difference comparison efficiency.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide an audio difference detection method, aiming at solving the technical problem of low audio difference comparison efficiency.
To achieve the above object, the present invention provides an audio difference detection method applied to an audio difference detection device, the audio difference detection method including the steps of:
receiving an audio comparison instruction, and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
acquiring a first voice oscillogram and a second voice oscillogram respectively corresponding to the standard audio and the comparison audio, and performing overlapping comparison on the first voice oscillogram and the second voice oscillogram based on the reference information;
and determining and outputting the difference portion of the first voice oscillogram and the second voice oscillogram and the corresponding difference data, and determining the similarity level of the standard audio and the comparison audio according to a preset threshold value.
Optionally, the preset threshold comprises a preset first threshold and a preset second threshold,
the step of determining the similarity level of the standard audio and the comparison audio according to a preset threshold comprises the following steps:
judging whether the overlapping rate of the first voice oscillogram and the second voice oscillogram exceeds a preset first threshold value;
if the overlapping rate does not exceed the preset first threshold value, determining that the similarity level is low similarity;
if the overlapping rate exceeds the preset first threshold value, judging whether it exceeds a preset second threshold value, wherein the preset first threshold value is smaller than the preset second threshold value;
and if the overlapping rate does not exceed the preset second threshold value, determining that the similarity level is moderate similarity.
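Expressed as code, the two-threshold decision above can be sketched as follows. This is a minimal illustration: the function name, the return labels, and the default threshold values are our assumptions, since the patent only requires that the first threshold be smaller than the second.

```python
def similarity_level(overlap_rate: float,
                     first_threshold: float = 0.3,
                     second_threshold: float = 0.9) -> str:
    """Map the waveform overlap rate to a similarity level.

    The thresholds are hypothetical; the patent only requires
    first_threshold < second_threshold.
    """
    if overlap_rate <= first_threshold:
        return "low"            # does not exceed the first threshold
    if overlap_rate <= second_threshold:
        return "moderate"       # exceeds the first but not the second
    # Above the second threshold, the method proceeds to spectrogram
    # comparison to distinguish high similarity from dubbed audio.
    return "high-or-dubbing"
```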
Optionally, after the step of determining whether the preset second threshold is exceeded, the method further includes:
if the overlapping rate exceeds the preset second threshold value, performing fast Fourier transform on the standard audio and the comparison audio to generate a first spectrogram and a second spectrogram respectively;
comparing the first spectrogram with the second spectrogram to obtain a feature difference, and judging whether the feature difference meets a preset spectrogram feature condition;
if the feature condition is not met, determining that the similarity level is high similarity;
and if the feature condition is met, marking the comparison audio as the dubbing audio of the standard audio.
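The fast-Fourier-transform step that produces the spectrograms can be sketched with a framewise FFT. This is a minimal NumPy stand-in for the patent's unspecified implementation; the frame length, hop size, and Hann window are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram via a Hann-windowed framewise FFT.

    A sketch of the 'fast Fourier transform ... generate a spectrogram'
    step; a production system would likely use an STFT routine from
    scipy.signal or librosa instead.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Rows are time frames, columns are frequency bins up to Nyquist.
    return np.abs(np.fft.rfft(frames, axis=1))
```

For an 8 kHz signal, each frequency bin then spans 8000 / frame_len = 31.25 Hz, so a 1 kHz tone peaks in bin 32.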
Optionally, the step of determining whether the feature difference between the first spectrogram and the second spectrogram meets a preset spectrogram feature condition includes:
judging whether the synchronization rate of the formant edge frequency between the first spectrogram and the second spectrogram reaches a preset third threshold value or not;
if the synchronization rate reaches the preset third threshold value, judging that the feature difference meets the preset speech spectrum feature condition;
and if the synchronization rate does not reach the preset third threshold value, judging that the feature difference does not meet the preset speech spectrum feature condition.
Optionally, the step of outputting the difference part of the first voice waveform map and the second voice waveform map and the corresponding difference data includes:
intercepting and displaying a difference part comparison map of the first voice oscillogram and the second voice oscillogram;
and acquiring an amplitude difference value and a time difference value between the first voice oscillogram and the second voice oscillogram, and correspondingly displaying the amplitude difference value and the time difference value in the difference part comparison map, wherein the difference data comprises the amplitude difference value and the time difference value.
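The amplitude difference value and time difference value could be computed, for example, as below. This is a sketch under assumptions: the patent does not specify how either value is derived, so the peak per-sample difference and a cross-correlation lag are used as plausible stand-ins.

```python
import numpy as np

def difference_values(ref: np.ndarray, cmp: np.ndarray):
    """Return (amplitude_difference, time_difference_in_samples).

    Amplitude difference: peak absolute sample difference over the
    overlapping region. Time difference: the lag maximizing the
    cross-correlation (our assumption; the patent fixes no method).
    """
    n = min(len(ref), len(cmp))
    amp_diff = float(np.max(np.abs(ref[:n] - cmp[:n])))
    # argmax of the full cross-correlation, shifted so that a positive
    # lag means cmp is delayed relative to ref.
    corr = np.correlate(cmp, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return amp_diff, lag
```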
Optionally, after the step of determining the similarity level between the standard audio and the comparison audio according to a preset threshold, the method further includes:
and intercepting a target audio part corresponding to the difference part comparison map in the standard audio and the comparison audio, and associating the target audio part with the difference part comparison map.
Optionally, before the step of obtaining the first speech waveform diagram and the second speech waveform diagram corresponding to the standard audio and the comparison audio respectively, the method further includes:
and performing noise reduction processing on the standard audio and the comparison audio.
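As a placeholder for this unspecified noise-reduction step, a simple smoothing filter might look like the following; a real system would more likely use spectral subtraction or Wiener filtering.

```python
import numpy as np

def denoise(signal: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Moving-average smoothing as a stand-in for the patent's
    noise-reduction step (which it does not specify)."""
    pad = kernel // 2
    # Edge-pad so the output has the same length as the input.
    padded = np.pad(signal, pad, mode="edge")
    window = np.ones(kernel) / kernel
    return np.convolve(padded, window, mode="valid")
```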
In addition, to achieve the above object, the present invention also provides an audio difference detecting apparatus, including:
the audio information acquisition module is used for receiving an audio comparison instruction and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
the voice waveform comparison module is used for acquiring a first voice waveform diagram and a second voice waveform diagram which respectively correspond to the standard audio and the comparison audio, and performing overlapping comparison on the first voice waveform diagram and the second voice waveform diagram based on the reference information;
and the similar grade determining module is used for determining and outputting the difference part of the first voice oscillogram and the second voice oscillogram and corresponding difference data, and determining the similar grade of the standard audio and the comparison audio according to a preset threshold value.
Further, the similarity level determination module includes:
the first threshold judging unit is used for judging whether the overlapping rate of the first voice oscillogram and the second voice oscillogram exceeds a preset first threshold value;
the low similarity judging unit is used for determining the similarity level as low similarity if the overlapping rate does not exceed the preset first threshold value;
the second threshold judging unit is used for judging whether the overlapping rate exceeds a preset second threshold value if it exceeds the preset first threshold value, wherein the preset first threshold value is smaller than the preset second threshold value;
and the moderate similarity judging unit is used for determining the similarity level as moderate similarity if the overlapping rate does not exceed the preset second threshold value.
Further, the similarity level determination module includes:
the spectrogram generating unit is used for performing fast Fourier transform on the standard audio and the comparison audio if the overlapping rate exceeds the preset second threshold value, to generate a first spectrogram and a second spectrogram respectively;
the speech spectrum feature judgment unit is used for comparing the first speech spectrogram with the second speech spectrogram to obtain a feature difference and judging whether the feature difference meets a preset speech spectrum feature condition;
the high similarity determination unit is used for determining the similarity level as high similarity if the feature condition is not met;
and the dubbing audio judgment unit is used for marking the comparison audio as the dubbing audio of the standard audio if the feature condition is met.
Further, the similarity level determination module includes:
a third threshold value judging unit, configured to judge whether a synchronization rate of a formant edge frequency between the first spectrogram and the second spectrogram reaches a preset third threshold value;
the condition satisfaction judging unit is used for judging that the feature difference meets the preset speech spectrum feature condition if the synchronization rate reaches the preset third threshold value;
and the condition dissatisfaction judging unit is used for judging that the feature difference does not meet the preset speech spectrum feature condition if the synchronization rate does not reach the preset third threshold value.
Further, the similarity level determination module includes:
the difference comparison display unit is used for intercepting and displaying a difference part comparison map of the first voice oscillogram and the second voice oscillogram;
and the difference value display unit is used for acquiring an amplitude difference value and a time difference value between the first voice oscillogram and the second voice oscillogram, and correspondingly displaying the amplitude difference value and the time difference value in the difference part comparison map, wherein the difference data comprises the amplitude difference value and the time difference value.
Further, the audio difference detecting apparatus further includes:
and the difference audio association module is used for intercepting a target audio part corresponding to the difference part comparison map in the standard audio and the comparison audio and associating the target audio part with the difference part comparison map.
Further, the voice waveform comparison module further comprises:
and the audio noise reduction unit is used for performing noise reduction processing on the standard audio and the comparison audio.
Further, to achieve the above object, the present invention also provides an audio difference detecting apparatus comprising: a memory, a processor and an audio difference detection program stored on the memory and executable on the processor, the audio difference detection program when executed by the processor implementing the steps of the audio difference detection method as described above.
Furthermore, to achieve the above object, the present invention also provides a computer readable storage medium having an audio difference detection program stored thereon, which when executed by a processor, implements the steps of the audio difference detection method as described above.
The invention provides an audio difference detection method, an audio difference detection device, audio difference detection equipment and a computer-readable storage medium. The audio difference detection method comprises: receiving an audio comparison instruction, and obtaining standard audio, comparison audio and reference information determined based on the audio comparison instruction; acquiring a first voice oscillogram and a second voice oscillogram respectively corresponding to the standard audio and the comparison audio, and performing overlapping comparison on the first voice oscillogram and the second voice oscillogram based on the reference information; and determining and outputting the difference portion of the two oscillograms and the corresponding difference data, and determining the similarity level of the standard audio and the comparison audio according to a preset threshold value. By acquiring the reference information for comparing the standard audio with the comparison audio, the two audios can be compared conveniently and effectively; by automatically overlapping and comparing the voice oscillograms of the standard audio and the comparison audio according to the reference information, a user can effectively compare the oscillograms of the two audios with one click, which improves the efficiency of the audio comparison operation and the user experience; and by further determining the similarity level of the two audios and separately outputting the difference portions and the specific difference data, the user can quickly obtain detailed information about the differences between the two audios, which further improves the efficiency of acquiring audio difference information and solves the technical problem of low audio difference comparison efficiency.
Drawings
FIG. 1 is a schematic diagram of an apparatus architecture of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of an audio difference detection method according to the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of an audio difference detection method according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention may be a PC, or a mobile terminal device with a display function, such as a smart phone, a tablet computer, an MP3 (MPEG-1 Audio Layer III) player, an MP4 player, and the like.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, an audio circuit, a WiFi module, and the like.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an audio difference detection program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke an audio difference detection program stored in the memory 1005 and perform the following operations:
receiving an audio comparison instruction, and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
acquiring a first voice oscillogram and a second voice oscillogram respectively corresponding to the standard audio and the comparison audio, and performing overlapping comparison on the first voice oscillogram and the second voice oscillogram based on the reference information;
and determining and outputting the difference portion of the first voice oscillogram and the second voice oscillogram and the corresponding difference data, and determining the similarity level of the standard audio and the comparison audio according to a preset threshold value.
Further, the processor 1001 may call the audio difference detection program stored in the memory 1005, and also perform the following operations:
judging whether the overlapping rate of the first voice oscillogram and the second voice oscillogram exceeds a preset first threshold value;
if the overlapping rate does not exceed the preset first threshold value, determining that the similarity level is low similarity;
if the overlapping rate exceeds the preset first threshold value, judging whether it exceeds a preset second threshold value, wherein the preset first threshold value is smaller than the preset second threshold value;
and if the overlapping rate does not exceed the preset second threshold value, determining that the similarity level is moderate similarity.
Further, the processor 1001 may call the audio difference detection program stored in the memory 1005, and also perform the following operations:
if the overlapping rate exceeds the preset second threshold value, performing fast Fourier transform on the standard audio and the comparison audio to generate a first spectrogram and a second spectrogram respectively;
comparing the first spectrogram with the second spectrogram to obtain a feature difference, and judging whether the feature difference meets a preset spectrogram feature condition;
if the feature condition is not met, determining that the similarity level is high similarity;
and if the feature condition is met, marking the comparison audio as the dubbing audio of the standard audio.
Further, the processor 1001 may call the audio difference detection program stored in the memory 1005, and also perform the following operations:
judging whether the synchronization rate of the formant edge frequency between the first spectrogram and the second spectrogram reaches a preset third threshold value or not;
if the synchronization rate reaches the preset third threshold value, judging that the feature difference meets the preset speech spectrum feature condition;
and if the synchronization rate does not reach the preset third threshold value, judging that the feature difference does not meet the preset speech spectrum feature condition.
Further, the processor 1001 may call the audio difference detection program stored in the memory 1005, and also perform the following operations:
intercepting and displaying a difference part comparison map of the first voice oscillogram and the second voice oscillogram;
and acquiring an amplitude difference value and a time difference value between the first voice oscillogram and the second voice oscillogram, and correspondingly displaying the amplitude difference value and the time difference value in the difference part comparison map, wherein the difference data comprises the amplitude difference value and the time difference value.
Further, the processor 1001 may call the audio difference detection program stored in the memory 1005, and also perform the following operations:
and intercepting a target audio part corresponding to the difference part comparison map in the standard audio and the comparison audio, and associating the target audio part with the difference part comparison map.
Further, the processor 1001 may call the audio difference detection program stored in the memory 1005, and also perform the following operations:
and performing noise reduction processing on the standard audio and the comparison audio.
Based on the above hardware structure, various embodiments of the audio difference detection method of the present invention are proposed.
With the development of science and technology and the great improvement of hardware computing capability, audio recognition technology has matured day by day and is widely applied in various fields. In the field of public security investigation, it is often necessary to compare suspect audio recordings, and during comparison and authentication a clerk usually has to listen repeatedly to distinguish the differing parts of the compared audio. However, manually and visually comparing the voice maps of the compared audios, or directly listening to distinguish them, is too cumbersome, and it is difficult to quickly determine the differences between the compared audios, which causes the technical problem of low audio difference comparison efficiency.
To solve the above problems, the present invention provides an audio difference detection method. By obtaining reference information for comparing a standard audio with a comparison audio, the two audios can be compared conveniently and effectively; by automatically overlapping and comparing the voice oscillograms of the standard audio and the comparison audio according to the reference information, a user can effectively compare the oscillograms of the two audios with one click, improving the efficiency of the audio comparison operation and the user experience; and by further determining the similarity level of the two audios and separately outputting the difference portions and the specific difference data, the user can quickly obtain detailed information about the differences between the two audios, further improving the efficiency of acquiring audio difference information and solving the technical problem of low audio difference comparison efficiency. The audio difference detection method is applied to the terminal.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of an audio difference detection method.
A first embodiment of the present invention provides an audio difference detection method, including the steps of:
step S10, receiving an audio comparison instruction, and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
in this embodiment, the audio comparison instruction is used to create an audio comparison task on the terminal, and the comparison audio is compared with the standard audio as a reference. The instruction can be initiated to the terminal by the user in real time according to the actual condition, and can also be automatically initiated by the terminal according to a preset program. The number of the reference tones may be one or more, and this embodiment is not particularly limited thereto. The standard audio can be designated by a user or automatically determined by the terminal according to a preset program. It should be noted that, a single standard audio may be assigned by one audio comparison task, and different audios may also be assigned as standard audios to perform multiple comparisons, so as to obtain multiple comparison results. The reference information is used to determine the comparison start position of the standard audio and the comparison audio, and may be a designated time point or a designated syllable, etc. Specifically, the user currently imports audio of 5 minutes and 23 seconds duration and audio of 5 minutes duration into the computer. And the suspected audio time length of the audio with the time length of 5 minutes is the recorded audio of the audio with the time length of 5 minutes and 23 seconds. In order to further confirm the information about the audio, the user creates an audio comparison task in the computer, namely, sends an audio comparison instruction to the computer. And the computer receives the audio comparison instruction, acquires the standard audio with the time length of 5 minutes and 23 seconds in the instruction, and the comparison audio with the time length of 5 minutes and the appointed comparison starting time point.
Step S20, obtaining a first voice oscillogram and a second voice oscillogram corresponding to the standard audio and the comparison audio respectively, and comparing the first voice oscillogram and the second voice oscillogram in an overlapping manner based on the reference information;
in this embodiment, the speech waveform is a time domain waveform of the speech signal, the abscissa of the time domain waveform is time, and the ordinate of the time domain waveform is amplitude. The first voice oscillogram is a time domain oscillogram corresponding to a voice signal of a standard audio frequency, and the second voice oscillogram is a time domain oscillogram corresponding to a voice signal of a contrast audio frequency. Specifically, the setting in the specific embodiment in step S10 is continued. The computer can rapidly draw a first voice oscillogram and a second voice oscillogram respectively corresponding to a standard audio with the time length of 5 minutes and 23 seconds and a comparison audio with the time length of 5 minutes in a current audio comparison task by means of a software tool, and the technology is a means in the prior art and is not repeated herein. The computer overlaps the first voice waveform image and the second voice waveform image according to the starting time points of the two audios designated by the user as a reference, and highlights the difference part in the two images. For example, the first voice waveform diagram and the second voice waveform diagram have three waveform differences, namely a segment from 1 minute 38 seconds to 1 minute 49 seconds, a segment from 3 minutes 42 seconds to 3 minutes 47 seconds and a segment from 4 minutes 17 seconds to 4 minutes 22 seconds. The computer can amplify the three different parts and highlight the three different parts with different colors, the amplification scale and the display color can be set according to the actual situation and can be automatically adjusted by the user, and the embodiment does not specifically limit the amplification scale and the display color.
Step S30, determining and outputting a difference portion between the first speech waveform diagram and the second speech waveform diagram and corresponding difference data, and determining a similarity level between the standard audio and the comparison audio according to a preset threshold.
In the present embodiment, the preset threshold is used to determine the similarity level of the comparison audio with respect to the standard audio, and a plurality of thresholds may be set so as to subdivide a plurality of similarity levels. For ease of judgment, the thresholds are usually set in the form of percentages. The similarity level includes a plurality of level settings; for example, the levels may be set as low similarity, moderate similarity, high similarity, etc., and may be flexibly set according to the actual situation, which is not specifically limited in this embodiment. The difference part is displayed in the form of a comparison map, which may be composed of waveform diagram segments, and the difference data may include a difference percentage, a difference amplitude, a difference duration, and the like. Specifically, the similarity levels are set to the three levels of low similarity, moderate similarity and high similarity, with corresponding thresholds of 30%, 60% and 90% respectively, and the setting of the embodiment in step S20 is continued. The computer automatically intercepts the waveform diagram segments corresponding to the three difference time segments of 1 minute 38 seconds to 1 minute 49 seconds, 3 minutes 42 seconds to 3 minutes 47 seconds, and 4 minutes 17 seconds to 4 minutes 22 seconds in the first speech waveform diagram and the second speech waveform diagram, and generates a difference waveform diagram segment comparison map in chronological order; the specific start and end times, difference percentage, difference amplitude, difference duration and the like can be displayed below each pair of waveform diagram segments in the comparison map, so that the user can quickly obtain more specific difference information.
In this embodiment, the standard audio, the comparison audio, and the reference information determined based on the audio comparison instruction are acquired by receiving the audio comparison instruction; the first speech waveform diagram and the second speech waveform diagram respectively corresponding to the standard audio and the comparison audio are acquired, and compared in an overlapping manner based on the reference information; and the difference part of the first speech waveform diagram and the second speech waveform diagram and the corresponding difference data are determined and output, and the similarity level of the standard audio and the comparison audio is determined according to the preset threshold. In this manner, the reference information for comparing the standard audio with the comparison audio is obtained, so that the two audios can be compared conveniently and effectively; the speech waveform diagrams of the standard audio and the comparison audio are automatically compared in an overlapping manner according to the reference information, so that the user can effectively compare the waveform diagrams of the two audios with one key, which improves the efficiency of the audio comparison operation and the user experience; and by further determining the similarity level of the two audios and separately outputting the difference part and the specific difference data, the user can quickly obtain detailed information on the differences between the two audios, which further improves the efficiency of acquiring audio difference information and solves the technical problem of low audio difference comparison efficiency.
Referring to fig. 2, fig. 2 is a flowchart illustrating a second embodiment of an audio difference detection method.
Based on the first embodiment shown in fig. 2, a second embodiment of the audio difference detection method of the present invention is proposed. In the present embodiment, step S30 includes:
step S31, determining whether the overlapping rate of the first voice waveform diagram and the second voice waveform diagram exceeds a preset first threshold;
in this embodiment, it should be noted that the similarity level is divided into three levels from low to high: low similarity, moderate similarity, and high similarity. The preset first threshold is used to determine whether the similarity level between the first speech waveform diagram and the second speech waveform diagram is low similarity, and may be flexibly set according to the actual situation, which is not specifically limited in this embodiment. Specifically, the preset first threshold is set to 30%. The computer obtains the overlap rate of the first speech waveform diagram and the second speech waveform diagram and judges whether the overlap rate exceeds the preset first threshold of 30%.
Step S32, if the preset first threshold is not exceeded, determining that the similarity level is low similarity;
in this embodiment, if the terminal determines that the overlap rate of the first speech waveform diagram and the second speech waveform diagram does not exceed the preset first threshold, the similarity level is determined to be low similarity. Specifically, suppose the user wants to determine whether the comparison audio is dubbing audio of the standard audio. The actual overlap rate obtained by the computer is 25%, which does not reach 30%, so the similarity level of the comparison audio with respect to the standard audio is judged to be low similarity, and the possibility that the comparison audio is the dubbing audio is excluded.
Step S33, if the first threshold is exceeded, determining whether the second threshold is exceeded, wherein the first threshold is smaller than the second threshold;
in this embodiment, if the terminal determines that the overlapping rate of the first speech waveform diagram and the second speech waveform diagram exceeds the preset first threshold, it needs to further determine whether the overlapping rate still exceeds the preset second threshold. The preset second threshold is used to determine whether the comparison audio is moderately similar to the standard audio in the similar level, and is flexibly set according to an actual situation, which is not specifically limited in this embodiment. It should be noted that the preset second threshold is certainly greater than the preset first threshold. Specifically, the preset first threshold value is set to 30%, and the preset second threshold value is set to 60%.
In step S34, if the preset second threshold is not exceeded, it is determined that the similarity level is moderate similarity.
In this embodiment, if the terminal determines that the overlap rate of the first speech waveform diagram and the second speech waveform diagram is greater than the preset first threshold but not greater than the preset second threshold, the similarity level may be determined to be moderate similarity. Specifically, suppose the user wants to determine whether the comparison audio is dubbing audio of the standard audio. The actual overlap rate obtained by the computer is 45%, which exceeds the preset first threshold of 30% but does not reach the preset second threshold of 60%; the similarity level of the comparison audio with respect to the standard audio is therefore determined to be moderate similarity, and the possibility that the comparison audio is the dubbing audio is likewise excluded.
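Steps S31 to S34 amount to a two-threshold decision on the time-domain overlap rate. A minimal sketch, using the 30% and 60% values from the embodiment as default parameters:

```python
def similarity_level(overlap_rate, first_threshold=0.30, second_threshold=0.60):
    """Time-domain classification of steps S31-S34; rates above the second
    threshold defer to the frequency-domain check of steps S35-S38."""
    if overlap_rate <= first_threshold:
        return "low similarity"
    if overlap_rate <= second_threshold:
        return "moderate similarity"
    return "needs frequency-domain check"
```

With the example values from the text, an overlap rate of 25% yields low similarity and 45% yields moderate similarity, in both cases excluding the possibility that the comparison audio is dubbing audio.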
Further, in this embodiment, after step S33, the method further includes:
step S35, if the preset second threshold is exceeded, performing fast Fourier transform on the standard audio and the comparison audio to generate a first spectrogram and a second spectrogram respectively;
in this embodiment, the abscissa of the spectrogram is time, the ordinate is frequency, and the value at each coordinate point is the speech data energy, i.e., an energy value, represented by color depth. The first spectrogram is the spectrogram corresponding to the speech signal of the standard audio, and the second spectrogram is the spectrogram corresponding to the speech signal of the comparison audio. If the terminal judges that the overlap rate of the first speech waveform diagram and the second speech waveform diagram exceeds the preset second threshold, then because the information obtainable in the time domain is limited, the similarity between the standard audio and the comparison audio needs to be judged in more detail through features in the frequency domain, so as to ensure the accuracy of the similarity level judgment. To acquire the frequency domain information of a speech signal, a Fast Fourier Transform (FFT) is first performed on the speech signal. The step of generating the corresponding spectrogram from the speech signal is prior art and is not described herein again.
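Generating a magnitude spectrogram with short-time FFTs can be sketched as follows. This is a numpy-only illustration of the general technique, not the patent's implementation; the window and hop sizes are hypothetical choices.

```python
import numpy as np

def spectrogram(signal, rate, win=256, hop=128):
    """Magnitude spectrogram via short-time FFT: rows are frequency bins,
    columns are time frames."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T
    freqs = np.fft.rfftfreq(win, d=1.0 / rate)       # ordinate: frequency
    times = np.arange(spec.shape[1]) * hop / rate    # abscissa: time
    return freqs, times, spec
```

The energy values in `spec` are what a display layer would map to color depth; applying this to the standard audio and the comparison audio yields the first and second spectrograms.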
Step S36, comparing the first spectrogram and the second spectrogram to obtain a characteristic difference, and judging whether the characteristic difference meets a preset spectrogram characteristic condition;
in this embodiment, the predetermined speech spectrum characteristic condition may be whether the energy difference reaches a predetermined threshold, or whether the synchronization rate of the formant edge frequency reaches a predetermined threshold. The characteristic difference may be an energy value difference, a formant frequency difference, or the like. And the computer performs overlapping comparison on the first spectrogram and the second spectrogram based on the reference information to acquire the characteristic difference between the first spectrogram and the second spectrogram and judges whether the characteristic difference meets the preset spectrogram characteristic condition.
Step S37, if not, determining that the similarity level is high similarity;
in this embodiment, if the computer determines that the feature difference between the first spectrogram and the second spectrogram does not satisfy the predetermined spectrogram feature condition, the similarity level of the comparison audio with respect to the standard audio may be determined to be highly similar.
And step S38, if yes, marking the comparison audio as the dubbing audio of the standard audio.
In this embodiment, if the computer determines that the feature difference between the first spectrogram and the second spectrogram meets the predetermined spectral feature condition, it may be determined that the similarity between the comparison audio and the standard audio meets the criterion of the dubbing audio, and the comparison audio is marked as the dubbing audio of the standard audio.
Further, not shown in the figure, in the present embodiment, the step S36 includes:
step a, judging whether the synchronization rate of the formant edge frequency between the first spectrogram and the second spectrogram reaches a preset third threshold value;
in this embodiment, the preset third threshold, which applies to the frequency domain, has no fixed magnitude relationship with the preset first threshold and the preset second threshold, which apply to the time domain; it may be flexibly set according to the actual situation and is not specifically limited in this embodiment. Formants are the regions of the sound spectrum where energy is relatively concentrated: when sound passes through a resonant cavity it is filtered by the cavity, so that the energy at different frequencies is redistributed, one part being strengthened by the cavity's resonance and another part being attenuated. Because the energy distribution is uneven, the strengthened part resembles a peak and is therefore called a formant. In speech acoustics, formants determine the quality of vowels, and in computer music they are important parameters for determining timbre and sound quality, so the degree of similarity of audio can be judged using formant-related parameters. The computer determines whether the synchronization rate of the formant edge frequencies between the first spectrogram and the second spectrogram reaches the preset third threshold, for example 90%. The formant edge frequencies may include the upper edge frequency and the lower edge frequency of a formant.
B, if the preset third threshold value is reached, judging that the characteristic difference meets the preset speech spectrum characteristic condition;
in this embodiment, if the computer determines that the synchronization rate of the formant edge frequencies between the first spectrogram and the second spectrogram reaches the preset third threshold, it may be determined that the feature difference between the first spectrogram and the second spectrogram satisfies the preset spectrogram feature condition.
And c, if the preset third threshold is not reached, judging that the feature difference does not meet the preset speech spectrum feature condition.
In this embodiment, if the computer determines that the synchronization rate of the formant edge frequency between the first spectrogram and the second spectrogram does not reach the preset third threshold, it may be determined that the feature difference between the first spectrogram and the second spectrogram does not satisfy the preset spectrogram feature condition.
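Steps a through c can be illustrated with a simple per-frame agreement measure. The sketch below assumes the formant edge frequencies have already been extracted as one value per analysis frame for each audio, and the agreement tolerance `tol_hz` is a hypothetical parameter; the patent does not define how the synchronization rate is computed.

```python
import numpy as np

def edge_frequency_sync_rate(edges_a, edges_b, tol_hz=50.0):
    """Fraction of frames where the two formant edge-frequency tracks
    agree within tol_hz (an illustrative stand-in for the sync rate)."""
    a = np.asarray(edges_a, dtype=float)
    b = np.asarray(edges_b, dtype=float)
    n = min(len(a), len(b))
    return float(np.mean(np.abs(a[:n] - b[:n]) <= tol_hz))

def meets_spectrum_condition(edges_a, edges_b, third_threshold=0.90):
    """Steps b and c: the condition is met iff the synchronization rate
    reaches the preset third threshold (90% in the embodiment)."""
    return edge_frequency_sync_rate(edges_a, edges_b) >= third_threshold
```

When the condition is met, the comparison audio would be marked as dubbing audio of the standard audio (step S38); otherwise the similarity level is high similarity (step S37).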
In this embodiment, the specific similarity level of the comparison audio with respect to the standard audio is further determined from the time domain information of the audio by setting the first threshold and the second threshold, so that qualitative similarity determination is quickly and intuitively provided for a user, and the efficiency of similarity determination is improved; by acquiring the frequency domain information of the audio and setting the third threshold, the similarity degree of the audio can be more accurately judged, and the accuracy of judging the similarity degree of the audio is improved; the accuracy of audio similarity judgment is further improved by setting the speech spectrum characteristic conditions according to the relevant information of the formants.
Further, not shown in the drawings, a third embodiment of the audio difference detection method of the present invention is proposed based on the first embodiment shown in fig. 2. In the present embodiment, step S30 includes:
d, intercepting and displaying a difference part comparison map of the first voice oscillogram and the second voice oscillogram;
in this embodiment, the computer intercepts the difference waveform diagram segments in the first speech waveform diagram and the second speech waveform diagram, displays each segment of the standard audio on top and the corresponding segment of the comparison audio below it, aligned according to the horizontal time axis, and arranges the resulting comparison maps in chronological order for the user to view.
And e, acquiring an amplitude difference value and a time difference value between the first voice oscillogram and the second voice oscillogram, and correspondingly displaying the amplitude difference value and the time difference value in the difference part comparison map, wherein the difference data comprises the amplitude difference value and the time difference value.
In this embodiment, the difference data are the amplitude difference value and the time difference value. The computer calculates the amplitude difference value and the time difference value between the first speech waveform diagram and the second speech waveform diagram, and displays them at the corresponding positions in the difference part comparison map to facilitate the user's comparison and analysis.
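For each divergent segment, the difference data of step e could be computed as below. This is an illustrative sketch; in particular, interpreting the "time difference value" as the segment's duration is an assumption on our part, since the text does not define it.

```python
import numpy as np

def segment_difference_data(standard, comparison, rate, start_s, end_s):
    """Peak amplitude difference and duration for one difference segment,
    given aligned mono signals and the segment's start/end in seconds."""
    s, e = int(start_s * rate), int(end_s * rate)
    a, b = standard[s:e], comparison[s:e]
    return {
        "amplitude_difference": float(np.max(np.abs(a - b))),
        "time_difference_s": (e - s) / rate,  # assumption: duration of segment
    }
```

The returned values are the kind of per-segment figures that would be shown beneath each pair of waveform diagram segments in the comparison map.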
Further, in this embodiment, after step S30, the method further includes:
and f, intercepting a target audio part corresponding to the difference part comparison map in the standard audio and the comparison audio, and associating the target audio part with the difference part comparison map.
In this embodiment, the computer separately intercepts the difference audio segments of the standard audio and the comparison audio and associates them with the difference part comparison map, so that the user can directly listen to the corresponding audio segment by clicking the difference part comparison map.
Further, in this embodiment, before step S20, the method further includes:
and g, performing noise reduction processing on the standard audio and the comparison audio.
In this embodiment, before generating the corresponding speech waveform, the computer may perform noise reduction on the standard audio and the reference audio by using a Convolutional Neural Network (CNN) to reduce errors.
In this embodiment, by separately intercepting the difference part comparison map and displaying detailed difference data, the user can obtain the difference information between the standard audio and the comparison audio more intuitively and quickly, which further improves the efficiency of difference judgment; by associating the difference part comparison map with the corresponding audio segments, the user can listen to the corresponding audio segment directly by clicking the difference part comparison map, saving listening time; and by denoising the standard audio and the comparison audio before generating the speech waveform diagrams, the interference of noise is eliminated and the accuracy of the finally obtained difference result is improved.
The present invention also provides an audio difference detecting apparatus, including:
the audio information acquisition module is used for receiving an audio comparison instruction and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
the voice waveform comparison module is used for acquiring a first voice waveform diagram and a second voice waveform diagram which respectively correspond to the standard audio and the comparison audio, and performing overlapping comparison on the first voice waveform diagram and the second voice waveform diagram based on the reference information;
and the similar grade determining module is used for determining and outputting the difference part of the first voice oscillogram and the second voice oscillogram and corresponding difference data, and determining the similar grade of the standard audio and the comparison audio according to a preset threshold value.
The invention also provides audio difference detection equipment.
The audio difference detection device comprises a processor, a memory and an audio difference detection program stored on the memory and executable on the processor, wherein the audio difference detection program, when executed by the processor, implements the steps of the audio difference detection method as described above.
The method implemented when the audio difference detection program is executed may refer to various embodiments of the audio difference detection method of the present invention, and will not be described herein again.
The invention also provides a computer readable storage medium.
The computer-readable storage medium of the present invention has stored thereon an audio difference detection program which, when executed by a processor, implements the steps of the audio difference detection method as described above.
The method implemented when the audio difference detection program is executed may refer to various embodiments of the audio difference detection method of the present invention, and will not be described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. An audio difference detection method, comprising:
receiving an audio comparison instruction, and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
acquiring a first voice oscillogram and a second voice oscillogram respectively corresponding to the standard audio and the comparison audio, and performing overlapping comparison on the first voice oscillogram and the second voice oscillogram based on the reference information;
and determining and outputting the difference part of the first voice oscillogram and the second voice oscillogram and corresponding difference data, and determining the similarity level of the standard audio and the comparison audio according to a preset threshold value.
2. The audio difference detection method according to claim 1, wherein the preset threshold value includes a preset first threshold value and a preset second threshold value,
the step of determining the similarity level of the standard audio and the contrast audio according to a preset threshold comprises the following steps:
judging whether the overlapping rate of the first voice oscillogram and the second voice oscillogram exceeds a preset first threshold value or not;
if the preset first threshold is not exceeded, determining that the similarity level is low similarity;
if the preset first threshold value is exceeded, judging whether the preset second threshold value is exceeded, wherein the preset first threshold value is smaller than the preset second threshold value;
and if the preset second threshold value is not exceeded, determining that the similarity grade is moderate similarity.
3. The audio difference detection method of claim 2, wherein the step of determining whether the predetermined second threshold is exceeded further comprises:
if the preset second threshold is exceeded, performing fast Fourier transform on the standard audio and the comparison audio to generate a first spectrogram and a second spectrogram respectively;
comparing the first spectrogram with the second spectrogram to obtain a characteristic difference, and judging whether the characteristic difference meets a preset spectrogram characteristic condition;
if not, determining that the similarity level is high similarity;
and if so, marking the comparison audio as the dubbing audio of the standard audio.
4. The audio difference detection method of claim 3, wherein the step of determining whether the feature difference between the first spectrogram and the second spectrogram satisfies a predetermined spectrogram feature condition comprises:
judging whether the synchronization rate of the formant edge frequency between the first spectrogram and the second spectrogram reaches a preset third threshold value or not;
if the feature difference reaches a preset third threshold value, judging that the feature difference meets a preset speech spectrum feature condition;
and if the preset third threshold is not reached, judging that the feature difference does not meet the preset speech spectrum feature condition.
5. The audio difference detection method of claim 1, wherein the step of determining and outputting the difference portion of the first and second speech waveform maps and corresponding difference data comprises:
intercepting and displaying a difference part comparison map of the first voice oscillogram and the second voice oscillogram;
and acquiring an amplitude difference value and a time difference value between the first voice oscillogram and the second voice oscillogram, and correspondingly displaying the amplitude difference value and the time difference value in the difference part comparison map, wherein the difference data comprises the amplitude difference value and the time difference value.
6. The audio difference detection method according to claim 5, wherein the step of determining the similarity level of the standard audio and the reference audio according to a preset threshold further comprises:
and intercepting a target audio part corresponding to the difference part comparison map in the standard audio and the comparison audio, and associating the target audio part with the difference part comparison map.
7. The audio difference detection method according to claim 1, wherein before the step of obtaining the first speech waveform diagram and the second speech waveform diagram corresponding to the standard audio and the comparison audio respectively, the method further comprises:
and performing noise reduction processing on the standard audio and the comparison audio.
8. An audio difference detection apparatus, characterized in that the audio difference detection apparatus comprises:
the audio information acquisition module is used for receiving an audio comparison instruction and acquiring standard audio, comparison audio and reference information determined based on the audio comparison instruction;
the voice waveform comparison module is used for acquiring a first voice waveform diagram and a second voice waveform diagram which respectively correspond to the standard audio and the comparison audio, and performing overlapping comparison on the first voice waveform diagram and the second voice waveform diagram based on the reference information;
and the similar grade determining module is used for determining and outputting the difference part of the first voice oscillogram and the second voice oscillogram and corresponding difference data, and determining the similar grade of the standard audio and the comparison audio according to a preset threshold value.
9. An audio difference detection device, characterized in that the audio difference detection device comprises: memory, a processor and an audio difference detection program stored on the memory and executable on the processor, the audio difference detection program when executed by the processor implementing the steps of the audio difference detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon an audio difference detection program which, when executed by a processor, implements the steps of the audio difference detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010405107.XA CN111640445A (en) | 2020-05-13 | 2020-05-13 | Audio difference detection method, device, equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111640445A true CN111640445A (en) | 2020-09-08 |
Family
ID=72332034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010405107.XA Pending CN111640445A (en) | 2020-05-13 | 2020-05-13 | Audio difference detection method, device, equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111640445A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113192494A (en) * | 2021-04-15 | 2021-07-30 | 辽宁石油化工大学 | Intelligent English language identification and output system and method |
CN114429770A (en) * | 2022-04-06 | 2022-05-03 | 北京普太科技有限公司 | Sound data testing method and device of tested equipment |
TWI794059B (en) * | 2022-03-21 | 2023-02-21 | 英業達股份有限公司 | Audio signal processing method and audio signal processing device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107516534A (en) * | 2017-08-31 | 2017-12-26 | 广东小天才科技有限公司 | Voice information comparison method and device and terminal equipment |
CN109065023A (en) * | 2018-08-23 | 2018-12-21 | 广州势必可赢网络科技有限公司 | A kind of voice identification method, device, equipment and computer readable storage medium |
CN109979466A (en) * | 2019-03-21 | 2019-07-05 | 广州国音智能科技有限公司 | A kind of vocal print identity identity identification method, device and computer readable storage medium |
JP2019123604A (en) * | 2018-01-18 | 2019-07-25 | 株式会社Pfu | Double feed detection device, double feed detection method and control program |
CN110164454A (en) * | 2019-05-24 | 2019-08-23 | 广州国音智能科技有限公司 | A kind of audio identity method of discrimination and device based on resonance peak deviation |
CN110827853A (en) * | 2019-11-11 | 2020-02-21 | 广州国音智能科技有限公司 | Voice feature information extraction method, terminal and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111640445A (en) | Audio difference detection method, device, equipment and readable storage medium | |
EP3723080B1 (en) | Music classification method and beat point detection method, storage device and computer device | |
US8027743B1 (en) | Adaptive noise reduction | |
US9942652B2 (en) | Terminal device and information output method | |
CN107833581B (en) | Method, device and readable storage medium for extracting fundamental tone frequency of sound | |
CN107247572B (en) | Audio playing method, terminal and computer readable storage medium | |
CN108319829B (en) | Voiceprint verification method and device | |
CN106161705A (en) | Audio frequency apparatus method of testing and device | |
US9992355B2 (en) | Diagnostic apparatus, diagnostic system, and non-transitory computer readable medium | |
CN111028845A (en) | Multi-audio recognition method, device, equipment and readable storage medium | |
CN105338148A (en) | Method and device for detecting audio signal according to frequency domain energy | |
CN103106061A (en) | Voice input method and device | |
US9377990B2 (en) | Image edited audio data | |
WO2017104146A1 (en) | Diagnostic device, diagnostic system, diagnostic method, and program | |
CN112420049A (en) | Data processing method, device and storage medium | |
CN107452398B (en) | Echo acquisition method, electronic device and computer readable storage medium | |
US10089397B2 (en) | Diagnostic device, diagnostic system, diagnostic method, and non-transitory computer-readable medium | |
CN104851423B (en) | Sound information processing method and device | |
US20150073787A1 (en) | Voice filtering method, apparatus and electronic equipment | |
CN110931019A (en) | Public security voice data acquisition method, device, equipment and computer storage medium | |
CN114627889A (en) | Multi-sound-source sound signal processing method and device, storage medium and electronic equipment | |
CN111640421A (en) | Voice comparison method, device, equipment and computer readable storage medium | |
CN110600031B (en) | Playback control method, playback apparatus, and computer-readable storage medium | |
CN109841232B (en) | Method and device for extracting note position in music signal and storage medium | |
JP6307814B2 (en) | Fundamental visualization device, fundamental visualization method and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200908 | |