CN114417372A - Data file encryption method and storage device based on voice band characteristics


Info

Publication number
CN114417372A
CN114417372A
Authority
CN
China
Prior art keywords
voice
parameters
decryption
data file
random factor
Prior art date
Legal status
Pending
Application number
CN202111625048.8A
Other languages
Chinese (zh)
Inventor
赵立
李仕镇
林振华
翁斌
叶建军
Current Assignee
Gemean Beijing Information Technology Co ltd
Original Assignee
Gemean Beijing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Gemean Beijing Information Technology Co ltd
Priority to CN202111625048.8A
Publication of CN114417372A
Legal status: Pending

Classifications

    • G06F21/602: Providing cryptographic facilities or services
    • G06F16/636: Filtering of audio data based on additional data, e.g. user or group profiles, by using biological or physiological data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F2221/2107: File encryption


Abstract

The present application relates to the field of data processing technologies, and in particular, to a data file encryption method and storage device based on voice band features. The data file encryption method based on voice band characteristics comprises the following steps: acquiring audio key information; processing the audio key information to obtain voice content and a voice characteristic model; calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption; and encrypting the data file by using the random factor. Encrypting the data file in this way keeps the whole process convenient and simple, and because a voiceprint is unique to its speaker, only the person who encrypted the file can decrypt it and view its contents, which greatly improves the security of the file.

Description

Data file encryption method and storage device based on voice band characteristics
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data file encryption method and storage device based on voice band features.
Background
With the arrival of the information explosion era, more and more data files are exchanged between parties, and as people's awareness of data security grows, the confidentiality of files receives increasing attention. Traditional file encryption mostly relies on character passwords, but such schemes struggle to meet the requirements of high security and long-term security. With the rapid development of biology and information science, biometric authentication has become a convenient and advanced information security technology that is widely applied in everyday life.
Therefore, how to apply the biometric authentication technology to the encryption of the document becomes an important research direction.
Disclosure of Invention
In view of the above problems, the present application provides a data file encryption method based on voice band features, aimed at solving the technical problem that existing character-password file encryption offers low security. The specific technical scheme is as follows:
a data file encryption method based on voice band characteristics comprises the following steps:
acquiring audio key information;
processing the audio key information to obtain voice content and a voice characteristic model;
calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption;
and encrypting the data file by using the random factor.
Further, the method also comprises the following steps:
responding to a file decryption instruction, acquiring specific decryption audio information input by a decryptor, processing the decryption audio information to obtain a voiceprint of the decryptor, judging whether the voiceprint of the decryptor is the same as the voiceprint of the encryptor, and if so, calculating parameters of a voice characteristic model to obtain a random factor for decryption;
and decrypting the data file by using the random factor for decryption.
Further, the "processing the audio key information to obtain the voice content and the voice feature model" specifically includes the following steps:
performing voice recognition on the audio key information to obtain voice content, and performing voiceprint recognition on the audio key information to obtain a voiceprint atlas;
extracting target characteristic parameters of the speaker's voice through a neural network method, and establishing a voice characteristic model diagram, wherein the target characteristic parameters comprise one or more of the following parameters: pitch spectrum and its envelope, pitch frame energy, and formant spectrum and trajectory.
Further, the method for calculating the parameters of the voice content and the voice feature model through a preset algorithm to obtain a random factor for encryption specifically comprises the following steps:
converting the recorded sound source into analog signals, amplifying, filtering, A/D converting, pre-emphasizing digital signals, performing framing and windowing operations, performing endpoint detection, and finally performing feature extraction.
Further, the "obtaining a random factor for decryption by calculating parameters of the speech feature model" specifically includes the steps of:
and acquiring parameters of the voice characteristic model, and calculating the parameters of the voice characteristic model to obtain random factors for decryption.
In order to solve the technical problem, the storage device is further provided, and the specific technical scheme is as follows:
a storage device having stored therein a set of instructions for performing:
acquiring audio key information;
processing the audio key information to obtain voice content and a voice characteristic model;
calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption;
and encrypting the data file by using the random factor.
Further, the set of instructions is further for performing:
responding to a file decryption instruction, acquiring specific decryption audio information input by a decryptor, processing the decryption audio information to obtain a voiceprint of the decryptor, judging whether the voiceprint of the decryptor is the same as the voiceprint of the encryptor, and if so, calculating parameters of a voice characteristic model to obtain a random factor for decryption;
and decrypting the data file by using the random factor for decryption.
Further, the set of instructions is further for performing:
the step of processing the audio key information to obtain the voice content and the voice feature model specifically comprises the following steps:
performing voice recognition on the audio key information to obtain voice content, and performing voiceprint recognition on the audio key information to obtain a voiceprint atlas;
extracting target characteristic parameters of the speaker's voice through a neural network method, and establishing a voice characteristic model diagram, wherein the target characteristic parameters comprise one or more of the following parameters: pitch spectrum and its envelope, pitch frame energy, and formant spectrum and trajectory.
Further, the set of instructions is further for performing:
the method comprises the following steps of calculating parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption, and specifically comprises the following steps:
converting the recorded sound source into analog signals, amplifying, filtering, A/D converting, pre-emphasizing digital signals, performing framing and windowing operations, performing endpoint detection, and finally performing feature extraction.
Further, the set of instructions is further for performing:
the "obtaining a random factor for decryption by calculating parameters of the speech feature model" specifically includes the following steps:
and acquiring parameters of the voice characteristic model, and calculating the parameters of the voice characteristic model to obtain random factors for decryption.
The invention has the following beneficial effects. The data file encryption method based on voice band characteristics comprises the following steps: acquiring audio key information; processing the audio key information to obtain voice content and a voice characteristic model; calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption; and encrypting the data file by using the random factor. Encrypting the data file in this way keeps the whole process convenient and simple, and because a voiceprint is unique to its speaker, only the person who encrypted the file can decrypt it and view its contents, which greatly improves the security of the file.
The above description is only an overview of the technical solutions of the present application. So that the technical solutions can be understood more clearly by those skilled in the art and implemented according to the contents of the text and drawings of the present application, and so that the above and other objects, features, and advantages of the present application can be more easily understood, the following detailed description is given in conjunction with the drawings.
Drawings
The drawings are only for purposes of illustrating the principles, implementations, applications, features, and effects of particular embodiments of the present application, as well as others related thereto, and are not to be construed as limiting the application.
In the drawings of the specification:
FIG. 1 is a first flowchart of the data file encryption method based on voice band characteristics according to an embodiment;
FIG. 2 is a second flowchart of the data file encryption method based on voice band characteristics according to an embodiment;
FIG. 3 is a third flowchart of the data file encryption method based on voice band characteristics according to an embodiment;
FIG. 4 is a fourth flowchart of the data file encryption method based on voice band characteristics according to an embodiment;
FIG. 5 is a block diagram of a storage device according to an embodiment.
The reference numerals referred to in the above figures are explained below:
500. a storage device.
Detailed Description
In order to explain in detail possible application scenarios, technical principles, practical embodiments, and the like of the present application, the following detailed description is given with reference to the accompanying drawings in conjunction with the listed embodiments. The embodiments described herein are merely for more clearly illustrating the technical solutions of the present application, and therefore, the embodiments are only used as examples, and the scope of the present application is not limited thereby.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase "an embodiment" in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive of other embodiments. In principle, the technical features mentioned in the embodiments of the present application can be combined in any manner to form a corresponding implementable technical solution, as long as there is no technical contradiction or conflict.
Unless defined otherwise, technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the use of relational terms herein is intended only to describe particular embodiments and is not intended to limit the present application.
In the description of the present application, the term "and/or" describes a logical relationship between objects and means that three relationships may exist; for example, "A and/or B" covers three cases: A alone, B alone, and both A and B. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
In this application, terms such as "first" and "second" are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
In this application, unless otherwise limited, the terms "including," "comprising," "having," and similar expressions are intended to cover a non-exclusive inclusion: a process, method, or article that includes a list of elements is not limited to those elements and may also include other elements not expressly listed or inherent to such process, method, or article.
Consistent with the examination guidelines, the terms "greater than," "less than," "more than," and the like in this application are understood to exclude the stated number, while "above," "below," "within," and the like are understood to include it. In addition, in the description of the embodiments of the present application, unless specifically defined otherwise, "a plurality" means two or more (including two), and similar expressions such as "a plurality of groups" or "a plurality of times" are understood in the same way.
As mentioned above, how to apply the biometric authentication technology to the encryption of the document becomes an important research direction.
In biometric authentication, voiceprint recognition is an important branch that has attracted attention for its convenience, economy, and accuracy, and it is applied in many fields such as electronic commerce and judicial expertise. Voiceprint recognition, that is, speaker recognition, identifies who is speaking from parameters in the speech that reflect the speaker's individual characteristics; it does not attend to the semantic content of the speech but to the identity of the person who uttered it. In this embodiment, therefore, a data file is encrypted by means of speech, which involves several technical areas including voiceprint technology, cryptography, speech feature recognition, and language recognition. Specific embodiments are described below:
As shown in FIG. 1, a data file encryption method based on voice band characteristics is applicable to a storage device, including but not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, and the like. The method specifically comprises the following steps:
step S101: audio key information is obtained. The method specifically comprises the following steps: when an encryptor is to encrypt a file, audio key information is voice-input through an audio device.
Step S102: and processing the audio key information to obtain voice content and a voice characteristic model.
Step S103: and calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption.
Step S104: and encrypting the data file by using the random factor.
In summary, the data file encryption method based on voice band characteristics comprises the following steps: acquiring audio key information; processing the audio key information to obtain voice content and a voice characteristic model; calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption; and encrypting the data file by using the random factor. Encrypting the data file in this way keeps the whole process convenient and simple, and because a voiceprint is unique to its speaker, only the person who encrypted the file can decrypt it and view its contents, which greatly improves the security of the file.
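For illustration only, a minimal Python sketch of steps S101 to S104 is given below. It assumes the random factor is obtained by hashing the recognized voice content together with coarsely quantized feature-model parameters, and it uses the third-party cryptography package's Fernet cipher as a stand-in, since the patent does not name a cipher; the helper names derive_random_factor and encrypt_file are illustrative, not part of the patent.

```python
import base64
import hashlib
from cryptography.fernet import Fernet  # third-party symmetric cipher, used as a stand-in

def derive_random_factor(voice_content: str, feature_params: list) -> bytes:
    """Hash the recognized voice content plus quantized feature parameters into 32 bytes."""
    quantized = ",".join(f"{p:.2f}" for p in feature_params)  # coarse quantization for repeatability
    return hashlib.sha256(f"{voice_content}|{quantized}".encode("utf-8")).digest()

def encrypt_file(path: str, voice_content: str, feature_params: list) -> str:
    """Encrypt the file at `path` with a key derived from the voice-based random factor."""
    factor = derive_random_factor(voice_content, feature_params)
    key = base64.urlsafe_b64encode(factor)          # Fernet expects a urlsafe-base64 32-byte key
    with open(path, "rb") as f:
        token = Fernet(key).encrypt(f.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as f:
        f.write(token)
    return out_path
```

Under these assumptions, decryption would regenerate the same factor from a fresh recording (after the voiceprint check described next) and call Fernet(key).decrypt(token).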
Referring to fig. 2, a process of decrypting a data file is described, which further includes the following steps:
step S201: and responding to the file decryption instruction to acquire the specific decryption audio information input by the decryptor.
Step S202: and processing the decrypted audio information to obtain the voiceprint of the decrypter, and judging whether the voiceprint of the decrypter is the same as the voiceprint of the encrypter or not.
Step S203: if the parameters are the same, calculating the parameters of the voice characteristic model to obtain a random factor for decryption.
Step S204: and decrypting the data file by using the random factor for decryption.
The specific implementation process of the steps can be as follows:
When a user decrypts a file, the system prompts the user with a word combination randomly drawn from a text base, and the user speaks the decryption key audio information through an audio device, feeding back the prompted content. The voiceprint of the decryption audio is then compared with the enrolled voiceprint using tools such as a perceptual linear prediction (PLP) coefficient algorithm and ASV-Subtools. After the comparison passes verification, the relevant parameters of the voice characteristic model are obtained and calculated, and the result is used in the file decryption operation to recover the original file.
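A hedged sketch of this verification flow follows; the enrolled and test voiceprint vectors are assumed to come from a front end such as PLP features or an ASV-Subtools embedding extractor (not shown), and the word list and similarity threshold are purely illustrative.

```python
import random
import numpy as np

PROMPT_WORDS = ["north", "river", "seven", "maple", "stone", "orange"]  # illustrative text base

def random_prompt(n_words: int = 3) -> str:
    """Draw a random word combination to prompt the decryptor."""
    return " ".join(random.sample(PROMPT_WORDS, n_words))

def same_speaker(enrolled: np.ndarray, test: np.ndarray, threshold: float = 0.75) -> bool:
    """Cosine-similarity check between the enrolled and the test voiceprint vectors."""
    cos = float(np.dot(enrolled, test) / (np.linalg.norm(enrolled) * np.linalg.norm(test)))
    return cos >= threshold
```

Only when same_speaker(...) returns True are the stored voice characteristic model parameters recalculated into the decryption random factor.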
The following describes the step "processing the audio key information to obtain the speech content and the speech feature model" specifically with reference to fig. 3:
the method specifically comprises the following steps:
step S301: and carrying out voice recognition on the audio key information to obtain voice content, and carrying out voiceprint recognition on the audio key information to obtain a voice voiceprint atlas. Wherein the existing algorithms can be directly used by both the speech recognition algorithm and the voiceprint recognition algorithm.
Step S302: extract target characteristic parameters of the speaker's voice through a neural network method and establish a voice characteristic model diagram, wherein the target characteristic parameters comprise one or more of the following parameters: pitch spectrum and its envelope, pitch frame energy, and formant spectrum and trajectory. The specific extraction process is as follows:
Starting from the logFBank features, the discrete cosine transform is omitted and the hand-crafted Mel filter bank is removed; after pre-emphasis, the logarithm of the time-frequency magnitude spectrum obtained by taking the modulus of the fast Fourier transform is used directly as the representation of the voice features.
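As an illustration of this kind of feature (a log magnitude spectrogram with no Mel filter bank and no DCT), the sketch below frames and windows a pre-emphasized signal and takes the logarithm of the FFT magnitude; the frame length, hop size, and Hamming window are assumptions, not values from the patent.

```python
import numpy as np

def log_magnitude_spectrum(signal: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Return a (num_frames, frame_len // 2 + 1) matrix of log-magnitude spectra."""
    window = np.hamming(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        magnitude = np.abs(np.fft.rfft(frame))       # modulus after the FFT
        frames.append(np.log(magnitude + 1e-10))     # small epsilon avoids log(0)
    return np.stack(frames)
```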
In this embodiment, the voice characteristic model is preferably a Gaussian mixture model (GMM), one of the most commonly used models for text-independent speaker recognition. In a speaker recognition system, summarizing the voice features well and matching the test speech against the training speech are complicated and difficult problems; the GMM converts them into problems of model fitting and probability computation and solves them effectively, which is why the Gaussian mixture model is preferred.
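A minimal sketch of such a speaker GMM is shown below, using scikit-learn and assuming features is a (num_frames, num_dims) array of per-frame speech features; the number of mixture components is an illustrative choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_gmm(features: np.ndarray, n_components: int = 16) -> GaussianMixture:
    """Fit a diagonal-covariance GMM to the enrolling speaker's feature frames."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", max_iter=200)
    gmm.fit(features)
    return gmm

def average_log_likelihood(gmm: GaussianMixture, test_features: np.ndarray) -> float:
    """Higher values mean the test speech matches this speaker's model better."""
    return float(gmm.score(test_features))
```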
In this embodiment, the "calculating the parameters of the speech content and the speech feature model through a preset algorithm to obtain a random factor for encryption" specifically includes the following steps:
converting the recorded sound source into analog signals, amplifying, filtering, A/D converting, pre-emphasizing digital signals, performing framing and windowing operations, performing endpoint detection, and finally performing feature extraction.
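The front end after A/D conversion can be sketched as follows; the pre-emphasis coefficient, frame sizes, and the simple short-time-energy endpoint rule are assumptions made for illustration.

```python
import numpy as np

def preemphasis(x: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    """y[n] = x[n] - alpha * x[n-1], boosting the high-frequency part of the spectrum."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def detect_endpoints(x: np.ndarray, frame_len: int = 400, hop: int = 160,
                     energy_ratio: float = 0.1) -> list:
    """Return (start, end) sample spans of frames whose short-time energy exceeds a
    fraction of the maximum frame energy; the remaining frames are treated as silence."""
    window = np.hamming(frame_len)
    spans, energies = [], []
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * window
        energies.append(float(np.sum(frame ** 2)))
        spans.append((start, start + frame_len))
    threshold = energy_ratio * max(energies)
    return [span for span, e in zip(spans, energies) if e >= threshold]
```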
In this embodiment, the specific voiceprint comparison can be carried out as follows:
Linear prediction and its cepstrum coefficients are used: the prediction coefficients of a given order are computed with a function of the form a = lpc(x, P), and the cepstrum is obtained by the complex cepstrum recursion. The specific calculation process and results are as follows:
The acoustic features can be extracted in the following seven representative ways: amplitude (or power), zero-crossing rate, adjacent-band feature vector, linear prediction coefficient feature vector (LPC), LPC cepstrum feature vector (LPCC), Mel cepstrum parameters (MFCC), and functions of the first three formants F1, F2, F3 and of the LPC coefficients. In the function a = lpc(x, P), x is one frame of the speech signal and P is the order of the LPC parameters to be calculated; usually x contains 240 or 256 data points and P is 10 to 12.
Because the vocal tract model's system function H(z) reflects the frequency response of the vocal tract and the spectral envelope of the original signal, the complex cepstrum coefficients can be obtained by taking lg H(z) and performing the inverse z-transform. If the LPC cepstrum coefficients indicate that the vocal tract characteristics are similar, the corresponding decryption procedure can be started.
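The LPC and LPC-cepstrum computation referred to above (written there as a = lpc(x, P), in MATLAB style) can be sketched in Python as below. It uses the autocorrelation (Levinson-Durbin) method and the standard cepstrum recursion under the convention that the predictor is x[n] ≈ a_1*x[n-1] + ... + a_P*x[n-P], which differs in sign layout from MATLAB's error-filter output; this is a sketch, not the patent's prescribed algorithm.

```python
import numpy as np

def lpc_coefficients(x: np.ndarray, order: int) -> np.ndarray:
    """Levinson-Durbin solution for predictor coefficients a[1..order]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]  # autocorrelation r[0..order]
    a = np.zeros(order)
    err = r[0] + 1e-12                                   # tiny offset guards against division by zero
    for i in range(order):
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err  # reflection coefficient
        new_a = a.copy()
        new_a[i] = k
        new_a[:i] = a[:i] - k * a[:i][::-1]
        a = new_a
        err *= (1.0 - k * k)
    return a

def lpc_cepstrum(a: np.ndarray, n_ceps: int) -> np.ndarray:
    """Convert LPC coefficients to LPC cepstral coefficients by the usual recursion."""
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        value = a[n - 1] if n <= len(a) else 0.0
        for k in range(1, n):
            if n - k <= len(a):
                value += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = value
    return c
```

For a 240- or 256-point frame x and an order P of 10 to 12, lpc_cepstrum(lpc_coefficients(x, P), P) yields the LPCC vector that the comparison above operates on.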
With reference to fig. 4, the following steps of obtaining a random factor for decryption in the decryption process are described, and the method specifically includes the following steps:
step S401: and acquiring parameters of the voice characteristic model. Mainly by prior art. In this embodiment, the training may use a voiceprint recognition apparatus including a voice training module and a voice matching module based on certain voiceprint features (e.g. MFCC, LPCC, etc.) and a voiceprint recognition algorithm (e.g. GMM, DTW, etc.), wherein each abbreviation in english means as follows: mel Frequency Cepstrum Coefficient (MFCC), Linear Prediction Cepstrum Coefficient (LPCC), Gaussian Mixture Model (GMM), and modified Dynamic Time Warping (DTW).
Step S402: calculate the parameters of the voice characteristic model to obtain the random factor for decryption. The calculation must correspond to the calculation used during encryption; it is otherwise not particularly limited, and various algorithms can be set flexibly.
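Because the patent leaves this calculation open, the following is only one possible way (an assumption) to make the decryption factor reproducible: quantize the stored model parameters so that the same model always maps to the same bytes, then hash them, mirroring the derivation used at encryption time.

```python
import hashlib
import numpy as np

def factor_from_model(means: np.ndarray, weights: np.ndarray, decimals: int = 2) -> bytes:
    """Round the model parameters (assumed float64 arrays) and hash them into the random factor."""
    stable = np.round(np.concatenate([means.ravel(), weights.ravel()]), decimals).astype(np.float64)
    return hashlib.sha256(stable.tobytes()).digest()
```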
An embodiment of a storage device 500 is described below in conjunction with fig. 5:
a storage device 500 having stored therein a set of instructions for performing:
acquiring audio key information;
processing the audio key information to obtain voice content and a voice characteristic model;
calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption;
and encrypting the data file by using the random factor.
Encrypting the data file through the storage device 500 keeps the whole process convenient and simple, and because a voiceprint is unique to its speaker, only the person who encrypted the file can decrypt it and view its contents, which greatly improves the security of the file.
Further, the set of instructions is further for performing:
responding to a file decryption instruction, acquiring specific decryption audio information input by a decryptor, processing the decryption audio information to obtain a voiceprint of the decryptor, judging whether the voiceprint of the decryptor is the same as the voiceprint of the encryptor, and if so, calculating parameters of a voice characteristic model to obtain a random factor for decryption; and decrypting the data file by using the random factor for decryption.
The specific implementation process of the steps can be as follows:
When a user decrypts a file, the system prompts the user with a word combination randomly drawn from a text base, and the user speaks the decryption key audio information through an audio device, feeding back the prompted content. The voiceprint of the decryption audio is then compared with the enrolled voiceprint using tools such as a perceptual linear prediction (PLP) coefficient algorithm and ASV-Subtools. After the comparison passes verification, the relevant parameters of the voice characteristic model are obtained and calculated, and the result is used in the file decryption operation to recover the original file.
Further, the set of instructions is further for performing:
the step of processing the audio key information to obtain the voice content and the voice feature model specifically comprises the following steps:
performing voice recognition on the audio key information to obtain voice content, and performing voiceprint recognition on the audio key information to obtain a voiceprint atlas;
extracting target characteristic parameters of the speaker's voice through a neural network method, and establishing a voice characteristic model diagram, wherein the target characteristic parameters comprise one or more of the following parameters: pitch spectrum and its envelope, pitch frame energy, and formant spectrum and trajectory. The specific extraction process is as follows:
Starting from the logFBank features, the discrete cosine transform is omitted and the hand-crafted Mel filter bank is removed; after pre-emphasis, the logarithm of the time-frequency magnitude spectrum obtained by taking the modulus of the fast Fourier transform is used directly as the representation of the voice features.
In this embodiment, the voice characteristic model is preferably a Gaussian mixture model (GMM), one of the most commonly used models for text-independent speaker recognition. In a speaker recognition system, summarizing the voice features well and matching the test speech against the training speech are complicated and difficult problems; the GMM converts them into problems of model fitting and probability computation and solves them effectively, which is why the Gaussian mixture model is preferred.
Further, the set of instructions is further for performing:
the method comprises the following steps of calculating parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption, and specifically comprises the following steps:
converting the recorded sound source into analog signals, amplifying, filtering, A/D converting, pre-emphasizing digital signals, performing framing and windowing operations, performing endpoint detection, and finally performing feature extraction.
In this embodiment, the specific voiceprint comparison can be carried out as follows:
Linear prediction and its cepstrum coefficients are used: the prediction coefficients of a given order are computed with a function of the form a = lpc(x, P), and the cepstrum is obtained by the complex cepstrum recursion. The specific calculation process and results are as follows:
The acoustic features can be extracted in the following seven representative ways: amplitude (or power), zero-crossing rate, adjacent-band feature vector, linear prediction coefficient feature vector (LPC), LPC cepstrum feature vector (LPCC), Mel cepstrum parameters (MFCC), and functions of the first three formants F1, F2, F3 and of the LPC coefficients. In the function a = lpc(x, P), x is one frame of the speech signal and P is the order of the LPC parameters to be calculated; usually x contains 240 or 256 data points and P is 10 to 12.
Because the vocal tract model's system function H(z) reflects the frequency response of the vocal tract and the spectral envelope of the original signal, the complex cepstrum coefficients can be obtained by taking lg H(z) and performing the inverse z-transform. If the LPC cepstrum coefficients indicate that the vocal tract characteristics are similar, the corresponding decryption procedure can be started.
Further, the set of instructions is further for performing:
the "obtaining a random factor for decryption by calculating parameters of the speech feature model" specifically includes the following steps:
and acquiring parameters of the voice characteristic model. Mainly by prior art.
And calculating the parameters of the voice characteristic model to obtain a random factor for decryption. The calculation method corresponds to the calculation method in encryption, and is not particularly limited, and various algorithms can be flexibly set by themselves.
Finally, it should be noted that, although the above embodiments have been described in the text and drawings of the present application, the scope of the patent protection of the present application is not limited thereby. All technical solutions which are generated by replacing or modifying the equivalent structure or the equivalent flow according to the contents described in the text and the drawings of the present application, and which are directly or indirectly implemented in other related technical fields, are included in the scope of protection of the present application.

Claims (10)

1. A data file encryption method based on voice band characteristics is characterized by comprising the following steps:
acquiring audio key information;
processing the audio key information to obtain voice content and a voice characteristic model;
calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption;
and encrypting the data file by using the random factor.
2. The method for encrypting the data file based on the voice band characteristics as claimed in claim 1, further comprising the steps of:
responding to the file decryption instruction, and acquiring specific decryption audio information input by a decryptor;
processing the decryption audio information to obtain the decryptor's voiceprint, and judging whether the decryptor's voiceprint is the same as the encryptor's voiceprint;
if they are the same, calculating the parameters of the voice characteristic model to obtain a random factor for decryption;
and decrypting the data file by using the random factor for decryption.
3. The method for encrypting the data file based on the voice band feature of claim 1, wherein the step of processing the audio key information to obtain the voice content and the voice feature model comprises the steps of:
performing voice recognition on the audio key information to obtain voice content, and performing voiceprint recognition on the audio key information to obtain a voiceprint atlas;
extracting target characteristic parameters of the voice of the speaker through a neural network method, and establishing a voice characteristic model diagram, wherein the target characteristic parameters comprise one or more of the following parameters: pitch spectrum, envelope, pitch frame energy, and formant spectrum.
4. The data file encryption method based on the voice band feature as claimed in claim 1, wherein the step of calculating the parameters of the voice content and the voice feature model through a preset algorithm to obtain a random factor for encryption includes the following steps:
converting the recorded sound source into analog signals, amplifying, filtering, A/D converting, pre-emphasizing digital signals, performing framing and windowing operations, performing endpoint detection, and finally performing feature extraction.
5. The method for encrypting the data file based on the voice band feature of claim 2, wherein the step of calculating the parameters of the voice feature model to obtain the random factor for decryption further comprises the steps of:
and acquiring parameters of the voice characteristic model, and calculating the parameters of the voice characteristic model to obtain random factors for decryption.
6. A storage device having a set of instructions stored therein, the set of instructions being operable to perform:
acquiring audio key information;
processing the audio key information to obtain voice content and a voice characteristic model;
calculating the parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption;
and encrypting the data file by using the random factor.
7. The storage device of claim 6, wherein the set of instructions is further configured to perform:
responding to a file decryption instruction, acquiring specific decryption audio information input by a decryptor, processing the decryption audio information to obtain a voiceprint of the decryptor, judging whether the voiceprint of the decryptor is the same as the voiceprint of the encryptor, and if so, calculating parameters of a voice characteristic model to obtain a random factor for decryption;
and decrypting the data file by using the random factor for decryption.
8. The storage device of claim 6, wherein the set of instructions is further configured to perform:
the step of processing the audio key information to obtain the voice content and the voice feature model specifically comprises the following steps:
performing voice recognition on the audio key information to obtain voice content, and performing voiceprint recognition on the audio key information to obtain a voiceprint atlas;
extracting target characteristic parameters of the voice of the speaker through a neural network method, and establishing a voice characteristic model diagram, wherein the target characteristic parameters comprise one or more of the following parameters: pitch spectrum, envelope, pitch frame energy, and formant spectrum.
9. The storage device of claim 6, wherein the set of instructions is further configured to perform:
the method comprises the following steps of calculating parameters of the voice content and the voice characteristic model through a preset algorithm to obtain a random factor for encryption, and specifically comprises the following steps:
converting the recorded sound source into analog signals, amplifying, filtering, A/D converting, pre-emphasizing digital signals, performing framing and windowing operations, performing endpoint detection, and finally performing feature extraction.
10. The storage device of claim 7, wherein the set of instructions is further configured to perform:
the "obtaining a random factor for decryption by calculating parameters of the speech feature model" specifically includes the following steps:
and acquiring parameters of the voice characteristic model, and calculating the parameters of the voice characteristic model to obtain random factors for decryption.
CN202111625048.8A 2021-12-28 2021-12-28 Data file encryption method and storage device based on voice band characteristics Pending CN114417372A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111625048.8A CN114417372A (en) 2021-12-28 2021-12-28 Data file encryption method and storage device based on voice band characteristics


Publications (1)

Publication Number Publication Date
CN114417372A true CN114417372A (en) 2022-04-29

Family

ID=81269283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111625048.8A Pending CN114417372A (en) 2021-12-28 2021-12-28 Data file encryption method and storage device based on voice band characteristics

Country Status (1)

Country Link
CN (1) CN114417372A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937441A (en) * 2022-11-08 2023-04-07 泰瑞数创科技(北京)股份有限公司 Three-dimensional collaborative plotting method and system under low-bandwidth environment
CN115937441B (en) * 2022-11-08 2023-09-05 泰瑞数创科技(北京)股份有限公司 Three-dimensional collaborative plotting method and system in low-bandwidth environment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination