CN114598516B - Information encryption and information decryption methods, devices, equipment and storage medium - Google Patents

Information encryption and information decryption methods, devices, equipment and storage medium

Info

Publication number
CN114598516B
CN114598516B
Authority
CN
China
Prior art keywords
information
voiceprint
encryption
voiceprint feature
decryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210185272.8A
Other languages
Chinese (zh)
Other versions
CN114598516A (en)
Inventor
崔伟才
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wutong Chelian Technology Co Ltd
Original Assignee
Beijing Wutong Chelian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wutong Chelian Technology Co Ltd filed Critical Beijing Wutong Chelian Technology Co Ltd
Priority to CN202210185272.8A priority Critical patent/CN114598516B/en
Publication of CN114598516A publication Critical patent/CN114598516A/en
Application granted granted Critical
Publication of CN114598516B publication Critical patent/CN114598516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/24 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals
    • G10L25/87 Detection of discrete points within a voice signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861 Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L9/0866 Generation of secret information including derivation or calculation of cryptographic keys or passwords involving user or device identifiers, e.g. serial number, physical or biometrical information, DNA, hand-signature or measurable physical characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2463/00 Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00
    • H04L2463/062 Additional details relating to network architectures or network communication protocols for network security covered by H04L63/00 applying encryption of the keys

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Storage Device Security (AREA)

Abstract

The application discloses information encryption and information decryption methods, devices, equipment and storage media, and belongs to the technical field of computers. The method comprises the following steps: acquiring plaintext information to be encrypted, and determining an encryption object corresponding to the plaintext information to be encrypted; in response to there being a plurality of encryption objects, acquiring a plurality of first voiceprint features from the plurality of encryption objects, where any encryption object in the plurality of encryption objects corresponds to one first voiceprint feature; determining a first mixed voiceprint feature based on the plurality of first voiceprint features; and encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information. By splicing the first voiceprint features of the plurality of encryption objects, a first mixed voiceprint feature carrying the voiceprint characteristics of the plurality of encryption objects is obtained. Therefore, information encryption based on a plurality of encryption objects can be completed with a single first mixed voiceprint feature, improving encryption efficiency while ensuring high confidentiality.

Description

Information encryption and information decryption methods, devices, equipment and storage medium
Technical Field
The embodiments of the application relate to the technical field of computers, and in particular to information encryption and information decryption methods, devices, equipment, and a storage medium.
Background
With the development of computer technology, information encryption modes have gradually diversified. In addition to encrypting information using a traditional password as the key, information owned by an encryption object may be encrypted based on a biometric feature of that encryption object.
Disclosure of Invention
The embodiments of the application provide an information encryption and information decryption method, device, equipment and storage medium, which can be used to solve problems in the related art. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides an information encryption method, where the method includes:
Acquiring plaintext information to be encrypted, and determining an encryption object corresponding to the plaintext information to be encrypted;
in response to there being a plurality of encryption objects, acquiring a plurality of first voiceprint features from the plurality of encryption objects, wherein any encryption object in the plurality of encryption objects corresponds to one first voiceprint feature;
Determining a first hybrid voiceprint feature based on the plurality of first voiceprint features;
encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
In one possible implementation, the determining a first hybrid voiceprint feature based on the plurality of first voiceprint features includes:
determining a first segmentation order;
segmenting each first voiceprint feature of the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any one of the plurality of first voiceprint features corresponds to one first segmentation result;
and splicing the plurality of first segmentation results to obtain the first hybrid voiceprint feature.
In a possible implementation manner, the splicing the plurality of first segmentation results to obtain the first hybrid voiceprint feature includes:
splicing the plurality of first segmentation results according to the first segmentation order to obtain the first hybrid voiceprint feature.
In one possible implementation manner, the encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information includes:
inputting the plaintext information and the first mixed voiceprint feature to an encryption model, encrypting the plaintext information based on the encryption model, and outputting the ciphertext information.
In one possible implementation, the obtaining a plurality of first voiceprint features from a plurality of encrypted objects includes:
Acquiring a plurality of first audio data from the plurality of encrypted objects, wherein any one of the plurality of encrypted objects corresponds to at least one first audio data;
and extracting voiceprint features of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint features.
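The summary above leaves the encryption model itself unspecified. As a minimal sketch of the overall idea (not the application's actual encryption model), the following derives a symmetric key from the first mixed voiceprint feature with SHA-256 and applies a simple XOR keystream; the function name `encrypt_with_voiceprint` and the feature values are illustrative assumptions.

```python
import hashlib

def encrypt_with_voiceprint(plaintext: bytes, mixed_feature: list) -> bytes:
    """XOR the plaintext with a keystream derived from the mixed voiceprint
    feature (an illustrative stand-in for the encryption model)."""
    # Serialize the feature vector deterministically and hash it into a key seed.
    seed = hashlib.sha256(repr([round(x, 6) for x in mixed_feature]).encode()).digest()
    # Expand the seed into a keystream at least as long as the plaintext.
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

msg = b"conference video"
feature = [0.12, -3.4, 7.7, 0.05]   # hypothetical mixed voiceprint feature
ct = encrypt_with_voiceprint(msg, feature)
```

Because XOR with the same keystream is its own inverse, running the function again with the same mixed voiceprint feature recovers the plaintext, mirroring the symmetric encryption/decryption pair the disclosure describes.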
In another aspect, an embodiment of the present application provides an information decryption method, where the method includes:
Obtaining ciphertext information to be decrypted, and determining a decryption object corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature derived from first voiceprint features of a plurality of encryption objects;
in response to there being a plurality of decryption objects, acquiring a plurality of second voiceprint features from the plurality of decryption objects, wherein any one of the plurality of decryption objects corresponds to one second voiceprint feature;
Determining a second hybrid voiceprint feature based on the plurality of second voiceprint features;
And decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
In one possible implementation, the determining a second hybrid voiceprint feature based on the plurality of second voiceprint features includes:
Determining a second segmentation order;
segmenting each second voiceprint feature of the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein any one of the plurality of second voiceprint features corresponds to one second segmentation result;
and splicing the plurality of second segmentation results to obtain the second hybrid voiceprint feature.
In a possible implementation manner, the splicing the plurality of second segmentation results to obtain the second hybrid voiceprint feature includes:
and splicing the plurality of second segmentation results according to the second segmentation order to obtain the second mixed voiceprint feature.
In one possible implementation manner, the decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information includes:
in response to the ciphertext information to be decrypted being obtained based on an encryption model, determining a decryption model corresponding to the encryption model;
and inputting the ciphertext information and the second mixed voiceprint feature to the decryption model, decrypting the ciphertext information based on the decryption model, and outputting the plaintext information.
In one possible implementation, the acquiring the plurality of second voiceprint features from the plurality of decrypted objects includes:
acquiring a plurality of second audio data from the plurality of decryption objects, wherein any one of the plurality of decryption objects corresponds to at least one second audio data;
And extracting voiceprint features of each second audio data in the plurality of second audio data to obtain the plurality of second voiceprint features.
In another aspect, there is provided an information encryption apparatus, the apparatus including:
the acquisition module is used for acquiring plaintext information to be encrypted and determining an encryption object corresponding to the plaintext information to be encrypted;
The acquisition module is further used for acquiring a plurality of first voiceprint features from a plurality of encryption objects in response to there being a plurality of encryption objects, wherein any encryption object in the plurality of encryption objects corresponds to one first voiceprint feature;
a determining module for determining a first hybrid voiceprint feature based on the plurality of first voiceprint features;
and the encryption module is used for encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
In a possible implementation manner, the determining module is configured to determine a first segmentation order; segment each first voiceprint feature of the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any one of the plurality of first voiceprint features corresponds to one first segmentation result; and splice the plurality of first segmentation results to obtain the first mixed voiceprint feature.
In a possible implementation manner, the determining module is configured to splice the plurality of first segmentation results according to the first segmentation order, so as to obtain the first mixed voiceprint feature.
In one possible implementation manner, the encryption module is configured to input the plaintext information and the first hybrid voiceprint feature into an encryption model, encrypt the plaintext information based on the encryption model, and output the ciphertext information.
In a possible implementation manner, the acquiring module is configured to acquire a plurality of first audio data from the plurality of encrypted objects, where any one of the plurality of encrypted objects corresponds to at least one first audio data; and extracting voiceprint features of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint features.
In another aspect, there is provided an information decryption apparatus, the apparatus including:
The acquisition module is used for obtaining ciphertext information to be decrypted and determining a decryption object corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature derived from first voiceprint features of a plurality of encryption objects;
the acquisition module is further configured to obtain a plurality of second voiceprint features from a plurality of decryption objects in response to there being a plurality of decryption objects, wherein any one of the plurality of decryption objects corresponds to one second voiceprint feature;
a determining module for determining a second hybrid voiceprint feature based on the plurality of second voiceprint features;
And the decryption module is used for decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
In one possible implementation manner, the determining module is configured to determine a second segmentation order; segment each second voiceprint feature of the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein any one of the plurality of second voiceprint features corresponds to one second segmentation result; and splice the plurality of second segmentation results to obtain the second mixed voiceprint feature.
In a possible implementation manner, the determining module is configured to splice the plurality of second segmentation results according to the second segmentation order, so as to obtain the second hybrid voiceprint feature.
In one possible implementation manner, the decryption module is configured to determine a decryption model corresponding to an encryption model in response to the ciphertext information to be decrypted being obtained based on that encryption model; and input the ciphertext information and the second mixed voiceprint feature to the decryption model, decrypt the ciphertext information based on the decryption model, and output the plaintext information.
In a possible implementation manner, the acquiring module is configured to acquire a plurality of second audio data from the plurality of decryption objects, where any one of the plurality of decryption objects corresponds to at least one second audio data; and extracting voiceprint features of each second audio data in the plurality of second audio data to obtain the plurality of second voiceprint features.
In another aspect, there is provided a computer device, the computer device including a processor and a memory, the memory storing at least one computer program, the at least one computer program being loaded and executed by the processor, to cause the computer device to implement any one of the above-described information encryption methods, or implement any one of the above-described information decryption methods.
In another aspect, there is provided a computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to cause a computer to implement any one of the information encryption methods described above, or to implement any one of the information decryption methods described above.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs any one of the above-described information encryption methods, or implements any one of the above-described information decryption methods.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
By splicing the first voiceprint features of the plurality of encryption objects, a first mixed voiceprint feature carrying the voiceprint characteristics of the plurality of encryption objects is obtained. Since information encryption based on a plurality of encryption objects can be completed through this single first mixed voiceprint feature, encryption efficiency is improved while high confidentiality is ensured.
When the ciphertext information is decrypted, because the acquired second mixed voiceprint feature carries the voiceprint characteristics of a plurality of decryption objects, information decryption for the plurality of decryption objects can be realized based on one second mixed voiceprint feature, improving decryption efficiency without changing decryption difficulty.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of an information encryption method according to an embodiment of the present application;
FIG. 3 is a flowchart of an information decryption method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an information encryption device according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an information decryption device according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a network device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The embodiment of the application provides an information encryption and information decryption method, please refer to fig. 1, which shows a schematic diagram of an implementation environment of the method provided by the embodiment of the application. The implementation environment may include: a terminal 11 and a server 12.
The terminal 11 and the server 12 may independently implement the information encryption and information decryption methods provided in the embodiments of the present application. The terminal 11 and the server 12 can also implement the information encryption and information decryption methods provided by the embodiments of the present application through interaction. For example, the terminal 11 is installed with an application program capable of acquiring plaintext information to be encrypted, and when the application program acquires the plaintext information to be encrypted, the acquired plaintext information to be encrypted may be sent to the server 12, and the server 12 encrypts the plaintext information based on the method provided by the embodiment of the present application to obtain ciphertext information. The server 12 sends the ciphertext information to the terminal 11, and the terminal 11 decrypts the ciphertext information by the method provided by the embodiment of the application to obtain plaintext information. Or the terminal 11 is provided with an application program capable of acquiring the plaintext information to be encrypted, after the application program acquires the plaintext information to be encrypted, the terminal 11 encrypts the plaintext information based on the method provided by the embodiment of the application, after ciphertext information is obtained, the terminal 11 sends the ciphertext information to the server 12, and the server 12 decrypts the ciphertext information based on the method provided by the embodiment of the application, so as to obtain the plaintext information.
Alternatively, the terminal 11 may be any electronic product that can perform human-computer interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction or a handwriting device, such as a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC), a tablet computer, a smart car machine, a smart television, a smart speaker, etc. The server 12 may be one server, a server cluster comprising a plurality of servers, or a cloud computing service center. The terminal 11 establishes a communication connection with the server 12 through a wired or wireless network.
Those skilled in the art will appreciate that the above-described terminal 11 and server 12 are only examples, and that other terminals or servers that may be present in the present application or in the future are applicable and within the scope of the present application and are incorporated herein by reference.
Based on the implementation environment shown in fig. 1, the embodiment of the present application provides an information encryption method, which may be performed by a terminal or a server. Taking the example that the method is applied to a terminal, the flow of the method is shown in fig. 2, and the method comprises steps 201 to 204.
In step 201, plaintext information to be encrypted is acquired, and an encryption object corresponding to the plaintext information to be encrypted is determined.
The embodiment of the application does not limit the plaintext information, and can be any information with confidentiality requirement. Illustratively, the plaintext information may be a picture with a security requirement, a text with a security requirement, an audio with a security requirement, or a video with a security requirement.
Optionally, the terminal acquires the plaintext information through an acquisition device. For example, a conference is held at site A, and the terminal records a conference video based on a video acquisition device. Because the conference is not disclosed externally, the recorded conference video has a security requirement, and the conference video is the plaintext information to be encrypted. Optionally, the terminal obtains the plaintext information stored in a storage space by accessing the storage space. The storage space may be a storage space of the terminal, or may be a storage space of a server communicatively connected to the terminal. For example, user A and user B together complete the writing of a text A which cannot be disclosed externally; the text A is stored in the storage space of the terminal, and the terminal obtains the text A by accessing the storage space. The text A is the plaintext information.
In one possible implementation manner, when obtaining the plaintext information to be encrypted, it is further required to determine the encryption object corresponding to the plaintext information to be encrypted. Illustratively, the encryption object corresponding to the plaintext information to be encrypted refers to the owner of the plaintext information, that is, an object that needs to keep the plaintext information secret. Taking the conference video in the above embodiment as the plaintext information as an example, the encryption objects corresponding to the plaintext information are the conference participants. Taking the text A in the above embodiment as the plaintext information as an example, the encryption objects corresponding to the plaintext information are the writers of the text A, that is, user A and user B.
It should be noted that the above examples are intended to illustrate the relationship between the encryption object and the plaintext information, not to limit the encryption object. The encryption object may be any object that needs to keep the plaintext information secret, and the number of encryption objects may be any number, which is not limited in the embodiments of the present application.
In step 202, in response to there being a plurality of encryption objects, a plurality of first voiceprint features from the plurality of encryption objects are acquired, wherein any encryption object in the plurality of encryption objects corresponds to one first voiceprint feature.
In one possible implementation, a plaintext message corresponds to a plurality of encrypted objects, and a plurality of first voiceprint features of the plurality of encrypted objects are acquired. The manner in which the plurality of first voiceprint features are acquired includes, but is not limited to: acquiring a plurality of first audio data from a plurality of encrypted objects, wherein any encrypted object in the plurality of encrypted objects corresponds to at least one first audio data; and extracting voiceprint features of each first audio data in the plurality of first audio data to obtain a plurality of first voiceprint features.
Illustratively, the terminal provides an audio data input interface on the basis of which the encryption object inputs audio data, and the terminal takes the received audio data as the first audio data. The audio data input interface may be a MIC (microphone), or may be other devices that may be used to collect audio data, which is not limited in this embodiment of the present application. The microphone for inputting audio data may be a microphone externally connected to the terminal or may be a microphone built in the terminal. The microphone connected to the terminal may be connected by a wire or by a wireless connection.
When the encryption object inputs the audio data based on the audio data input interface, the text read by the encryption object may be any text, or text provided by the terminal. For example, the terminal displays text on a screen for prompting the encryption object to input audio data while providing the audio data input interface. The embodiment of the application does not limit the text provided by the terminal, which may be the name of the plaintext information or randomly generated characters. By displaying the text on the screen, the terminal prompts the encryption object when the first audio data is input, which reduces the thinking time of the encryption object and improves the efficiency with which the terminal acquires the plurality of first audio data. Alternatively, when the encryption object inputs audio data based on the audio data input interface, the audio data may be input a plurality of times; increasing the number of inputs improves the quality of the input audio data, enabling the terminal to acquire first audio data of higher quality. Regarding audio data input multiple times by the same encryption object, the content of the audio data may be the same or different, which is not limited in the embodiment of the present application. The content of the audio data refers to the text read by the encryption object when the audio data is input.
In one possible implementation, since non-voice audio data, such as ambient noise, exists in the first audio data, the terminal may also pre-process the obtained first audio data before extracting the voiceprint features of each of the plurality of first audio data. The embodiment of the application is not limited to the preprocessing mode, and can be realized based on the VAD (Voice Activity Detection, voice endpoint detection) technology. For example, a start position and an end position of voice audio data in the first audio data are located based on the VAD technology, and the first audio data are segmented according to the start position and the end position, so as to obtain the voice audio data in the first audio data. By preprocessing, separation of voice audio data from non-voice audio data in the first audio data is realized, so that voiceprint features of the first audio data are extracted later more accurately.
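The VAD-based preprocessing described above can be sketched with a short-time-energy detector. This is a minimal stand-in for a full VAD algorithm; the frame length, threshold, and function name `vad_endpoints` are illustrative assumptions.

```python
import numpy as np

def vad_endpoints(samples: np.ndarray, frame_len: int = 160,
                  threshold: float = 0.01) -> tuple:
    """Return (start, end) sample indices of the voiced region using
    short-time energy (a minimal stand-in for a full VAD algorithm)."""
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)           # mean energy per frame
    voiced = np.where(energy > threshold)[0]
    if voiced.size == 0:                          # no speech detected
        return 0, 0
    return voiced[0] * frame_len, (voiced[-1] + 1) * frame_len

# One second of silence, one second of a 440 Hz tone, one second of silence.
sr = 8000
t = np.arange(sr) / sr
audio = np.concatenate([np.zeros(sr), 0.5 * np.sin(2 * np.pi * 440 * t), np.zeros(sr)])
start, end = vad_endpoints(audio)
speech = audio[start:end]                         # the segmented voice audio data
```

The slice `audio[start:end]` corresponds to segmenting the first audio data at the located start and end positions, so that only voice audio data is passed on to feature extraction.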
The embodiment of the present application does not limit the manner of extracting the voiceprint features of each first audio data: MFCC (Mel-Frequency Cepstral Coefficients) may be extracted from the first audio data as the first voiceprint feature, LPCC (Linear Predictive Cepstral Coefficients) may be extracted from the first audio data as the first voiceprint feature, or other voiceprint feature extraction techniques may be used. Note that, in the case where the same encryption object inputs a plurality of first audio data as shown in the above embodiment, the voiceprint characteristics of the encryption object are independent of the content of the first audio data. Therefore, the voiceprint features extracted from the plurality of first audio data of the same encryption object are the same; that is, the plurality of first audio data of the same encryption object correspond to one first voiceprint feature.
In step 203, a first hybrid voiceprint feature is determined based on the plurality of first voiceprint features.
Determining a first hybrid voiceprint feature based on the plurality of first voiceprint features includes, but is not limited to: determining a first segmentation order; segmenting each first voiceprint feature of the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any one of the plurality of first voiceprint features corresponds to one first segmentation result; and concatenating the plurality of first segmentation results to obtain the first hybrid voiceprint feature. Illustratively, the first segmentation order may be determined in, but not limited to, the following two ways.
In the first determination mode, the terminal determines the first segmentation order based on the information of each first voiceprint feature.
The information of the first voiceprint feature may be its acquisition time, its name, or other information, which is not limited in the embodiment of the present application. When the information is the acquisition time, the terminal may arrange the first voiceprint features from earliest to latest acquisition time and use the arrangement result as the first segmentation order. When the information is the name, the terminal may arrange the first voiceprint features by name according to a preset alphabetical table and use the arrangement result as the first segmentation order, where the alphabetical table may be set based on empirical values. Regarding how the name of a first voiceprint feature is acquired: optionally, the terminal provides an audio data input interface and an information input interface; after the encryption object inputs the first audio data via the audio data input interface, it can input the name of the first audio data via the information input interface, and the terminal uses the received name as the name of the first voiceprint feature corresponding to that first audio data. Of course, the encryption object may also input the name of the first audio data via the information input interface before inputting the first audio data.
It should be noted that, for the case shown in the above embodiment where the same encryption object inputs a plurality of first audio data and one first voiceprint feature is obtained from them, if the names the encryption object entered for the individual first audio data differ, the terminal may either randomly pick one of those names as the name of that encryption object's first voiceprint feature, or use the last-entered name.
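The first determination mode above can be sketched as a sort over per-feature metadata. The dictionary field names (`id`, `acquired_at`, `name`) are assumptions for illustration, and plain lexicographic ordering stands in for the preset alphabetical table mentioned above.

```python
# Hypothetical sketch of the first determination mode: order first
# voiceprint features by acquisition time or by name. Field names are
# illustrative assumptions, not part of the embodiment.

def segmentation_order(features, by="time"):
    """Return feature ids sorted by acquisition time (earliest first) or
    by name (lexicographic order as a stand-in for the alphabetical table)."""
    key = (lambda f: f["acquired_at"]) if by == "time" else (lambda f: f["name"])
    return [f["id"] for f in sorted(features, key=key)]
```

For example, three features acquired at times 2, 1, 3 yield a time-based order that differs from their name-based order, matching the two arrangements described above.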
In the second determination mode, the terminal determines the first segmentation order based on user requirements.
The user may be an encryption object or another object related to the plaintext information. For example, after the terminal acquires the plurality of first voiceprint features of the plurality of encryption objects, it displays their names on the screen; the user clicks the names in turn, and the terminal takes the click order as the first segmentation order. Optionally, the terminal provides an information input interface, the user inputs the names of the first voiceprint features in sequence, and the terminal takes the input order as the first segmentation order. Optionally, the terminal provides a plurality of sorting methods together with corresponding controls, and the user determines the first segmentation order by triggering the control of a sorting method. The control may be triggered by a voice operation or by a click operation, which is not limited in the embodiment of the present application. The sorting methods provided by the terminal are similar to those shown in the first determination mode and are not repeated here.
The embodiment of the present application does not limit the manner of segmenting a first voiceprint feature. Optionally, the portion of the first voiceprint feature from (n-1)/n of its length to its end (i.e., from (n-1)/n to 1) is retained, where n is a positive integer denoting the position of that first voiceprint feature in the segmentation order. For example, the first voiceprint feature A (113344) is the first to be segmented, i.e., n is 1; its first segmentation result spans from the start position (0) to the end position (1) of feature A, i.e., the first segmentation result corresponding to feature A is 113344. For another example, the first voiceprint feature B (112244) is the second to be segmented, i.e., n is 2; its first segmentation result spans from the 1/2 position of feature B to the end position (1), i.e., the first segmentation result corresponding to feature B is 244. The segmentation manner of each first voiceprint feature is thus determined by its position in the segmentation order, and applying different segmentation manners to different first voiceprint features improves the confidentiality of the first hybrid voiceprint feature obtained by concatenating the segmentation results. In addition, since a first voiceprint feature later in the segmentation order retains less voiceprint feature data, the total amount of voiceprint feature data across the first segmentation results is effectively controlled for subsequent operations.
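The (n-1)/n-to-1 segmentation above can be sketched in a few lines; the function name is an assumption, and the string features stand in for real voiceprint feature vectors.

```python
# Sketch of the optional segmentation manner: the nth feature in the
# segmentation order keeps only its tail from (n-1)/n of its length on.

def keep_tail(feature, n):
    """Return the portion of `feature` from (n-1)/n of its length to the
    end; n is the feature's 1-based position in the segmentation order
    (n = 1 keeps the whole feature)."""
    start = (n - 1) * len(feature) // n
    return feature[start:]
```

With the examples from the text, `keep_tail("113344", 1)` keeps the whole feature and `keep_tail("112244", 2)` keeps the second half, 244.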
After each first voiceprint feature is segmented to obtain the plurality of first segmentation results, these results can be concatenated to obtain the first hybrid voiceprint feature. In one possible implementation, the concatenation order is the same as the first segmentation order: the terminal concatenates the plurality of first segmentation results according to the first segmentation order to obtain the first hybrid voiceprint feature, as shown in Equation 1.
First hybrid voiceprint feature = F(t1) + F(t2) + … + F(tn-1) + F(tn)    (Equation 1)
where n is a positive integer denoting the position in the segmentation order, tn denotes the nth first voiceprint feature to be segmented, F(tn) denotes the portion of tn from (n-1)/n of its length to its end, and F(tn-1) + F(tn) denotes the concatenation of F(tn-1) and F(tn). Taking n as 2 as an example, if F(t1) is 11 and F(t2) is 22, then F(t1) + F(t2) = 1122, i.e., the first hybrid voiceprint feature is 1122.
Of course, the terminal may also concatenate the plurality of first segmentation results in another order, which is not limited in the embodiment of the present application. By segmenting each first voiceprint feature and selecting only a portion of it for concatenation, the amount of voiceprint feature data is reduced, which improves concatenation efficiency. Meanwhile, the length of the resulting first hybrid voiceprint feature is controlled, which improves encryption efficiency when the plaintext information is subsequently encrypted based on it. In addition, a first hybrid voiceprint feature obtained by concatenating selected portions of the first voiceprint features has higher confidentiality than one obtained by concatenating the plurality of first voiceprint features without segmentation.
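Equation 1 can be sketched end to end: segment each feature by its position in the segmentation order, then concatenate the kept parts. The function name is an assumption, and the string features are illustrative stand-ins for real voiceprint data.

```python
# Hypothetical sketch of Equation 1: the nth feature (1-based) in the
# segmentation order keeps its tail from (n-1)/n of its length onward,
# and the kept parts are concatenated in the same order.

def mix_voiceprints(features):
    """`features` is a list of voiceprint features already arranged in
    the first segmentation order; returns the hybrid voiceprint feature."""
    parts = []
    for n, feature in enumerate(features, start=1):
        start = (n - 1) * len(feature) // n  # keep the (n-1)/n..1 portion
        parts.append(feature[start:])
    return "".join(parts)
```

With the features A (113344) and B (112244) from the earlier example, the hybrid feature is 113344 followed by 244, i.e., 113344244.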
In step 204, the plaintext information to be encrypted is encrypted based on the first hybrid voiceprint feature, and ciphertext information corresponding to the plaintext information is obtained.
Encrypting the plaintext information to be encrypted based on the first hybrid voiceprint feature means encrypting the plaintext information with the first hybrid voiceprint feature as the key. Illustratively, the plaintext information and the first hybrid voiceprint feature are input to an encryption model, the plaintext information is encrypted based on the encryption model, and the ciphertext information is output. The embodiment of the present application does not limit the encryption model; it may be any model that executes a symmetric encryption algorithm. For example, the encryption model may implement encryption based on AES (Advanced Encryption Standard), or based on DES (Data Encryption Standard).
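Keying a symmetric cipher with the hybrid voiceprint feature can be sketched as follows. Note the hedge: a real implementation would use AES or DES via a cryptographic library; here a SHA-256-derived XOR keystream stands in for the block cipher so the sketch stays dependency-free. This toy cipher is not secure and is for illustration only; all function names are assumptions.

```python
import hashlib

# Toy symmetric cipher keyed by the hybrid voiceprint feature.
# Stand-in for AES/DES; NOT secure, illustration only.

def keystream(key, length):
    """Derive `length` pseudo-random bytes from `key` by hashing a counter."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(plaintext, hybrid_voiceprint):
    """Encrypt plaintext with the hybrid voiceprint feature as the key."""
    key = hybrid_voiceprint.encode("utf-8")
    data = plaintext.encode("utf-8")
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def decrypt(ciphertext, hybrid_voiceprint):
    """XOR is its own inverse, so decryption reuses the same keystream."""
    key = hybrid_voiceprint.encode("utf-8")
    return bytes(a ^ b for a, b in
                 zip(ciphertext, keystream(key, len(ciphertext)))).decode("utf-8")
```

Decryption with the same hybrid voiceprint feature recovers the plaintext, while a different key produces different ciphertext, which is the property the decryption embodiment below relies on.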
In summary, in the information encryption method provided by the embodiment of the present application, a first hybrid voiceprint feature carrying the voiceprint characteristics of a plurality of encryption objects is obtained by concatenating the first voiceprint features of the plurality of encryption objects. Segmenting the first voiceprint features during concatenation improves the confidentiality of the resulting first hybrid voiceprint feature. Meanwhile, since information encryption involving a plurality of encryption objects can be completed with a single first hybrid voiceprint feature, encryption efficiency is improved while high confidentiality is ensured.
Based on the implementation environment shown in fig. 1, the embodiment of the present application provides an information decryption method, which may be performed by a terminal or a server. Taking the example that the method is applied to a terminal, the flow of the method is shown in fig. 3, and the method comprises steps 301 to 304.
In step 301, ciphertext information to be decrypted is obtained, and a decryption object corresponding to the ciphertext information is determined; the ciphertext information is obtained by encrypting plaintext information based on a first hybrid voiceprint feature derived from the first voiceprint features of a plurality of encryption objects.
Optionally, the process of obtaining the ciphertext information to be decrypted is detailed in the embodiment shown in fig. 2 and is not repeated here. The decryption object corresponding to the ciphertext information is an object that decrypts the ciphertext information. Taking the plaintext information being the video of a conference not disclosed externally as an example, a conference participant who needs to review the video in order to write the meeting minutes begins to decrypt the corresponding ciphertext information; at this moment, the participant is a decryption object corresponding to the ciphertext information. Taking the plaintext information being a document not disclosed externally and written jointly by user A and user B as another example, when user C needs to view the document content and begins to decrypt the corresponding ciphertext information, user C is a decryption object corresponding to the ciphertext information.
In step 302, in response to there being a plurality of decryption objects, a plurality of second voiceprint features from the plurality of decryption objects are acquired, any one of the plurality of decryption objects corresponding to one second voiceprint feature.
In one possible implementation, the manner in which the plurality of second voiceprint features are obtained includes, but is not limited to: acquiring a plurality of second audio data from a plurality of decryption objects, wherein any one of the plurality of decryption objects corresponds to at least one second audio data; and extracting voiceprint features of each second audio data in the plurality of second audio data to obtain a plurality of second voiceprint features.
Alternatively, the method for obtaining the plurality of second audio data is similar to the method for obtaining the plurality of first audio data in the embodiment shown in fig. 2, and the method for extracting the voiceprint features of each second audio data is similar to the method for extracting the voiceprint features of each first audio data in the embodiment shown in fig. 2, which are not repeated herein.
It should be noted that, since the voiceprint features of the first audio data may be extracted by any of the approaches shown in the embodiment of fig. 2, the voiceprint feature extraction model used for the second audio data must be consistent with the voiceprint feature extraction model used for the first audio data.
In step 303, a second hybrid voiceprint feature is determined based on the plurality of second voiceprint features.
Illustratively: a second segmentation order is determined; each second voiceprint feature of the plurality of second voiceprint features is segmented according to the second segmentation order to obtain a plurality of second segmentation results, wherein any one of the plurality of second voiceprint features corresponds to one second segmentation result; and the plurality of second segmentation results are concatenated to obtain a second hybrid voiceprint feature.
The method for determining the second segmentation order is similar to that for determining the first segmentation order in the embodiment shown in fig. 2 and is not repeated here. The second segmentation order may be the same as or different from the first segmentation order, which is not limited in the embodiment of the present application. The method for segmenting each second voiceprint feature based on the second segmentation order is similar to the method for segmenting each first voiceprint feature based on the first segmentation order in the embodiment shown in fig. 2 and is not described in detail.
Regarding concatenation of the plurality of second segmentation results: in one possible implementation, they may be concatenated in the second segmentation order to obtain the second hybrid voiceprint feature. Of course, the concatenation order may also differ from the second segmentation order, which is not limited in the embodiment of the present application.
In step 304, the ciphertext information to be decrypted is decrypted based on the second hybrid voiceprint feature, and plaintext information corresponding to the ciphertext information is obtained.
Illustratively, in response to the ciphertext information to be decrypted having been obtained based on an encryption model, the decryption model corresponding to that encryption model is determined; the ciphertext information and the second hybrid voiceprint feature are input to the decryption model, the ciphertext information is decrypted based on the decryption model, and the plaintext information is output. The decryption model corresponding to the encryption model is one that executes the inverse of the algorithm executed by the encryption model.
In one possible implementation, when the second hybrid voiceprint feature is consistent with the first hybrid voiceprint feature used to encrypt the plaintext information, the ciphertext information can be decrypted based on the second hybrid voiceprint feature and the decryption model to obtain the corresponding plaintext information. That is, the plurality of decryption objects can obtain the plaintext information only when they are the same as the encryption objects corresponding to the plaintext information.
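The match-or-fail property above can be sketched end to end. As before, the string features are illustrative, and a SHA-256-derived XOR keystream stands in for a real symmetric cipher such as AES; the function names are assumptions and the toy cipher is for illustration only.

```python
import hashlib

# Sketch: decryption recovers the plaintext only when the decryption
# objects' hybrid voiceprint equals the encryption objects' hybrid
# voiceprint. Toy cipher, NOT secure; illustration only.

def mix(features):
    """nth feature (1-based) keeps its tail from (n-1)/n of its length on."""
    return "".join(f[(n - 1) * len(f) // n:]
                   for n, f in enumerate(features, start=1))

def _stream(key, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def crypt(data, hybrid):
    """XOR keystream cipher; applying it twice with the same key is identity."""
    return bytes(a ^ b for a, b in
                 zip(data, _stream(hybrid.encode(), len(data))))
```

Decrypting with the hybrid feature of the same set of objects restores the plaintext, whereas a hybrid feature built from a different set of objects does not, so decryption requires all original encryption objects to participate.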
In summary, in the information decryption method provided by the embodiment of the present application, the plaintext information corresponding to the ciphertext information can be obtained only when the plurality of decryption objects are identical to the plurality of encryption objects. Therefore, when the plurality of encryption objects do not all agree to decrypt the ciphertext information, decryption cannot be completed. In addition, decryption of the ciphertext information is completed based on a single second hybrid voiceprint feature, which improves decryption efficiency without lowering the difficulty of obtaining the plaintext information corresponding to the ciphertext information.
Referring to fig. 4, an embodiment of the present application provides an information encryption apparatus, including: an acquisition module 401, a determination module 402 and an encryption module 403.
An obtaining module 401, configured to obtain plaintext information to be encrypted, and determine an encryption object corresponding to the plaintext information to be encrypted;
The obtaining module 401 is further configured to acquire, in response to there being a plurality of encryption objects, a plurality of first voiceprint features from the plurality of encryption objects, where any one of the plurality of encryption objects corresponds to one first voiceprint feature;
a determining module 402 for determining a first hybrid voiceprint feature based on the plurality of first voiceprint features;
The encryption module 403 is configured to encrypt plaintext information to be encrypted based on the first hybrid voiceprint feature, so as to obtain ciphertext information corresponding to the plaintext information.
Optionally, the determining module 402 is configured to determine a first segmentation order; segment each first voiceprint feature of the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any one of the plurality of first voiceprint features corresponds to one first segmentation result; and concatenate the plurality of first segmentation results to obtain the first hybrid voiceprint feature.
Optionally, the determining module 402 is configured to concatenate the plurality of first segmentation results according to the first segmentation order to obtain the first hybrid voiceprint feature.
Optionally, the encryption module 403 is configured to input plaintext information and the first hybrid voiceprint feature to the encryption model, encrypt the plaintext information based on the encryption model, and output ciphertext information.
Optionally, the obtaining module 401 is configured to obtain a plurality of first audio data from a plurality of encrypted objects, where any one of the plurality of encrypted objects corresponds to at least one first audio data; and extracting voiceprint features of each first audio data in the plurality of first audio data to obtain a plurality of first voiceprint features.
The apparatus obtains a first hybrid voiceprint feature carrying the voiceprint characteristics of a plurality of encryption objects by concatenating the first voiceprint features of the plurality of encryption objects. Since information encryption involving a plurality of encryption objects can be completed with a single first hybrid voiceprint feature, encryption efficiency is improved while high confidentiality is ensured.
Referring to fig. 5, an embodiment of the present application provides an information decryption apparatus including: an acquisition module 501, a determination module 502 and a decryption module 503.
The obtaining module 501 is configured to obtain ciphertext information to be decrypted and determine a decryption object corresponding to the ciphertext information, where the ciphertext information is obtained by encrypting plaintext information based on a first hybrid voiceprint feature derived from the first voiceprint features of a plurality of encryption objects;
the obtaining module 501 is further configured to acquire, in response to there being a plurality of decryption objects, a plurality of second voiceprint features from the plurality of decryption objects, where any one of the plurality of decryption objects corresponds to one second voiceprint feature;
a determining module 502 configured to determine a second hybrid voiceprint feature based on the plurality of second voiceprint features;
and the decryption module 503 is configured to decrypt the ciphertext information to be decrypted based on the second hybrid voiceprint feature, to obtain plaintext information corresponding to the ciphertext information.
Optionally, the determining module 502 is configured to determine a second segmentation order; segment each second voiceprint feature of the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein any one of the plurality of second voiceprint features corresponds to one second segmentation result; and concatenate the plurality of second segmentation results to obtain the second hybrid voiceprint feature.
Optionally, the determining module 502 is configured to concatenate the plurality of second segmentation results according to the second segmentation order to obtain the second hybrid voiceprint feature.
Optionally, a decryption module 503 is configured to determine a decryption model corresponding to the encryption model in response to ciphertext information to be decrypted being acquired based on the encryption model; and inputting ciphertext information and the second mixed voiceprint feature into the decryption model, decrypting the ciphertext information based on the decryption model, and outputting plaintext information.
Optionally, the obtaining module 501 is configured to obtain a plurality of second audio data from a plurality of decryption objects, where any one of the plurality of decryption objects corresponds to at least one second audio data; and extracting voiceprint features of each second audio data in the plurality of second audio data to obtain a plurality of second voiceprint features.
When the apparatus decrypts the ciphertext information, because the acquired second hybrid voiceprint feature carries the voiceprint characteristics of the plurality of decryption objects, information decryption involving the plurality of decryption objects can be completed based on a single second hybrid voiceprint feature, improving decryption efficiency without lowering the decryption difficulty.
It should be noted that, when the apparatus provided in the foregoing embodiment performs the functions thereof, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application, where the server may include one or more processors (Central Processing Units, CPU) 601 and one or more memories 602, where the one or more memories 602 store at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 601, so that the server implements the information encryption and information decryption methods provided in the foregoing method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
Fig. 7 is a schematic structural diagram of a network device according to an embodiment of the present application. The device may be a terminal, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. A terminal may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to cause the terminal to implement the information encryption and information decryption methods provided by the method embodiments of the present application.
In some embodiments, the terminal may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display 705, a camera assembly 706, audio circuitry 707, a positioning assembly 708, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, it also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, disposed on the front panel of the terminal; in other embodiments, there may be at least two displays 705, respectively disposed on different surfaces of the terminal or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved or folded surface of the terminal. The display 705 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 705 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize background blurring by fusing the main camera and the depth-of-field camera, panoramic and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 701 for processing or to the radio frequency circuit 704 for voice communication. For stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different parts of the terminal. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to supply power to the various components in the terminal. The power supply 709 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyroscope sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal. For example, the acceleration sensor 711 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used for the acquisition of motion data of a game or a user.
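As an illustrative sketch only (not part of the claimed subject matter; the function name and decision rule are assumptions), the landscape/portrait decision from gravity components might look like:

```python
def choose_orientation(gx: float, gy: float) -> str:
    # Landscape when gravity lies mostly along the terminal's x axis,
    # portrait when it lies mostly along the y axis. A real processor
    # would also debounce readings and consider the z axis.
    return "landscape" if abs(gx) > abs(gy) else "portrait"
```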
The gyroscope sensor 712 can detect the body orientation and rotation angle of the terminal, and can cooperate with the acceleration sensor 711 to collect the user's 3D actions on the terminal. Based on the data collected by the gyroscope sensor 712, the processor 701 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed on a side frame of the terminal and/or beneath the display screen 705. When the pressure sensor 713 is disposed on a side frame of the terminal, it can detect the user's grip signal on the terminal, and the processor 701 performs left-right hand recognition or quick operations according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed beneath the display screen 705, the processor 701 controls the operability controls on the UI according to the user's pressure operation on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect the user's fingerprint, and the processor 701 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the user's identity based on the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal. When a physical key or vendor Logo is provided on the terminal, the fingerprint sensor 714 may be integrated with the physical key or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
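A minimal sketch of such brightness control, assuming a logarithmic mapping and illustrative lux thresholds (none of these values come from the patent):

```python
import math

def display_brightness(lux: float, lo: float = 10.0, hi: float = 10000.0) -> float:
    # Map ambient light intensity (lux) to a relative display brightness
    # in [0.1, 1.0]; brightness is turned up as ambient light increases.
    if lux <= lo:
        return 0.1
    if lux >= hi:
        return 1.0
    # Interpolate on a logarithmic scale, which better matches perceived brightness.
    t = (math.log10(lux) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))
    return 0.1 + 0.9 * t
```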
A proximity sensor 716, also referred to as a distance sensor, is typically provided on the front panel of the terminal. The proximity sensor 716 is used to collect the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the terminal gradually decreases, the processor 701 controls the display 705 to switch from the bright screen state to the off screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal gradually increases, the processor 701 controls the display screen 705 to switch from the off-screen state to the on-screen state.
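A hedged sketch of this screen-state logic; the 5 cm threshold and the function shape are assumptions for illustration only:

```python
def next_screen_state(state: str, prev_cm: float, curr_cm: float) -> str:
    # Turn the screen off as the user approaches the front of the terminal,
    # and back on as the user moves away (threshold is an assumed value).
    NEAR_CM = 5.0
    if state == "on" and curr_cm < prev_cm and curr_cm < NEAR_CM:
        return "off"
    if state == "off" and curr_cm > prev_cm and curr_cm >= NEAR_CM:
        return "on"
    return state
```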
Those skilled in the art will appreciate that the structure shown in fig. 7 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer device is also provided. The computer device includes a processor and a memory, and the memory stores at least one computer program. The at least one computer program is loaded and executed by one or more processors to cause the computer device to implement any one of the information encryption methods described above, or to implement any one of the information decryption methods described above.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which at least one computer program is stored. The at least one computer program is loaded and executed by a processor of a computer device to cause the computer device to implement any one of the information encryption methods described above, or to implement any one of the information decryption methods described above.
In one possible implementation, the computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and so on.
In an exemplary embodiment, a computer program product or a computer program is also provided. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements any one of the information encryption methods described above, or implements any one of the information decryption methods described above.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, displayed data, etc.), and signals involved in the present application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the plaintext information referred to in the present application is obtained under sufficient authorization.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The above embodiments are merely exemplary embodiments of the present application and are not intended to limit the present application, any modifications, equivalent substitutions, improvements, etc. that fall within the principles of the present application should be included in the scope of the present application.

Claims (12)

1. An information encryption method, characterized in that the method comprises:
Acquiring plaintext information to be encrypted, and determining an encryption object corresponding to the plaintext information to be encrypted;
in response to there being a plurality of encryption objects, obtaining a plurality of first voiceprint features from the plurality of encryption objects, wherein any encryption object in the plurality of encryption objects corresponds to one first voiceprint feature, and the first voiceprint feature is extracted from first audio data of the encryption object;
determining a first segmentation order; segmenting each first voiceprint feature of the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any one of the plurality of first voiceprint features corresponds to one first segmentation result, the segmentation manner of each first voiceprint feature is determined based on the first segmentation order of each first voiceprint feature, different first voiceprint features use different segmentation manners, and each first segmentation result is a part of the first voiceprint feature corresponding to that first segmentation result; and splicing the plurality of first segmentation results to obtain a first mixed voiceprint feature;
encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
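The segmentation-and-splicing scheme of claim 1 can be sketched as follows. This is a hypothetical instantiation, not the patented implementation: equal-length slices stand in for the per-feature segmentation manners, and an XOR keystream keyed by a hash of the mixed feature stands in for the encryption model; all names are illustrative.

```python
import hashlib
from typing import List

def build_mixed_feature(features: List[bytes], order: List[int]) -> bytes:
    # Each first voiceprint feature contributes one distinct slice; the
    # slice position is determined by the feature's rank in the
    # segmentation order, so different features use different
    # segmentation manners. The slices are then spliced in that order.
    n = len(features)
    parts = []
    for rank, idx in enumerate(order):
        feat = features[idx]
        k = len(feat) // n
        parts.append(feat[rank * k:(rank + 1) * k])
    return b"".join(parts)

def encrypt(plaintext: bytes, mixed_feature: bytes) -> bytes:
    # Stand-in encryption model: XOR with a keystream derived from the
    # mixed voiceprint feature (illustrative, not the patented model).
    key = hashlib.sha256(mixed_feature).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
```

Because the XOR stand-in is its own inverse, applying the same transform with the same mixed feature recovers the plaintext.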
2. The method of claim 1, wherein the splicing the plurality of first segmentation results to obtain a first mixed voiceprint feature comprises:
splicing the plurality of first segmentation results according to the first segmentation order to obtain the first mixed voiceprint feature.
3. The method according to claim 1 or 2, wherein the encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information comprises:
inputting the plaintext information and the first mixed voiceprint feature to an encryption model, encrypting the plaintext information based on the encryption model, and outputting the ciphertext information.
4. The method of claim 1 or 2, wherein the obtaining a plurality of first voiceprint features from a plurality of encryption objects comprises:
acquiring a plurality of first audio data from the plurality of encryption objects, wherein any one of the plurality of encryption objects corresponds to at least one first audio data;
and extracting voiceprint features of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint features.
5. A method of decrypting information, the method comprising:
obtaining ciphertext information to be decrypted, and determining a decryption object corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature derived from first voiceprint features of a plurality of encryption objects;
in response to there being a plurality of decryption objects, obtaining a plurality of second voiceprint features from the plurality of decryption objects, wherein any decryption object in the plurality of decryption objects corresponds to one second voiceprint feature, and the second voiceprint feature is extracted from second audio data of the decryption object;
determining a second segmentation order; segmenting each second voiceprint feature of the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein any one of the plurality of second voiceprint features corresponds to one second segmentation result, the segmentation manner of each second voiceprint feature is determined based on the second segmentation order of each second voiceprint feature, different second voiceprint features use different segmentation manners, and each second segmentation result is a part of the second voiceprint feature corresponding to that second segmentation result; and splicing the plurality of second segmentation results to obtain a second mixed voiceprint feature;
And decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
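Claim 5's decryption path can be sketched under assumed conventions (equal-length slices for the segmentation manners and an XOR keystream as a stand-in for the actual decryption model; all names are illustrative):

```python
import hashlib
from typing import List

def rebuild_mixed_feature(features: List[bytes], order: List[int]) -> bytes:
    # The decryption side rebuilds a mixed voiceprint feature the same
    # way: each second voiceprint feature contributes one distinct slice,
    # spliced in the second segmentation order (equal-slice assumption).
    n = len(features)
    parts = []
    for rank, idx in enumerate(order):
        feat = features[idx]
        k = len(feat) // n
        parts.append(feat[rank * k:(rank + 1) * k])
    return b"".join(parts)

def xor_transform(data: bytes, mixed_feature: bytes) -> bytes:
    # Stand-in decryption model: the XOR keystream cipher is its own
    # inverse, so the same transform recovers the plaintext only when the
    # second mixed feature matches the one used during encryption.
    key = hashlib.sha256(mixed_feature).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```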
6. The method of claim 5, wherein the splicing the plurality of second segmentation results to obtain a second mixed voiceprint feature comprises:
and splicing the plurality of second segmentation results according to the second segmentation order to obtain the second mixed voiceprint feature.
7. The method according to claim 5 or 6, wherein the decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information comprises:
in response to the ciphertext information to be decrypted being encrypted based on an encryption model, determining a decryption model corresponding to the encryption model;
and inputting the ciphertext information and the second mixed voiceprint feature to the decryption model, decrypting the ciphertext information based on the decryption model, and outputting the plaintext information.
8. The method of claim 5 or 6, wherein the obtaining a plurality of second voiceprint features from a plurality of decryption objects comprises:
acquiring a plurality of second audio data from the plurality of decryption objects, wherein any one of the plurality of decryption objects corresponds to at least one second audio data;
And extracting voiceprint features of each second audio data in the plurality of second audio data to obtain the plurality of second voiceprint features.
9. An information encryption apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring plaintext information to be encrypted and determining an encryption object corresponding to the plaintext information to be encrypted;
the acquisition module is further used for, in response to there being a plurality of encryption objects, acquiring a plurality of first voiceprint features from the plurality of encryption objects, wherein any encryption object in the plurality of encryption objects corresponds to one first voiceprint feature, and the first voiceprint feature is extracted from first audio data of the encryption object;
a determining module, used for determining a first segmentation order; segmenting each first voiceprint feature of the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any one of the plurality of first voiceprint features corresponds to one first segmentation result, the segmentation manner of each first voiceprint feature is determined based on the first segmentation order of each first voiceprint feature, different first voiceprint features use different segmentation manners, and each first segmentation result is a part of the first voiceprint feature corresponding to that first segmentation result; and splicing the plurality of first segmentation results to obtain a first mixed voiceprint feature;
and the encryption module is used for encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
10. An information decryption apparatus, the apparatus comprising:
an acquisition module, used for acquiring ciphertext information to be decrypted and determining a decryption object corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature derived from first voiceprint features of a plurality of encryption objects;
the acquisition module is further used for, in response to there being a plurality of decryption objects, acquiring a plurality of second voiceprint features from the plurality of decryption objects, wherein any decryption object in the plurality of decryption objects corresponds to one second voiceprint feature, and the second voiceprint feature is extracted from second audio data of the decryption object;
a determining module, used for determining a second segmentation order; segmenting each second voiceprint feature of the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein any one of the plurality of second voiceprint features corresponds to one second segmentation result, the segmentation manner of each second voiceprint feature is determined based on the second segmentation order of each second voiceprint feature, different second voiceprint features use different segmentation manners, and each second segmentation result is a part of the second voiceprint feature corresponding to that second segmentation result; and splicing the plurality of second segmentation results to obtain a second mixed voiceprint feature;
And the decryption module is used for decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
11. A computer device, characterized in that it comprises a processor and a memory, in which at least one computer program is stored, which is loaded and executed by the processor, to cause the computer device to implement the information encryption method according to any one of claims 1 to 4 or the information decryption method according to any one of claims 5 to 8.
12. A computer-readable storage medium, wherein at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor, so that a computer implements the information encryption method according to any one of claims 1 to 4, or the information decryption method according to any one of claims 5 to 8.
CN202210185272.8A 2022-02-28 2022-02-28 Information encryption and information decryption methods, devices, equipment and storage medium Active CN114598516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210185272.8A CN114598516B (en) 2022-02-28 2022-02-28 Information encryption and information decryption methods, devices, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114598516A CN114598516A (en) 2022-06-07
CN114598516B true CN114598516B (en) 2024-04-26

Family

ID=81814817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210185272.8A Active CN114598516B (en) 2022-02-28 2022-02-28 Information encryption and information decryption methods, devices, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114598516B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001144748A (en) * 1999-11-11 2001-05-25 Sony Corp Device and method for generating cryptographic key, device and method for enciphering and deciphering, and program providing medium
JP2001168854A (en) * 1999-12-13 2001-06-22 Sony Corp Encryption key generator, encryption/decoding device and encryption key generating method, encryption/ decoding method, and program service medium
JP2001168855A (en) * 1999-12-13 2001-06-22 Sony Corp Encryption key generator, encryption/decoding device and encryption key generating method, encryption/ decoding method, and program service medium
CN105991290A (en) * 2015-03-06 2016-10-05 科大讯飞股份有限公司 Pseudo random voiceprint cipher text generation method and system
WO2019085575A1 (en) * 2017-11-02 2019-05-09 阿里巴巴集团控股有限公司 Voiceprint authentication method and apparatus, and account registration method and apparatus
CN110677260A (en) * 2019-09-29 2020-01-10 京东方科技集团股份有限公司 Authentication method, authentication device, electronic equipment and storage medium
CN111756741A (en) * 2020-06-24 2020-10-09 安徽听见科技有限公司 Data transmission method, device, equipment and storage medium
CN112053695A (en) * 2020-09-11 2020-12-08 北京三快在线科技有限公司 Voiceprint recognition method and device, electronic equipment and storage medium
CN113571068A (en) * 2021-07-27 2021-10-29 上海明略人工智能(集团)有限公司 Method and device for voice data encryption, electronic equipment and readable storage medium
CN113762971A (en) * 2021-05-17 2021-12-07 腾讯科技(深圳)有限公司 Data encryption method and device, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhen Huang; Xiaomei Zhang; Lei Wang; Zhengying Li. Study and implementation of voiceprint identity authentication for Android mobile terminal. 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). 2018, full text. *
Voiceprint-based access control and file encryption system for Android mobile phones; Zhang Min; Li Ming; Li Zheng; Jiang Jialin; Information Network Security (Issue 04); full text *

Also Published As

Publication number Publication date
CN114598516A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN110491358B (en) Method, device, equipment, system and storage medium for audio recording
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN111445901B (en) Audio data acquisition method and device, electronic equipment and storage medium
CN111142838B (en) Audio playing method, device, computer equipment and storage medium
CN111462742B (en) Text display method and device based on voice, electronic equipment and storage medium
CN111241499B (en) Application program login method, device, terminal and storage medium
CN112788359B (en) Live broadcast processing method and device, electronic equipment and storage medium
CN111276122B (en) Audio generation method and device and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN111128115B (en) Information verification method and device, electronic equipment and storage medium
CN109448676B (en) Audio processing method, device and storage medium
CN108831423B (en) Method, device, terminal and storage medium for extracting main melody tracks from audio data
CN113362836B (en) Vocoder training method, terminal and storage medium
CN114598516B (en) Information encryption and information decryption methods, devices, equipment and storage medium
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN112214115B (en) Input mode identification method and device, electronic equipment and storage medium
CN114595019A (en) Theme setting method, device and equipment of application program and storage medium
CN112764824B (en) Method, device, equipment and storage medium for triggering identity verification in application program
CN111314205B (en) Instant messaging matching method, device, system, equipment and storage medium
CN113592874B (en) Image display method, device and computer equipment
CN112311652A (en) Message sending method, device, terminal and storage medium
CN111135571B (en) Game identification method, game identification device, terminal, server and readable storage medium
CN113539291B (en) Noise reduction method and device for audio signal, electronic equipment and storage medium
CN111613252B (en) Audio recording method, device, system, equipment and storage medium
CN112133267B (en) Audio effect processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant