CN114598516A - Information encryption method, information decryption method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114598516A
Authority
CN
China
Prior art keywords
voiceprint
information
encrypted
decryption
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210185272.8A
Other languages
Chinese (zh)
Other versions
CN114598516B (en)
Inventor
崔伟才
王磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wutong Chelian Technology Co Ltd
Original Assignee
Beijing Wutong Chelian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wutong Chelian Technology Co Ltd filed Critical Beijing Wutong Chelian Technology Co Ltd
Priority to CN202210185272.8A priority Critical patent/CN114598516B/en
Publication of CN114598516A publication Critical patent/CN114598516A/en
Application granted granted Critical
Publication of CN114598516B publication Critical patent/CN114598516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04L 63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks, wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • G10L 17/02: Speaker identification or verification techniques; preprocessing operations, e.g. segment selection; pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; feature selection or extraction
    • G10L 25/24: Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being the cepstrum
    • G10L 25/87: Detection of discrete points within a voice signal
    • H04L 9/0866: Generation of secret information including derivation or calculation of cryptographic keys or passwords involving user or device identifiers, e.g. serial number, physical or biometrical information, DNA, hand-signature or measurable physical characteristics
    • H04L 2463/062: Additional details relating to network security covered by H04L 63/00, applying encryption of the keys


Abstract

The application discloses an information encryption method, an information decryption method, a device, equipment, and a storage medium, and belongs to the technical field of computers. The method comprises the following steps: acquiring plaintext information to be encrypted, and determining the encrypted objects corresponding to the plaintext information to be encrypted; in response to there being a plurality of encrypted objects, acquiring a plurality of first voiceprint features from the plurality of encrypted objects, wherein each encrypted object in the plurality of encrypted objects corresponds to one first voiceprint feature; determining a first mixed voiceprint feature based on the plurality of first voiceprint features; and encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information. By splicing the first voiceprint features of the plurality of encrypted objects, a first mixed voiceprint feature carrying the voiceprint characteristics of all of the encrypted objects is obtained. Information encryption based on a plurality of encrypted objects can therefore be completed with one first mixed voiceprint feature, which guarantees high confidentiality while improving encryption efficiency.

Description

Information encryption method, information decryption method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an information encryption method, an information decryption method, an information encryption device, an information decryption device and a storage medium.
Background
With the development of computer technology, the ways of encrypting information have gradually diversified. In addition to using a conventional password as a key, information owned by an encrypted object can also be encrypted based on a biometric feature of that encrypted object.
Disclosure of Invention
The embodiment of the application provides an information encryption method, an information decryption method, an information encryption device, an information decryption device and a storage medium, which can be used for solving the problems in the related art. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides an information encryption method, where the method includes:
acquiring plaintext information to be encrypted, and determining an encrypted object corresponding to the plaintext information to be encrypted;
in response to there being a plurality of encrypted objects, acquiring a plurality of first voiceprint features from the plurality of encrypted objects, wherein each encrypted object in the plurality of encrypted objects corresponds to one first voiceprint feature;
determining a first mixed voiceprint feature based on the plurality of first voiceprint features;
and encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
In one possible implementation, the determining a first mixed voiceprint feature based on the plurality of first voiceprint features includes:
determining a first segmentation order;
segmenting each first voiceprint feature in the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein each first voiceprint feature in the plurality of first voiceprint features corresponds to one first segmentation result;
and splicing the plurality of first segmentation results to obtain the first mixed voiceprint feature.
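As one possible reading of this segment-and-splice step (the claim does not fix the exact scheme), the sketch below cuts each encrypted object's feature vector into equal-length segments and splices one segment per object according to a segmentation order. The function name, the equal-length segmentation, and the use of the order as a permutation of object indices are illustrative assumptions, not the patented construction:

```python
from typing import List

def mix_voiceprints(features: List[List[float]], order: List[int]) -> List[float]:
    """Splice one segment per encrypted object into a mixed voiceprint feature.

    Each feature vector is conceptually cut into len(features) segments; the
    segment contributed by the object at position `pos` of the segmentation
    order is that object's `pos`-th segment (a hypothetical choice).
    """
    n = len(features)
    assert sorted(order) == list(range(n)), "order must be a permutation of object indices"
    mixed: List[float] = []
    for pos, obj in enumerate(order):
        feat = features[obj]
        seg_len = len(feat) // n                     # equal-length segments
        start = pos * seg_len
        end = (pos + 1) * seg_len if pos < n - 1 else len(feat)
        mixed.extend(feat[start:end])                # splice in segmentation order
    return mixed
```

Because the same order reproduces the same mixed feature, the decryption side can rebuild an identical vector only if it knows both the voiceprints and the segmentation order.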
In one possible implementation manner, the splicing the plurality of first segmentation results to obtain the first mixed voiceprint feature includes:
splicing the plurality of first segmentation results according to the first segmentation order to obtain the first mixed voiceprint feature.
In a possible implementation manner, the encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information includes:
inputting the plaintext information and the first mixed voiceprint feature into an encryption model, encrypting the plaintext information based on the encryption model, and outputting the ciphertext information.
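The patent leaves the encryption model unspecified. As a minimal illustrative stand-in (not the patented model), the sketch below quantizes the mixed voiceprint feature, expands it into a keystream with SHA-256, and XORs it against the plaintext; all names and the quantization precision are assumptions:

```python
import hashlib
from typing import Sequence

def derive_keystream(mixed_voiceprint: Sequence[float], length: int) -> bytes:
    # Quantize the feature vector so tiny float noise does not change the key,
    # then expand it into a keystream by counter-mode SHA-256 hashing.
    seed = ",".join(f"{v:.3f}" for v in mixed_voiceprint).encode()
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor_crypt(data: bytes, mixed_voiceprint: Sequence[float]) -> bytes:
    # XOR with the derived keystream; applying it twice restores the input.
    ks = derive_keystream(mixed_voiceprint, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because XOR with the same keystream is its own inverse, the decryption side only needs to rebuild the identical second mixed voiceprint feature; a production system would instead feed the derived key into an authenticated cipher such as AES-GCM.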
In one possible implementation, the obtaining a plurality of first voiceprint features from a plurality of encrypted objects includes:
acquiring a plurality of first audio data from the plurality of encrypted objects, wherein any encrypted object in the plurality of encrypted objects corresponds to at least one first audio data;
and extracting the voiceprint characteristics of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint characteristics.
In another aspect, an embodiment of the present application provides an information decryption method, where the method includes:
acquiring ciphertext information to be decrypted, and determining the decryption objects corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature obtained from the first voiceprint features of a plurality of encryption objects;
in response to there being a plurality of decryption objects, acquiring a plurality of second voiceprint features from the plurality of decryption objects, wherein each decryption object in the plurality of decryption objects corresponds to one second voiceprint feature;
determining a second mixed voiceprint feature based on the plurality of second voiceprint features;
and decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
In one possible implementation, the determining a second mixed voiceprint feature based on the plurality of second voiceprint features includes:
determining a second segmentation order;
segmenting each second voiceprint feature in the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein each second voiceprint feature in the plurality of second voiceprint features corresponds to one second segmentation result;
and splicing the plurality of second segmentation results to obtain the second mixed voiceprint feature.
In a possible implementation manner, the splicing the plurality of second segmentation results to obtain the second mixed voiceprint feature includes:
splicing the plurality of second segmentation results according to the second segmentation order to obtain the second mixed voiceprint feature.
In a possible implementation manner, the decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information includes:
in response to the ciphertext information to be decrypted having been obtained based on an encryption model, determining a decryption model corresponding to the encryption model;
and inputting the ciphertext information and the second mixed voiceprint feature into the decryption model, decrypting the ciphertext information based on the decryption model, and outputting the plaintext information.
In one possible implementation, the obtaining a plurality of second voiceprint features from a plurality of decryption objects includes:
acquiring a plurality of second audio data from the plurality of decryption objects, wherein any decryption object in the plurality of decryption objects corresponds to at least one second audio data;
and extracting the voiceprint features of each second audio data in the plurality of second audio data to obtain a plurality of second voiceprint features.
In another aspect, an information encryption apparatus is provided, the apparatus including:
an acquisition module, configured to acquire plaintext information to be encrypted and determine the encrypted objects corresponding to the plaintext information to be encrypted;
the acquisition module being further configured to acquire, in response to there being a plurality of encrypted objects, a plurality of first voiceprint features from the plurality of encrypted objects, wherein each encrypted object in the plurality of encrypted objects corresponds to one first voiceprint feature;
a determining module, configured to determine a first mixed voiceprint feature based on the plurality of first voiceprint features;
and an encryption module, configured to encrypt the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
In a possible implementation manner, the determining module is configured to determine a first segmentation order; segment each first voiceprint feature in the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein each first voiceprint feature corresponds to one first segmentation result; and splice the plurality of first segmentation results to obtain the first mixed voiceprint feature.
In a possible implementation manner, the determining module is configured to splice the plurality of first segmentation results according to the first segmentation order to obtain the first mixed voiceprint feature.
In a possible implementation manner, the encryption module is configured to input the plaintext information and the first mixed voiceprint feature to an encryption model, encrypt the plaintext information based on the encryption model, and output the ciphertext information.
In a possible implementation manner, the obtaining module is configured to obtain a plurality of first audio data from the plurality of encrypted objects, where any encrypted object in the plurality of encrypted objects corresponds to at least one first audio data; and extracting the voiceprint features of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint features.
In another aspect, there is provided an information decrypting apparatus, the apparatus including:
an acquisition module, configured to acquire ciphertext information to be decrypted and determine the decryption objects corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature obtained from the first voiceprint features of a plurality of encryption objects;
the acquisition module being further configured to acquire, in response to there being a plurality of decryption objects, a plurality of second voiceprint features from the plurality of decryption objects, wherein each decryption object in the plurality of decryption objects corresponds to one second voiceprint feature;
a determining module, configured to determine a second mixed voiceprint feature based on the plurality of second voiceprint features;
and a decryption module, configured to decrypt the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
In a possible implementation manner, the determining module is configured to determine a second segmentation order; segment each second voiceprint feature in the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein each second voiceprint feature corresponds to one second segmentation result; and splice the plurality of second segmentation results to obtain the second mixed voiceprint feature.
In a possible implementation manner, the determining module is configured to splice the plurality of second segmentation results according to the second segmentation order to obtain the second mixed voiceprint feature.
In a possible implementation manner, the decryption module is configured to: in response to the ciphertext information to be decrypted having been obtained based on an encryption model, determine a decryption model corresponding to the encryption model; and input the ciphertext information and the second mixed voiceprint feature into the decryption model, decrypt the ciphertext information based on the decryption model, and output the plaintext information.
In a possible implementation manner, the obtaining module is configured to obtain a plurality of second audio data from the plurality of decryption objects, where any decryption object in the plurality of decryption objects corresponds to at least one second audio data; and extracting the voiceprint characteristics of each second audio data in the plurality of second audio data to obtain a plurality of second voiceprint characteristics.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one computer program is stored in the memory, and the at least one computer program is loaded by the processor and executed to enable the computer device to implement any one of the above-mentioned information encryption methods or any one of the above-mentioned information decryption methods.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor, so as to make a computer implement any one of the above-mentioned information encryption methods or implement any one of the above-mentioned information decryption methods.
In another aspect, a computer program product or a computer program is also provided, comprising computer instructions stored in a computer readable storage medium. A processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes any one of the information encryption methods described above or implements any one of the information decryption methods described above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
By splicing the first voiceprint features of the plurality of encrypted objects, a first mixed voiceprint feature carrying the voiceprint characteristics of all of the encrypted objects is obtained. Because information encryption based on a plurality of encrypted objects can be completed through this single first mixed voiceprint feature, encryption efficiency is improved while high confidentiality is ensured.
When the ciphertext information is decrypted, the acquired second mixed voiceprint feature carries the voiceprint features of a plurality of decryption objects, so information decryption for those decryption objects can be achieved based on one second mixed voiceprint feature, improving decryption efficiency without lowering the difficulty of unauthorized decryption.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of an information encryption method provided in an embodiment of the present application;
fig. 3 is a flowchart of an information decryption method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an information encryption apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an information decryption apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a network device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
An embodiment of the present application provides an information encryption method and an information decryption method, please refer to fig. 1, which shows a schematic diagram of an implementation environment of the method provided in the embodiment of the present application. The implementation environment may include: a terminal 11 and a server 12.
The terminal 11 and the server 12 may independently implement the information encryption and decryption methods provided in the embodiments of the present application. The terminal 11 and the server 12 may also implement the information encryption and decryption methods provided in the embodiments of the present application through interaction. For example, the terminal 11 is installed with an application program capable of acquiring plaintext information to be encrypted, and after the application program acquires the plaintext information to be encrypted, the acquired plaintext information to be encrypted may be sent to the server 12, and the server 12 encrypts the plaintext information based on the method provided in this embodiment of the present application to obtain ciphertext information. The server 12 sends the ciphertext information to the terminal 11, and the terminal 11 decrypts the ciphertext information by the method provided by the embodiment of the application to obtain the plaintext information. Or, the terminal 11 is installed with an application program capable of acquiring plaintext information to be encrypted, after the application program acquires the plaintext information to be encrypted, the terminal 11 encrypts the plaintext information based on the method provided in the embodiment of the present application, and after ciphertext information is obtained, the terminal 11 sends the ciphertext information to the server 12, and the server 12 decrypts the ciphertext information based on the method provided in the embodiment of the present application, so as to obtain the plaintext information.
Alternatively, the terminal 11 may be any electronic product capable of performing man-machine interaction with a user through one or more modes of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction or handwriting equipment, such as a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a PPC (Pocket PC, palmtop), a tablet Computer, a smart car, a smart television, a smart speaker, and the like. The server 12 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center. The terminal 11 and the server 12 establish a communication connection through a wired or wireless network.
It should be understood by those skilled in the art that the above-mentioned terminal 11 and server 12 are only examples, and other existing or future terminals or servers may be suitable for the present application and are included within the scope of the present application and are hereby incorporated by reference.
Based on the implementation environment shown in fig. 1, an embodiment of the present application provides an information encryption method, which may be executed by a terminal or a server. Taking the method as an example for being applied to a terminal, the flow of the method is shown in fig. 2 and includes steps 201 to 204.
In step 201, plaintext information to be encrypted is obtained, and an encrypted object corresponding to the plaintext information to be encrypted is determined.
The plaintext information is not limited in the embodiment of the application, and the plaintext information can be any information with a secrecy requirement. Illustratively, the plaintext information may be a picture with a security requirement, a text with a security requirement, an audio with a security requirement, or a video with a security requirement.
Optionally, the terminal obtains the plaintext information through the collecting device. For example, a conference is held at place a, the terminal records a conference video based on the video capture device, and the conference video recorded has a security requirement because the conference is not open to the outside, and the conference video is plaintext information to be encrypted. Optionally, the terminal obtains the plaintext information stored in the storage space by accessing the storage space. The storage space may be a storage space of the terminal, and may also be a storage space of a server communicatively connected to the terminal. For example, the user a and the user B complete a piece of writing work of a text a which is not publicized to the outside, the text a is stored in a storage space of the terminal, and the terminal obtains the text a by accessing the storage space, wherein the text a is plaintext information.
In a possible implementation manner, when acquiring plaintext information to be encrypted, an encryption object corresponding to the plaintext information to be encrypted also needs to be determined. Illustratively, the encrypted object corresponding to the plaintext information to be encrypted refers to an owner of the plaintext information, that is, an object that needs to keep the plaintext information secret. Taking the conference video shown in the above embodiment as plaintext information as an example, the encrypted object corresponding to the plaintext information is a conference participant. Taking the text a shown in the above embodiment as plaintext information as an example, the encrypted object corresponding to the plaintext information is a writer of the text a, that is, the user a and the user B.
It should be noted that, the above examples are intended to illustrate the relationship between the encrypted object and the plaintext information, and not to limit the encrypted object, and the encrypted object may be any object that needs to keep the plaintext information secret, and the number of the encrypted objects may be any number, which is not limited in the embodiments of the present application.
In step 202, in response to there being a plurality of encrypted objects, a plurality of first voiceprint features from the plurality of encrypted objects are obtained, wherein each encrypted object in the plurality of encrypted objects corresponds to one first voiceprint feature.
In a possible implementation manner, one piece of plaintext information corresponds to a plurality of encrypted objects; in this case, a plurality of first voiceprint features of the plurality of encrypted objects need to be acquired. The manner of obtaining the plurality of first voiceprint features includes, but is not limited to: acquiring a plurality of first audio data from the plurality of encrypted objects, wherein each encrypted object in the plurality of encrypted objects corresponds to at least one first audio data; and extracting the voiceprint feature of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint features.
Illustratively, the terminal provides an audio data input interface, the encrypted object inputs audio data based on this interface, and the terminal takes the received audio data as the first audio data. The audio data may be collected through a MIC (microphone) or through other devices capable of collecting audio data, which is not limited in this embodiment of the present application. The microphone used for inputting the audio data may be external to the terminal or built into the terminal; an external microphone may be connected by wire or wirelessly.
It should be noted that, when the encryption object inputs audio data based on the audio data input interface, the spoken text may be any text or a text provided by the terminal. For example, the terminal displays text on a screen for prompting the encryption object to input audio data while providing an audio data input interface. The text provided by the terminal is not limited in the embodiment of the application, and can be the name of plaintext information or randomly generated characters. The terminal prompts the encrypted object when the encrypted object inputs the first audio data by displaying the text on the screen, so that the thinking time of the encrypted object when the encrypted object inputs the first audio data is reduced, and the efficiency of acquiring a plurality of first audio data by the terminal is improved. Optionally, when the encrypted object inputs audio data based on the audio data input interface, the audio data may be input for multiple times, the quality of the input audio data is improved by increasing the number of times of inputting the audio data, and the terminal obtains the first audio data with higher quality based on the input audio data. The contents of the audio data that are input multiple times by the same encryption object may be the same or different, and this is not limited in the embodiments of the present application. The content of the audio data refers to a text read when the audio data is input by the encryption object.
In a possible implementation manner, since the first audio data may include non-voice audio data such as environmental noise, the terminal may preprocess the obtained first audio data before extracting the voiceprint feature of each of the plurality of first audio data. The embodiments of the present application do not limit the preprocessing manner; it may be implemented based on VAD (Voice Activity Detection) technology. For example, the start position and the end position of the voice audio data in the first audio data are located based on the VAD technique, and the first audio data is segmented according to the start position and the end position to obtain the voice audio data in the first audio data. The preprocessing separates the voice audio data from the non-voice audio data in the first audio data, so that the voiceprint feature of the first audio data can subsequently be extracted more accurately.
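As a concrete illustration of such preprocessing, the sketch below trims leading and trailing silence with a simple frame-energy VAD. The function name, frame length, and threshold are assumptions for illustration, not the embodiment's actual detector:

```python
import numpy as np

def trim_silence(signal, frame_len=160, threshold=0.01):
    """Energy-based VAD sketch: keep the span from the first to the last
    frame whose mean energy exceeds `threshold` (names are illustrative)."""
    n_frames = len(signal) // frame_len
    energies = [
        float(np.mean(signal[i * frame_len:(i + 1) * frame_len] ** 2))
        for i in range(n_frames)
    ]
    voiced = [i for i, e in enumerate(energies) if e > threshold]
    if not voiced:
        return signal[:0]  # no speech detected
    start, end = voiced[0] * frame_len, (voiced[-1] + 1) * frame_len
    return signal[start:end]
```

A real system would use a trained VAD (e.g. a statistical or neural model) rather than a fixed energy threshold, but the locate-then-cut flow is the same.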
The embodiments of the present application do not limit the manner of extracting the voiceprint feature of each piece of first audio data: MFCCs (Mel Frequency Cepstral Coefficients) may be extracted from the first audio data as the first voiceprint feature, LPCCs (Linear Predictive Cepstral Coefficients) may be extracted from the first audio data as the first voiceprint feature, or other voiceprint feature extraction techniques may be used. It should be noted that, in the case where a plurality of pieces of first audio data are input by the same encryption object as described in the above embodiment, the voiceprint feature of the encryption object is independent of the content of the first audio data. Therefore, the voiceprint features extracted from the plurality of pieces of first audio data from the same encryption object are the same; that is, the plurality of pieces of first audio data of the same encryption object correspond to one first voiceprint feature.
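A minimal MFCC extraction pipeline can be sketched as follows. This is illustrative only: the parameter values are assumptions, and a production system would typically use a dedicated speech library rather than this hand-rolled version:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13,
         frame_len=400, hop=160):
    """Sketch of MFCC extraction: frame -> window -> power spectrum ->
    mel filterbank -> log -> DCT-II (keep the first n_ceps coefficients)."""
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank between 0 Hz and sr/2
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for j in range(n_mels):
        l, c, r = bins[j], bins[j + 1], bins[j + 2]
        fbank[j, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II to decorrelate the log mel energies
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * k + 1) / (2 * n_mels))
    return logmel @ dct.T
```

The resulting matrix has one row of cepstral coefficients per frame; a voiceprint feature would then typically be a statistic or embedding computed over these rows.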
In step 203, a first hybrid voiceprint feature is determined based on the first plurality of voiceprint features.
The manner of determining the first mixed voiceprint feature based on the plurality of first voiceprint features includes, but is not limited to: determining a first slicing order; slicing each of the plurality of first voiceprint features according to the first slicing order to obtain a plurality of first slicing results, where any one of the plurality of first voiceprint features corresponds to one first slicing result; and splicing the plurality of first slicing results to obtain the first mixed voiceprint feature. Exemplary ways to determine the first slicing order include, but are not limited to, the following two.
In the first determination mode, the terminal determines the first slicing order based on the information of each first voiceprint feature.
The information of the first voiceprint feature may be the acquisition time of the first voiceprint feature, the name of the first voiceprint feature, or other information, which is not limited in the embodiments of the present application. When the information is the acquisition time, the terminal may arrange the first voiceprint features from earliest to latest acquisition time and take the arrangement result as the first slicing order. When the information is the name, the terminal may arrange the first voiceprint features according to a preset alphabetical list based on their names and take the arrangement result as the first slicing order, where the alphabetical list may be set based on empirical values. Regarding how the name of a first voiceprint feature is obtained: optionally, the terminal provides an information input interface alongside the audio data input interface; after the encryption object inputs the first audio data based on the audio data input interface, it may input the name of the first audio data based on the information input interface, and the terminal takes the received name as the name of the first voiceprint feature corresponding to that first audio data. Of course, the encryption object may also input the name of the first audio data based on the information input interface before inputting the first audio data.
It should be noted that, in the case where a plurality of pieces of first audio data are input by the same encryption object and one first voiceprint feature is obtained from them as shown in the above embodiment, if the names input by the encryption object for the pieces of first audio data differ, the terminal may randomly select one of those names as the name of the encryption object's first voiceprint feature, or may use the last input name.
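The first determination mode can be sketched as a simple sort over per-feature metadata. The dictionary fields `acquired_at` and `name` are assumptions for illustration, not the embodiment's actual data layout:

```python
def first_slicing_order(features, by="time"):
    """Order first voiceprint features by acquisition time (earliest first)
    or by name, and return the names in that slicing order."""
    key = (lambda f: f["acquired_at"]) if by == "time" else (lambda f: f["name"])
    return [f["name"] for f in sorted(features, key=key)]

feats = [
    {"name": "b-host", "acquired_at": "2022-02-28T10:05:00"},
    {"name": "a-guest", "acquired_at": "2022-02-28T10:07:00"},
]
print(first_slicing_order(feats, by="time"))  # → ['b-host', 'a-guest']
print(first_slicing_order(feats, by="name"))  # → ['a-guest', 'b-host']
```

ISO-8601 timestamps sort lexicographically in chronological order, so a plain string sort suffices here; a custom alphabetical list, as the text mentions, would replace the `key` function.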
In the second determination mode, the terminal determines the first slicing order based on user requirements.
The user may be an encryption object or another object related to the plaintext information. Illustratively, after acquiring the plurality of first voiceprint features of the plurality of encryption objects, the terminal displays the names of the acquired first voiceprint features on a screen; the user clicks the names in sequence, and the terminal takes the click order as the first slicing order. Optionally, the terminal provides an information input interface, the user inputs the names of the first voiceprint features in sequence based on the information input interface, and the terminal takes the input order as the first slicing order. Optionally, the terminal provides multiple sorting methods together with controls corresponding to them, and the user determines the first slicing order by triggering the control corresponding to a sorting method. A control may be triggered by voice operation or by click operation, which is not limited in the embodiments of the present application. The sorting methods provided by the terminal are similar to those shown in the first determination mode and are not described again here.
The manner of slicing a first voiceprint feature is not limited. Optionally, the portion of the first voiceprint feature from (n-1)/n of its length to the end (position 1) is kept, where n is a positive integer denoting the position of the first voiceprint feature in the slicing order. For example, the first voiceprint feature A (113344) is the first to be sliced, that is, n = 1; its first slicing result spans from the start position (0) to the end position (1) of A, so the first slicing result corresponding to A is 113344. For another example, the first voiceprint feature B (112244) is the second to be sliced, that is, n = 2; its first slicing result spans from the 1/2 position of B to the end position (1), so the first slicing result corresponding to B is 244. Because the slicing manner of each first voiceprint feature is determined by its position in the slicing order, applying different slicing manners to different first voiceprint features improves the confidentiality of the first mixed voiceprint feature obtained by splicing the plurality of first slicing results. In addition, the later a first voiceprint feature appears in the slicing order, the less voiceprint feature data it retains, which effectively controls the total amount of voiceprint feature data in the plurality of first slicing results and facilitates subsequent operations.
After each first voiceprint feature is sliced to obtain the plurality of first slicing results, the plurality of first slicing results can be spliced to obtain the first mixed voiceprint feature. In a possible implementation manner, the order of splicing the plurality of first slicing results is the same as the first slicing order: the terminal splices the plurality of first slicing results according to the first slicing order to obtain the first mixed voiceprint feature, as shown in formula 1.
First mixed voiceprint feature = F(t₁) + F(t₂) + … + F(tₙ₋₁) + F(tₙ)   (formula 1)

where n is a positive integer denoting the position in the slicing order, tₙ is the n-th first voiceprint feature to be sliced, F(tₙ) is the portion of tₙ from (n-1)/n of its length to the end, and "+" denotes splicing, so that F(tₙ₋₁) + F(tₙ) is the concatenation of F(tₙ₋₁) and F(tₙ). Taking n = 2 with F(t₁) = 11 and F(t₂) = 22, F(t₁) + F(t₂) = 1122, and the first mixed voiceprint feature is 1122.
Of course, the terminal may also splice the plurality of first slicing results in another order, which is not limited in the embodiments of the present application. By slicing each first voiceprint feature and splicing only the selected portions, the amount of voiceprint feature data is reduced, which improves splicing efficiency. At the same time, the length of the spliced first mixed voiceprint feature is controlled; controlling this length improves encryption efficiency when the plaintext information is subsequently encrypted based on the first mixed voiceprint feature. In addition, the first mixed voiceprint feature obtained by splicing selected portions of the first voiceprint features offers higher confidentiality than a result obtained by splicing multiple unsliced first voiceprint features.
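The slicing rule and formula 1 can be sketched end to end as follows, treating each first voiceprint feature as a string for readability (the representation and function name are illustrative assumptions):

```python
def mix_voiceprints(features):
    """Sketch of formula 1: the n-th feature in the slicing order keeps its
    portion from (n-1)/n of its length to the end, and the kept slices are
    concatenated in that same order."""
    parts = []
    for n, feat in enumerate(features, start=1):
        start = (n - 1) * len(feat) // n  # index corresponding to (n-1)/n
        parts.append(feat[start:])
    return "".join(parts)

# Reproducing the examples above: A (113344) is sliced first, B (112244) second
print(mix_voiceprints(["113344", "112244"]))  # → 113344244
```

The first feature is kept whole (n = 1 keeps the span 0 to 1), while the second keeps only its last half, matching the worked examples for features A and B.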
In step 204, the plaintext information to be encrypted is encrypted based on the first mixed voiceprint feature, and ciphertext information corresponding to the plaintext information is obtained.
Encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature means encrypting the plaintext information with the first mixed voiceprint feature as the key. Illustratively, the plaintext information and the first mixed voiceprint feature are input to an encryption model, the plaintext information is encrypted based on the encryption model, and the ciphertext information is output. The encryption model is not limited in the embodiments of the present application and may be any model that executes a symmetric encryption algorithm. For example, the encryption model may implement encryption based on AES (Advanced Encryption Standard); for another example, it may implement encryption based on DES (Data Encryption Standard).
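AES or DES would normally come from a cryptography library. As a self-contained stand-in, the sketch below hashes the first mixed voiceprint feature into a fixed-length key and applies a keyed XOR stream; this illustrates only the "mixed voiceprint feature as symmetric key" idea and is neither the embodiment's cipher nor secure for real use:

```python
import hashlib

def derive_key(mixed_voiceprint: str) -> bytes:
    # Hash the variable-length mixed voiceprint feature to a 256-bit key
    return hashlib.sha256(mixed_voiceprint.encode("utf-8")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a hash-chained keystream.
    # Because XOR is its own inverse, the same call also decrypts.
    stream = hashlib.sha256(key).digest()
    out = bytearray()
    for i, b in enumerate(data):
        if i and i % 32 == 0:
            stream = hashlib.sha256(stream).digest()  # extend the keystream
        out.append(b ^ stream[i % 32])
    return bytes(out)

key = derive_key("1122")  # first mixed voiceprint feature from formula 1
ciphertext = xor_cipher(b"plaintext to protect", key)
assert xor_cipher(ciphertext, key) == b"plaintext to protect"
```

In a real implementation, `xor_cipher` would be replaced by an AES call from a vetted library, with the derived digest used as the AES key.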
In summary, in the information encryption method provided in the embodiments of the present application, a first mixed voiceprint feature carrying the voiceprint characteristics of a plurality of encryption objects is obtained by splicing the first voiceprint features of the plurality of encryption objects, and slicing the first voiceprint features before splicing further improves the confidentiality of the resulting first mixed voiceprint feature. At the same time, information encryption involving a plurality of encryption objects is completed with only one first mixed voiceprint feature, which ensures high confidentiality while improving encryption efficiency.
Based on the implementation environment shown in fig. 1, an embodiment of the present application provides an information decryption method, which may be executed by a terminal or a server. Taking the method as an example for being applied to a terminal, the flow of the method is shown in fig. 3, and includes steps 301 to 304.
In step 301, ciphertext information to be decrypted is obtained and a decryption object corresponding to the ciphertext information is determined, where the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature derived from the first voiceprint features of a plurality of encryption objects.
Optionally, the process of obtaining the ciphertext information to be decrypted is detailed in the embodiment shown in fig. 2 and is not described again here. A decryption object corresponding to the ciphertext information is an object that decrypts the ciphertext information. Taking the video of a conference that is not disclosed externally as an example, conference participants who need to review the video in order to write the meeting minutes begin to decrypt the ciphertext information corresponding to the video; in this case, the participants are decryption objects corresponding to the ciphertext information. Taking as another example plaintext information that is a text written jointly by user A and user B and not disclosed externally, user C who needs to view the text begins to decrypt the corresponding ciphertext information; in this case, user C is a decryption object corresponding to the ciphertext information.
In step 302, in response to there being a plurality of decryption objects, a plurality of second voiceprint features from the plurality of decryption objects are obtained, where any one of the plurality of decryption objects corresponds to one second voiceprint feature.
In one possible implementation, the manner of obtaining the plurality of second voiceprint features includes, but is not limited to: acquiring a plurality of pieces of second audio data from the plurality of decryption objects, where any one of the plurality of decryption objects corresponds to at least one piece of second audio data; and extracting the voiceprint feature of each piece of second audio data to obtain the plurality of second voiceprint features.
Optionally, a method for acquiring a plurality of second audio data is similar to the method for acquiring a plurality of first audio data in the embodiment shown in fig. 2, and extracting the voiceprint feature of each second audio data is similar to the method for extracting the voiceprint feature of each first audio data in the embodiment shown in fig. 2, which is not repeated herein.
It should be noted that, since the extraction of the voiceprint features of the first audio data may be implemented by any of the multiple voiceprint feature extraction models shown in fig. 2, the voiceprint feature extraction model used to extract the voiceprint features of the second audio data must be consistent with the one used to extract the voiceprint features of the first audio data.
In step 303, a second hybrid voiceprint feature is determined based on the second plurality of voiceprint features.
Illustratively, a second slicing order is determined; each of the plurality of second voiceprint features is sliced according to the second slicing order to obtain a plurality of second slicing results, where any one of the plurality of second voiceprint features corresponds to one second slicing result; and the plurality of second slicing results are spliced to obtain the second mixed voiceprint feature.
The method for determining the second slicing order is similar to the method for determining the first slicing order in the embodiment shown in fig. 2 and is not repeated here. The second slicing order may be the same as or different from the first slicing order, which is not limited in the embodiments of the present application. The method for slicing each second voiceprint feature based on the second slicing order is similar to the method for slicing each first voiceprint feature based on the first slicing order in the embodiment shown in fig. 2 and is likewise not repeated here.
Regarding the splicing of the plurality of second slicing results, in one possible implementation manner, the plurality of second slicing results may be spliced according to the second slicing order to obtain the second mixed voiceprint feature. Of course, the splicing order of the second slicing results may also differ from the second slicing order, which is not limited in the embodiments of the present application.
In step 304, the ciphertext information to be decrypted is decrypted based on the second mixed voiceprint feature, so as to obtain plaintext information corresponding to the ciphertext information.
Illustratively, in response to the ciphertext information to be decrypted having been obtained based on an encryption model, a decryption model corresponding to the encryption model is determined; the ciphertext information and the second mixed voiceprint feature are input to the decryption model, the ciphertext information is decrypted based on the decryption model, and the plaintext information is output. A decryption model corresponds to an encryption model when the algorithm it executes is the inverse of the algorithm executed by the encryption model.
In a possible implementation manner, only when the second mixed voiceprint feature is consistent with the first mixed voiceprint feature used to encrypt the plaintext information can the ciphertext information be decrypted based on the second mixed voiceprint feature and the decryption model to obtain the corresponding plaintext information. That is, the plurality of decryption objects can obtain the plaintext information only when they are the same as the plurality of encryption objects corresponding to the plaintext information.
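This all-or-nothing property can be demonstrated with an illustrative stand-in cipher (an assumption for demonstration, not the embodiment's AES/DES model): decryption recovers the plaintext only when the second mixed voiceprint feature exactly matches the first.

```python
import hashlib

def derive_key(mixed_voiceprint: str) -> bytes:
    # Illustrative key derivation: hash the mixed voiceprint feature
    return hashlib.sha256(mixed_voiceprint.encode("utf-8")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric stand-in for the encryption/decryption models;
    # XOR is its own inverse, so the same call encrypts and decrypts
    stream = hashlib.sha256(key).digest()
    out = bytearray()
    for i, b in enumerate(data):
        if i and i % 32 == 0:
            stream = hashlib.sha256(stream).digest()
        out.append(b ^ stream[i % 32])
    return bytes(out)

plaintext = b"undisclosed meeting record"
ciphertext = xor_cipher(plaintext, derive_key("113344244"))

# All decryption objects present: the second mixed feature matches
assert xor_cipher(ciphertext, derive_key("113344244")) == plaintext
# One decryption object differs: the mixed feature changes, decryption fails
assert xor_cipher(ciphertext, derive_key("113399244")) != plaintext
```

Because the key is derived from the concatenated slices of every object's voiceprint, changing any one decryption object changes the second mixed voiceprint feature and thus the derived key.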
In summary, in the information decryption method provided in the embodiments of the present application, the plaintext information corresponding to the ciphertext information can be obtained only when the plurality of decryption objects are the same as the plurality of encryption objects. Therefore, when the plurality of encryption objects do not all agree to decrypt the ciphertext information, decryption cannot be completed. In addition, decryption of the ciphertext information is completed based on the second mixed voiceprint feature, which improves decryption efficiency without lowering the difficulty of obtaining the plaintext information corresponding to the ciphertext information.
Referring to fig. 4, an embodiment of the present application provides an information encryption apparatus, including: an acquisition module 401, a determination module 402 and an encryption module 403.
An obtaining module 401, configured to obtain plaintext information to be encrypted, and determine an encrypted object corresponding to the plaintext information to be encrypted;
the obtaining module 401 is further configured to obtain, in response to there being a plurality of encryption objects, a plurality of first voiceprint features from the plurality of encryption objects, where any one of the plurality of encryption objects corresponds to one first voiceprint feature;
a determining module 402 for determining a first mixed voiceprint feature based on a plurality of first voiceprint features;
the encrypting module 403 is configured to encrypt plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
Optionally, the determining module 402 is configured to determine a first slicing order; slice each of the plurality of first voiceprint features according to the first slicing order to obtain a plurality of first slicing results, where any one of the plurality of first voiceprint features corresponds to one first slicing result; and splice the plurality of first slicing results to obtain the first mixed voiceprint feature.
Optionally, the determining module 402 is configured to splice the plurality of first slicing results according to the first slicing order to obtain the first mixed voiceprint feature.
Optionally, the encrypting module 403 is configured to input the plaintext information and the first mixed voiceprint feature to the encryption model, encrypt the plaintext information based on the encryption model, and output the ciphertext information.
Optionally, the obtaining module 401 is configured to obtain a plurality of first audio data from a plurality of encrypted objects, where any encrypted object in the plurality of encrypted objects corresponds to at least one first audio data; and extracting the voiceprint features of each first audio data in the plurality of first audio data to obtain a plurality of first voiceprint features.
The device obtains a first mixed voiceprint characteristic with the voiceprint characteristics of the multiple encrypted objects by splicing the first voiceprint characteristics of the multiple encrypted objects. Because the information encryption based on a plurality of encrypted objects can be completed through one first mixed voiceprint feature, the encryption efficiency is improved while high confidentiality is ensured.
Referring to fig. 5, an embodiment of the present application provides an information decryption apparatus, including: an acquisition module 501, a determination module 502 and a decryption module 503.
The obtaining module 501 is configured to obtain ciphertext information to be decrypted and determine a decryption object corresponding to the ciphertext information, where the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature derived from the first voiceprint features of a plurality of encryption objects;
the obtaining module 501 is further configured to obtain, in response to there being a plurality of decryption objects, a plurality of second voiceprint features from the plurality of decryption objects, where any one of the plurality of decryption objects corresponds to one second voiceprint feature;
a determining module 502 for determining a second hybrid voiceprint feature based on the second plurality of voiceprint features;
the decryption module 503 is configured to decrypt the ciphertext information to be decrypted based on the second mixed voiceprint feature, so as to obtain plaintext information corresponding to the ciphertext information.
Optionally, the determining module 502 is configured to determine a second slicing order; slice each of the plurality of second voiceprint features according to the second slicing order to obtain a plurality of second slicing results, where any one of the plurality of second voiceprint features corresponds to one second slicing result; and splice the plurality of second slicing results to obtain the second mixed voiceprint feature.
Optionally, the determining module 502 is configured to splice the plurality of second slicing results according to the second slicing order to obtain the second mixed voiceprint feature.
Optionally, the decryption module 503 is configured to determine, in response to that the ciphertext information to be decrypted is obtained based on the encryption model, a decryption model corresponding to the encryption model; and inputting the ciphertext information and the second mixed voiceprint characteristic into the decryption model, decrypting the ciphertext information based on the decryption model, and outputting plaintext information.
Optionally, the obtaining module 501 is configured to obtain a plurality of second audio data from a plurality of decryption objects, where any decryption object in the plurality of decryption objects corresponds to at least one second audio data; and extracting the voiceprint features of each second audio data in the plurality of second audio data to obtain a plurality of second voiceprint features.
When the device decrypts the ciphertext information, the acquired second mixed voiceprint feature has the voiceprint characteristics of a plurality of decryption objects, the information decryption aiming at the plurality of decryption objects can be realized based on the second mixed voiceprint feature, and the decryption efficiency is improved while the decryption difficulty is not changed.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the functional modules above is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not described again here.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application. The server may vary considerably in configuration and performance, and may include one or more processors (CPUs) 601 and one or more memories 602, where at least one computer program is stored in the one or more memories 602 and is loaded and executed by the one or more processors 601 to enable the server to implement the information encryption and information decryption methods provided by the method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing the functions of the device, which are not described here.
Fig. 7 is a schematic structural diagram of a network device according to an embodiment of the present application. The device may be a terminal, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. A terminal may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, a terminal includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit) which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 702 is used for storing at least one instruction, which is used for being executed by the processor 701, so as to enable the terminal to implement the information encryption and information decryption method provided by the method embodiment in the present application.
In some embodiments, the terminal may further include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 704, a display screen 705, a camera assembly 706, an audio circuit 707, a positioning component 708, and a power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display screen 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 705 may be one, disposed on the front panel of the terminal; in other embodiments, the display 705 may be at least two, respectively disposed on different surfaces of the terminal or in a folded design; in other embodiments, the display 705 may be a flexible display, disposed on a curved surface or on a folded surface of the terminal. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 701 for processing, or input them to the radio frequency circuit 704 to realize voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be respectively disposed at different parts of the terminal. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the terminal. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power supply 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal. For example, the acceleration sensor 711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 701 may control the display screen 705 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used to collect game or user motion data.
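As an illustration of the orientation logic described above (a minimal sketch: the axis convention and comparison rule are assumptions for illustration, not taken from this application), the landscape/portrait decision can be made from the gravity components:

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a UI orientation from the gravity components (in m/s^2)
    on the terminal's x (short edge) and y (long edge) axes.
    Axis convention and comparison rule are illustrative assumptions."""
    # Gravity dominates the y axis when the device is held upright,
    # and the x axis when the device is turned on its side.
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

print(choose_orientation(0.5, 9.7))   # device upright -> portrait
print(choose_orientation(9.7, 0.5))   # device on its side -> landscape
```

A real implementation would additionally debounce the decision (e.g., with hysteresis) so the UI does not flip while the device is near diagonal.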
The gyro sensor 712 may detect the body direction and rotation angle of the terminal, and may cooperate with the acceleration sensor 711 to capture the user's 3D actions on the terminal. Based on the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed on a side frame of the terminal and/or beneath the display screen 705. When the pressure sensor 713 is disposed on a side frame of the terminal, it can detect a holding signal of the user on the terminal, and the processor 701 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed beneath the display screen 705, the processor 701 controls operability controls on the UI according to the pressure applied by the user to the display screen 705. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a user's fingerprint, and the processor 701 identifies the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the user according to the collected fingerprint. When the user's identity is identified as trusted, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal. When a physical key or vendor logo is provided on the terminal, the fingerprint sensor 714 may be integrated with the physical key or vendor logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the display screen 705 is increased; when the ambient light intensity is low, the display brightness of the display screen 705 is adjusted down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
The proximity sensor 716, also known as a distance sensor, is typically disposed on the front panel of the terminal. The proximity sensor 716 is used to collect the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the terminal gradually decreases, the processor 701 controls the display screen 705 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 716 detects that the distance between the user and the front face of the terminal gradually increases, the processor 701 controls the display screen 705 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 7 does not constitute a limitation of the terminal, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer device is also provided, the computer device comprising a processor and a memory, the memory having at least one computer program stored therein. The at least one computer program is loaded and executed by one or more processors to cause the computer device to implement any one of the above-described information encryption methods or to implement any one of the above-described information decryption methods.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor of a computer device to cause the computer to implement any one of the above-described information encryption methods or any one of the above-described information decryption methods.
In one possible implementation, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements any one of the above-described information encryption methods or any one of the above-described information decryption methods.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, displayed data, etc.), and signals referred to in this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the plaintext information referred to in this application is obtained with full authorization.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above description is only an exemplary embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. An information encryption method, characterized in that the method comprises:
acquiring plaintext information to be encrypted, and determining an encrypted object corresponding to the plaintext information to be encrypted;
in response to there being a plurality of encrypted objects, acquiring a plurality of first voiceprint features from the plurality of encrypted objects, wherein any encrypted object in the plurality of encrypted objects corresponds to one first voiceprint feature;
determining a first mixed voiceprint feature based on the plurality of first voiceprint features;
and encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
2. The method of claim 1, wherein the determining a first mixed voiceprint feature based on the plurality of first voiceprint features comprises:
determining a first segmentation order;
segmenting each first voiceprint feature in the plurality of first voiceprint features according to the first segmentation order to obtain a plurality of first segmentation results, wherein any first voiceprint feature in the plurality of first voiceprint features corresponds to one first segmentation result;
and splicing the plurality of first segmentation results to obtain the first mixed voiceprint feature.
3. The method of claim 2, wherein the splicing the plurality of first segmentation results to obtain the first mixed voiceprint feature comprises:
and splicing the plurality of first segmentation results according to the first segmentation order to obtain the first mixed voiceprint feature.
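As a concrete illustration of the segment-and-splice construction in claims 2 and 3 (a minimal sketch under assumed details: the claims do not specify the segmentation rule, so the fixed-length slicing and round-robin interleaving below are illustrative choices, not the claimed method), a mixed voiceprint feature could be built like this:

```python
from typing import List

def mix_voiceprints(features: List[List[float]],
                    order: List[int],
                    seg_len: int = 4) -> List[float]:
    """Cut each voiceprint feature vector into fixed-length segments,
    then splice the segments together following the given order.
    seg_len and the interleaving scheme are illustrative assumptions."""
    # Cut every feature vector into segments of seg_len values.
    segmented = [
        [f[i:i + seg_len] for i in range(0, len(f), seg_len)]
        for f in features
    ]
    mixed: List[float] = []
    rounds = max(len(s) for s in segmented)
    # Splice: visit the features in the segmentation order, round by round.
    for r in range(rounds):
        for idx in order:
            if r < len(segmented[idx]):
                mixed.extend(segmented[idx][r])
    return mixed

a = [1.0] * 8   # first encrypted object's voiceprint feature
b = [2.0] * 8   # second encrypted object's voiceprint feature
# With order=[1, 0], segments of b and a alternate in the mixed feature.
print(mix_voiceprints([a, b], order=[1, 0]))
```

Because every encryption object's feature contributes segments, the resulting key material cannot be reproduced from any single object's voiceprint alone.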
4. The method according to any one of claims 1 to 3, wherein the encrypting the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information comprises:
and inputting the plaintext information and the first mixed voiceprint feature into an encryption model, encrypting the plaintext information based on the encryption model, and outputting the ciphertext information.
5. The method of any of claims 1-3, wherein obtaining a plurality of first voiceprint features from a plurality of encrypted objects comprises:
acquiring a plurality of first audio data from the plurality of encrypted objects, wherein any encrypted object in the plurality of encrypted objects corresponds to at least one first audio data;
and extracting the voiceprint features of each first audio data in the plurality of first audio data to obtain the plurality of first voiceprint features.
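To make the feature extraction in claim 5 concrete (a simplified sketch: real systems use dedicated voiceprint models such as i-vectors or d-vectors, and the frame length and FFT-based spectrum below are illustrative assumptions, not the extraction method of this application), a per-object feature could be derived from raw audio data as follows:

```python
import numpy as np

def extract_voiceprint(audio: np.ndarray,
                       frame_len: int = 256,
                       n_bins: int = 16) -> np.ndarray:
    """Average log-magnitude spectrum over frames: a crude stand-in
    for a voiceprint feature extractor (illustrative only)."""
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))  # per-frame magnitude spectrum
    feature = np.log1p(spectra).mean(axis=0)       # average over time
    return feature[:n_bins]                        # fixed-length feature vector

rng = np.random.default_rng(0)
audio = rng.standard_normal(4096)  # stand-in for one piece of first audio data
print(extract_voiceprint(audio).shape)  # (16,)
```

The fixed output length matters here: the later segment-and-splice step assumes each object's feature can be cut into comparable segments.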
6. A method for decrypting information, the method comprising:
acquiring ciphertext information to be decrypted, and determining a decryption object corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature obtained from first voiceprint features of a plurality of encrypted objects;
in response to there being a plurality of decryption objects, acquiring a plurality of second voiceprint features from the plurality of decryption objects, wherein any decryption object in the plurality of decryption objects corresponds to one second voiceprint feature;
determining a second mixed voiceprint feature based on the plurality of second voiceprint features;
and decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
7. The method of claim 6, wherein the determining a second mixed voiceprint feature based on the plurality of second voiceprint features comprises:
determining a second segmentation order;
segmenting each second voiceprint feature in the plurality of second voiceprint features according to the second segmentation order to obtain a plurality of second segmentation results, wherein any second voiceprint feature in the plurality of second voiceprint features corresponds to one second segmentation result;
and splicing the plurality of second segmentation results to obtain the second mixed voiceprint feature.
8. The method of claim 7, wherein the splicing the plurality of second segmentation results to obtain the second mixed voiceprint feature comprises:
and splicing the plurality of second segmentation results according to the second segmentation order to obtain the second mixed voiceprint feature.
9. The method according to any one of claims 6 to 8, wherein the decrypting the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information comprises:
in response to the ciphertext information to be decrypted being obtained based on an encryption model, determining a decryption model corresponding to the encryption model;
and inputting the ciphertext information and the second mixed voiceprint feature into the decryption model, decrypting the ciphertext information based on the decryption model, and outputting the plaintext information.
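As an illustration of the paired encryption and decryption models in claims 4 and 9 (a minimal sketch: the application does not disclose the models' internals, so the keystream-XOR scheme below, with the mixed voiceprint feature hashed into a key, is an assumed stand-in chosen because XOR makes the decryption model the exact inverse of the encryption model):

```python
import hashlib
from itertools import cycle

def xor_with_feature_key(data: bytes, mixed_feature: list) -> bytes:
    """Derive a key by hashing the mixed voiceprint feature, then XOR.
    Applying the same function twice restores the input, so the
    'decryption model' here is the encryption model itself."""
    key = hashlib.sha256(repr(mixed_feature).encode()).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

feature = [2.0, 2.0, 1.0, 1.0]  # stand-in mixed voiceprint feature
ciphertext = xor_with_feature_key(b"plaintext information", feature)
plaintext = xor_with_feature_key(ciphertext, feature)
print(plaintext)  # b'plaintext information'
```

Decryption succeeds only if the second mixed voiceprint feature exactly matches the first, which mirrors the claims: all decryption objects must supply voiceprints that reproduce the encryption-time mixture.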
10. The method according to any one of claims 6-8, wherein said obtaining a plurality of second voiceprint features from a plurality of decryption objects comprises:
acquiring a plurality of second audio data from the plurality of decryption objects, wherein any decryption object in the plurality of decryption objects corresponds to at least one second audio data;
and extracting the voiceprint features of each second audio data in the plurality of second audio data to obtain a plurality of second voiceprint features.
11. An information encryption apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring plaintext information to be encrypted and determining an encrypted object corresponding to the plaintext information to be encrypted;
the obtaining module is further configured to obtain, in response to there being a plurality of encrypted objects, a plurality of first voiceprint features from the plurality of encrypted objects, wherein any encrypted object in the plurality of encrypted objects corresponds to one first voiceprint feature;
a determining module, configured to determine a first mixed voiceprint feature based on the plurality of first voiceprint features;
and an encryption module, configured to encrypt the plaintext information to be encrypted based on the first mixed voiceprint feature to obtain ciphertext information corresponding to the plaintext information.
12. An information decryption apparatus, characterized in that the apparatus comprises:
the obtaining module is configured to obtain ciphertext information to be decrypted and determine a decryption object corresponding to the ciphertext information to be decrypted, wherein the ciphertext information is obtained by encrypting plaintext information based on a first mixed voiceprint feature obtained from first voiceprint features of a plurality of encrypted objects;
the obtaining module is further configured to obtain, in response to there being a plurality of decryption objects, a plurality of second voiceprint features from the plurality of decryption objects, wherein any decryption object in the plurality of decryption objects corresponds to one second voiceprint feature;
a determining module, configured to determine a second mixed voiceprint feature based on the plurality of second voiceprint features;
and a decryption module, configured to decrypt the ciphertext information to be decrypted based on the second mixed voiceprint feature to obtain plaintext information corresponding to the ciphertext information.
13. A computer device comprising a processor and a memory, wherein at least one computer program is stored in the memory, the at least one computer program being loaded and executed by the processor to cause the computer device to carry out the information encryption method according to any one of claims 1 to 5 or the information decryption method according to any one of claims 6 to 10.
14. A computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to cause a computer to implement the information encryption method according to any one of claims 1 to 5 or the information decryption method according to any one of claims 6 to 10.
15. A computer program product, comprising a computer program or instructions which, when executed by a processor, cause a computer to implement the information encryption method of any one of claims 1 to 5 or the information decryption method of any one of claims 6 to 10.
CN202210185272.8A 2022-02-28 2022-02-28 Information encryption and information decryption methods, devices, equipment and storage medium Active CN114598516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210185272.8A CN114598516B (en) 2022-02-28 2022-02-28 Information encryption and information decryption methods, devices, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114598516A true CN114598516A (en) 2022-06-07
CN114598516B CN114598516B (en) 2024-04-26





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant